This application claims the benefit of priority to Chinese Patent Application No. 202211375033.5 filed with the Chinese National Intellectual Property Office on Nov. 4, 2022 and entitled “Variational Autoencoder-based Magnetic Resonance Weighted Image Synthesis Method and Device”, the entire contents of which are incorporated herein by reference.
The present application relates to the technical field of medical image processing, in particular to a variational autoencoder-based magnetic resonance weighted image synthesis method and device.
Magnetic Resonance Imaging (MRI) is a non-invasive and ionizing radiation-free medical imaging method, which is widely used in scientific research and clinical practice.
Magnetic resonance imaging relies on the polarization of protons under a high-intensity magnetic field: protons are excited into a resonant state by radio-frequency pulses and then gradually return to an equilibrium state. The above process is known as the relaxation process of protons, and magnetic resonance signals are the electromagnetic signals generated during this process. Depending on the parameters of the acquisition sequence, the magnetic resonance signals exhibit different contrast weightings, including longitudinal relaxation parameter T1 contrast weighting, transverse relaxation parameter T2 contrast weighting, proton density PD contrast weighting, etc. Therefore, magnetic resonance imaging may obtain different contrast-weighted images by varying the parameters of the acquisition sequence, and these different contrast-weighted images reflect different tissue properties. In an actual clinical examination, a variety of magnetic resonance weighted images with different contrasts are therefore often acquired, which makes the magnetic resonance examination time-consuming and places heavy stress on medical resources.
A quantitative magnetic resonance imaging method, which has evolved rapidly in recent years, offers a new idea to solve the above problems. The quantitative magnetic resonance imaging method acquires magnetic resonance quantitative parametric images of tissues, which can be used to describe the quantitative properties of the tissues. By setting appropriate acquisition parameters according to a magnetic resonance signal formula, corresponding magnetic resonance signals can be synthesized from the magnetic resonance quantitative parameters; in principle, a magnetic resonance weighted image with any contrast can be obtained. However, due to errors in the measurement of the magnetic resonance quantitative tissue parameters, a magnetic resonance weighted image obtained by the synthesis method based on the magnetic resonance signal equation has certain limitations compared to an actually acquired magnetic resonance weighted image. In addition, studies have shown that T2-FLAIR images synthesized via magnetic resonance quantitative parametric images cannot achieve complete cerebrospinal fluid suppression.
The problems present in the above formula-based synthesis process are expected to be solved with deep learning methods. Recent studies use generative adversarial networks to achieve the synthesis of magnetic resonance weighted images. A better synthesis result can be achieved by taking the acquired magnetic resonance quantitative parametric images as the input of a generator and taking actually acquired magnetic resonance weighted images as labels for the training of the generator, in cooperation with a discriminator performing adversarial training. However, the above method also has certain limitations: because the training data is limited to the contrasts of the actually acquired magnetic resonance weighted images, the deep learning method can only synthesize magnetic resonance weighted images with contrasts existing in the training data, which greatly limits the range of application of magnetic resonance quantitative parametric images in the synthesis of magnetic resonance weighted images with different contrasts.
To this end, a variational autoencoder-based magnetic resonance weighted image synthesis method and device are provided to solve the above technical problems.
The present application provides a variational autoencoder-based magnetic resonance weighted image synthesis method and device in order to solve the above technical problems.
The technical solutions adopted in the present application are as follows.
A variational autoencoder-based magnetic resonance weighted image synthesis method includes the following steps:
Further, the real magnetic resonance weighted image and the magnetic resonance quantitative parametric image in the step S1 are generated by performing a preset scanning sequence via the magnetic resonance scanner.
Further, the magnetic resonance quantitative parametric image is composed of a T1 quantitative image, a T2 quantitative image and a proton density quantitative image.
Further, the real magnetic resonance weighted image includes at least one of the following: a T1-weighted conventional image, a T2-weighted conventional image, a proton density-weighted image, a T1-weighted Flair image and a T2-weighted Flair image.
Further, step S3 specifically includes the following sub-steps:
Further, step S4 specifically includes the following sub-steps:
Further, the method of combining in the step S46 includes: splicing the contrast encoding knowledge matrix with the magnetic resonance quantitative parametric image in the training set, or splicing the contrast encoding knowledge matrix with the magnetic resonance quantitative parametric image in the training set after the latter passes through the plurality of three-dimensional convolutional layers, or adding the contrast encoding knowledge matrix to the magnetic resonance quantitative parametric image in the training set.
Further, the real magnetic resonance weighted image with the corresponding contrast in the training set used to calculate the loss function in the step S47 has the same contrast as the real magnetic resonance weighted image and/or the first magnetic resonance weighted image input in the step S43, and belongs to the same individual as the magnetic resonance quantitative parametric image in the training set in the step S46.
Further, the training loss function of the pre-trained variational autoencoder model in the step S4 is:
wherein σ and μ are the mean and the variance of the normal distribution of the hidden-layer variable output by the encoder, μ′ is the output result of the decoder, xi is the second magnetic resonance weighted image with the corresponding contrast, i indexes an input sample, j indexes an input sample used to extract contrast encoding information, and n and d are the corresponding numbers of samples input each time the loss function is calculated.
The present application further provides a variational autoencoder-based magnetic resonance weighted image synthesis device, including a memory and one or more processors, wherein the memory stores executable codes therein, and the one or more processors, when executing the executable codes, are configured to implement the variational autoencoder-based magnetic resonance weighted image synthesis method in any one of the above embodiments.
The present application further provides a computer-readable storage medium storing a program thereon, wherein the program, when executed by a processor, implements the variational autoencoder-based magnetic resonance weighted image synthesis method in any one of the above embodiments.
The present application has the following beneficial effects:
The following description of at least one exemplary embodiment is merely illustrative and in no way serves as any limitation on the present application or on its applications or uses. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present application.
Referring to
The real magnetic resonance weighted image and the magnetic resonance quantitative parametric image are generated by performing a preset scanning sequence via the magnetic resonance scanner. The magnetic resonance quantitative parametric image is composed of a T1 quantitative image, a T2 quantitative image and a proton density quantitative image.
The real magnetic resonance weighted image includes at least one of a T1-weighted conventional image, a T2-weighted conventional image, a proton density-weighted image, a T1-weighted Flair image, and a T2-weighted Flair image.
The method of combining includes: splicing the contrast encoding knowledge matrix with the magnetic resonance quantitative parametric image in the training set, or splicing the contrast encoding knowledge matrix with the magnetic resonance quantitative parametric image in the training set after the latter passes through the plurality of three-dimensional convolutional layers, or adding the contrast encoding knowledge matrix to the magnetic resonance quantitative parametric image in the training set.
The real magnetic resonance weighted image with the corresponding contrast in the training set used to calculate the loss function in the step S47 has the same contrast as the real magnetic resonance weighted image and/or the first magnetic resonance weighted image input in the step S43, and belongs to the same individual as the magnetic resonance quantitative parametric image in the training set in the step S46.
The training loss function of the pre-trained variational autoencoder model is:
wherein σ and μ are the mean and the variance of the normal distribution of the hidden-layer variable output by the encoder, μ′ is the output result of the decoder, xi is the second magnetic resonance weighted image with the corresponding contrast, i indexes an input sample, j indexes an input sample used to extract contrast encoding information, and n and d are the corresponding numbers of samples input each time the loss function is calculated.
Referring to
The real magnetic resonance weighted image and the magnetic resonance quantitative parametric image are generated by performing a preset scanning sequence via the magnetic resonance scanner.
The magnetic resonance quantitative parametric image is composed of a T1 quantitative image, a T2 quantitative image and a proton density quantitative image.
The real magnetic resonance weighted image includes at least one of a T1-weighted conventional image, a T2-weighted conventional image, a proton density-weighted image, a T1-weighted Flair image, and a T2-weighted Flair image.
The magnetic resonance quantitative parametric image and the real magnetic resonance weighted image are acquired by performing a specific scanning sequence via the magnetic resonance scanner. The magnetic resonance quantitative parametric image can be acquired by employing a plurality of scanning sequences. For example, when the T1 quantitative image is acquired, an inversion recovery sequence at multiple inversion times, for example an MP2RAGE sequence, may be employed, and a corresponding T1 quantitative image may be calculated using the relation between the signal values in the acquired real magnetic resonance weighted image and the acquisition parameter (inversion time). When the T2 quantitative image is acquired, a spin echo sequence at multiple echo times may be employed, and a corresponding T2 quantitative image may be calculated using the relation between the signal values in the acquired real magnetic resonance weighted image and the acquisition parameter (echo time). A variety of magnetic resonance quantitative parametric images can also be obtained in a single scan by novel quantitative magnetic resonance imaging sequences, including an MDME (Multiple Dynamic Multiple Echo) sequence and an MRF (Magnetic Resonance Fingerprinting) sequence, in which a plurality of magnetic resonance quantitative parametric images are obtained simultaneously by a corresponding sequence-specific reconstruction method, which will not be described in detail. In this embodiment, the magnetic resonance quantitative parametric image is obtained by the magnetic resonance fingerprinting (MRF) sequence.
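The echo-time relation described above can be sketched as follows. This is a minimal illustration, not the procedure of the present application: it assumes a mono-exponential spin-echo signal model S(TE) = S0 * exp(-TE / T2), and the echo times and "true" parameter values are illustrative only.

```python
import numpy as np

# Hedged sketch: estimating a T2 value from a multi-echo spin-echo
# acquisition. The model S(TE) = S0 * exp(-TE / T2) is linearized by
# taking logarithms, and T2 is recovered with a least-squares line fit.
TE = np.array([20.0, 40.0, 60.0, 80.0, 100.0])  # echo times in ms (illustrative)
true_T2, true_S0 = 90.0, 1000.0                 # illustrative "tissue" values
signals = true_S0 * np.exp(-TE / true_T2)       # noiseless synthetic signals

# log S = log S0 - TE / T2  ->  fit the line log S = a * TE + b, so T2 = -1 / a
a, b = np.polyfit(TE, np.log(signals), 1)
est_T2 = -1.0 / a
est_S0 = np.exp(b)
```

The same idea applies voxel-wise to whole images, and an analogous fit against the inversion time recovers T1 with the appropriate inversion-recovery signal model.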
For the method involved in the present application, the specific manner of acquiring the magnetic resonance quantitative parametric image does not affect the subsequent steps of the method; the manner described here is therefore only one specific example and does not preclude other embodiments from acquiring the magnetic resonance quantitative parametric image by other methods. The real magnetic resonance weighted image may be obtained by employing a particular scanning sequence and scanning parameters, and when different scanning sequences are selected or different scanning parameters are set, real magnetic resonance weighted images with different contrasts may be obtained. In this embodiment, the real magnetic resonance weighted images with different contrasts are obtained by controlling the repetition time, echo time and inversion time. The number of types of acquired real magnetic resonance weighted image contrasts is greater than 5 in order to guarantee the subsequent training effect while taking efficiency into account. The magnetic resonance quantitative parametric image and the real magnetic resonance weighted image acquired in this embodiment belong to the same individual, and the number of individuals is greater than 10.
When the image is the T1-weighted conventional image, the T2-weighted conventional image or the proton density-weighted image, the first magnetic resonance weighted image is synthesized by formula I as follows:
wherein T1, T2 and PD are corresponding quantitative values in the T1 quantitative image, the T2 quantitative image and the proton density quantitative image, respectively; TR is the assumed repetition time at the time of image signal synthesis; and TE is the assumed echo time at the time of image signal synthesis. Appropriate TR and TE parameters are selected such that the contrast conforms to the T1-weighted conventional image.
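Formula I itself is not reproduced in the text above, so the sketch below assumes the standard spin-echo signal equation, S = PD * (1 - exp(-TR / T1)) * exp(-TE / T2), which is consistent with the symbols just described; the toy map values are illustrative.

```python
import numpy as np

# Hedged sketch of formula-I-style synthesis under the standard
# spin-echo signal equation (an assumption, not the patent's exact formula):
#     S = PD * (1 - exp(-TR / T1)) * exp(-TE / T2)
def synthesize_weighted(T1, T2, PD, TR, TE):
    """Synthesize a weighted image voxel-wise from quantitative maps."""
    T1 = np.asarray(T1, dtype=float)
    T2 = np.asarray(T2, dtype=float)
    PD = np.asarray(PD, dtype=float)
    return PD * (1.0 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# Toy 2x2 quantitative maps (T1 and T2 in ms); a short TR and short TE
# give T1 weighting, so short-T1 tissue appears brighter.
T1_map = np.array([[800.0, 1200.0], [900.0, 4000.0]])
T2_map = np.array([[80.0, 100.0], [90.0, 2000.0]])
PD_map = np.ones((2, 2))
t1w = synthesize_weighted(T1_map, T2_map, PD_map, TR=500.0, TE=15.0)
```

Varying the assumed TR and TE here is what the text means by selecting appropriate parameters for the desired contrast.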
When the image is the T1-weighted Flair image or the T2-weighted Flair image or other images containing a single inversion pulse sequence, the first magnetic resonance weighted image is synthesized by formula II as follows:
wherein T1, T2 and PD are corresponding quantitative values in the T1 quantitative image, the T2 quantitative image and the proton density quantitative image, respectively; TR is the assumed repetition time at the time of image signal synthesis; TE is the assumed echo time at the time of image signal synthesis; and TI is the assumed inversion time at the time of image signal synthesis.
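As with formula I, formula II is not reproduced above. The sketch below assumes the standard single-inversion-pulse signal equation, S = PD * (1 - 2*exp(-TI / T1) + exp(-TR / T1)) * exp(-TE / T2), consistent with the symbols described, and illustrates the cerebrospinal fluid nulling idea behind FLAIR imaging with illustrative tissue values.

```python
import numpy as np

# Hedged sketch of formula-II-style synthesis under the standard
# inversion-recovery signal equation (an assumption, not the patent's
# exact formula):
#     S = PD * (1 - 2*exp(-TI / T1) + exp(-TR / T1)) * exp(-TE / T2)
def synthesize_ir_weighted(T1, T2, PD, TR, TE, TI):
    T1 = np.asarray(T1, dtype=float)
    T2 = np.asarray(T2, dtype=float)
    PD = np.asarray(PD, dtype=float)
    recovery = 1.0 - 2.0 * np.exp(-TI / T1) + np.exp(-TR / T1)
    return PD * recovery * np.exp(-TE / T2)

# With an effectively infinite TR, tissue with T1 = TI / ln(2) is nulled:
# the FLAIR idea of suppressing long-T1 cerebrospinal fluid.
T1_csf = 4000.0                                  # illustrative CSF T1 (ms)
TI_null = T1_csf * np.log(2.0)                   # inversion time nulling CSF
csf = synthesize_ir_weighted(T1_csf, 2000.0, 1.0, TR=1e9, TE=90.0, TI=TI_null)
wm = synthesize_ir_weighted(800.0, 80.0, 0.7, TR=1e9, TE=90.0, TI=TI_null)
```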
An activation function of the encoding activation layer is a “relu” function and a pooling function of the pooling layer is maximum pooling.
An activation function of the decoding activation layer is a “relu” function.
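A minimal sketch of the two operations named above, the ReLU activation and maximum pooling, shown on a toy two-dimensional feature map for brevity (the layers in the model itself are three-dimensional):

```python
import numpy as np

# ReLU zeroes negative activations; max pooling keeps the largest
# activation within each 2x2 block, halving each spatial dimension.
def relu(x):
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[-1.0, 2.0, 3.0, -4.0],
                 [5.0, -6.0, -7.0, 8.0],
                 [9.0, 10.0, -11.0, 12.0],
                 [-13.0, 14.0, 15.0, -16.0]])
activated = relu(fmap)
pooled = max_pool_2x2(activated)   # shape (2, 2)
```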
Assuming that low-dimensional contrast information z is present in the high-dimensional magnetic resonance weighted image, and that this low-dimensional contrast information can be approximately expressed by a simple multivariate normal distribution, then:
z ~ N(0, I)
wherein I represents an identity matrix, and thus z is a multi-dimensional random variable that follows a standard multivariate normal distribution.
Assume that the encoder of the conditional variational autoencoder model conforms to the distribution qθe(z|X) and the decoder conforms to the distribution pθd(X|z, Y), where X represents the high-dimensional magnetic resonance weighted image, Y represents the magnetic resonance quantitative parametric image, and θe and θd represent the parameters of the encoder and the decoder of the hypothetical model. Based on the variational Bayesian algorithm, the encoder distribution qθe(z|X) is used to fit the posterior distribution p(z|X) of the actual model.
log pθ(X|Y) is maximized in model training; applying the variational evidence lower bound yields:

log pθ(X|Y) ≥ ∫qθe(z|X)log pθd(X|z,Y)dz - KL(qθe(z|X)‖p(z)).
The training loss function of the pre-trained variational autoencoder model is:
wherein σ and μ are the mean and the variance of the normal distribution of the hidden-layer variable output by the encoder, μ′ is the output result of the decoder, xi is the second magnetic resonance weighted image with the corresponding contrast, i indexes an input sample, j indexes an input sample used to extract contrast encoding information, and n and d are the corresponding numbers of samples input each time the loss function is calculated.
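Since the loss formula itself is not reproduced above, the following sketch implements the standard conditional-VAE objective that the symbol description suggests: a reconstruction term between the decoder output (μ′ in the text) and the target weighted image xi, plus a KL divergence pulling the encoder posterior toward the standard normal prior. Conventional names (mean, std) are used in code, and the exact normalization is an assumption.

```python
import numpy as np

# Hedged sketch of a standard conditional-VAE loss (an assumption, not the
# patent's exact formula): mean-squared reconstruction error plus the
# closed-form KL divergence between N(mean, std^2) and N(0, I).
def cvae_loss(recon, target, post_mean, post_std):
    n = recon.shape[0]                                 # batch size
    recon_term = np.sum((recon - target) ** 2) / n
    kl_term = 0.5 * np.sum(post_mean ** 2 + post_std ** 2
                           - np.log(post_std ** 2) - 1.0) / n
    return recon_term + kl_term

# Toy batch: n = 2 samples, d = 3 latent dimensions, 4-voxel "images".
rng = np.random.default_rng(0)
recon = rng.normal(size=(2, 4))
target = recon.copy()                # perfect reconstruction
post_mean = np.zeros((2, 3))
post_std = np.ones((2, 3))           # posterior equal to the prior
loss = cvae_loss(recon, target, post_mean, post_std)   # both terms vanish
```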
The sampling formula is as follows:
z = σ + λμ

wherein σ and μ are the mean and the variance of the normal distribution of the hidden-layer variable output by the encoder, and λ follows a standard normal distribution.
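The sampling formula above is the reparameterization step: a latent sample is formed from the encoder outputs plus standard normal noise, so that sampling remains differentiable with respect to those outputs. A minimal sketch, using conventional mean/spread names for the quantities denoted σ and μ in the text:

```python
import numpy as np

# Reparameterized sampling of the hidden-layer variable z (illustrative
# encoder outputs; in the model these come from the trained encoder).
rng = np.random.default_rng(42)
post_mean = np.array([0.5, -1.0, 2.0])       # encoder mean output (σ in the text)
post_std = np.array([0.1, 0.2, 0.3])         # encoder spread output (μ in the text)
lam = rng.standard_normal(post_mean.shape)   # λ ~ N(0, I)
z = post_mean + lam * post_std
```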
The above training set is used as data for model training. In the model training process, the registered real magnetic resonance image and/or first magnetic resonance weighted image with a certain contrast of a certain individual are/is randomly selected to serve as input of the encoder.
When the first magnetic resonance weighted image is selected, it is synthesized from the pre-processed magnetic resonance quantitative parametric image. Specifically, when used as the input of the encoder in the training process, the contrast of the first magnetic resonance weighted image needs to be consistent with one of the contrasts of the acquired real magnetic resonance weighted images.
The method of combining includes: splicing the contrast encoding knowledge matrix M with the magnetic resonance quantitative parametric images in the training set, or splicing the contrast encoding knowledge matrix M with the magnetic resonance quantitative parametric images in the training set after the latter pass through the plurality of three-dimensional convolutional layers, or adding the contrast encoding knowledge matrix M to the magnetic resonance quantitative parametric images in the training set.
Specifically, the contrast encoding knowledge matrix M is concatenated with the magnetic resonance quantitative parametric images, which include the T1 quantitative image, the T2 quantitative image and the proton density quantitative image, to obtain the matrix F.
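The combination options can be sketched on toy arrays as follows; the shapes and the channel-axis convention are illustrative assumptions. The second option (splicing after three-dimensional convolutional layers) is identical to the first except that the quantitative maps are first passed through those layers, so it is noted only in a comment.

```python
import numpy as np

# Toy quantitative maps stacked along a channel axis: (channels, H, W).
# The patent's volumes are three-dimensional; 2-D slices are used for brevity.
T1 = np.full((4, 4), 1.0)
T2 = np.full((4, 4), 2.0)
PD = np.full((4, 4), 3.0)
quant = np.stack([T1, T2, PD], axis=0)       # shape (3, 4, 4)

M = np.full((1, 4, 4), 0.5)                  # contrast encoding knowledge matrix

# Option 1: channel-wise concatenation ("splicing") to form F.
F_concat = np.concatenate([quant, M], axis=0)    # shape (4, 4, 4)

# Option 2 would apply the 3-D convolutional layers to `quant` first,
# then perform the same concatenation.

# Option 3: element-wise addition, broadcast over the channel axis.
F_add = quant + M                                # shape (3, 4, 4)
```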
Back-propagation is performed on the model based on the loss function, and the parameters of the model are updated. In this embodiment, an Adam optimizer is used during model training, and the corresponding learning rate is set to 0.0001.
The trained conditional variational autoencoder model is loaded, and the magnetic resonance weighted image and the magnetic resonance quantitative parametric image are selected as the input of the encoder. Because the pre-trained variational autoencoder model uses both the real magnetic resonance weighted image and the first magnetic resonance weighted image as training data, either may be selected as the input of the encoder in this step. The individual selected here is not correlated with the second magnetic resonance weighted image output by the final model; therefore, a target magnetic resonance weighted image of any individual may be selected. Owing to the model training features, the target magnetic resonance weighted image selected here may have a contrast type that is not present in the training data set. Different types of magnetic resonance weighted data may thus be selected as the input for extracting the hidden-layer variable according to practical application demands; the example employed here takes first magnetic resonance weighted image data with a contrast type not present in the training data set as the input of the model. First, such first magnetic resonance weighted image data is constructed: appropriate synthesis parameters are selected, and magnetic resonance signal synthesis formula I or formula II is applied to synthesize the first magnetic resonance weighted image data. The synthesized data is input into the encoder of the loaded conditional variational autoencoder model to output the mean and the variance of the posterior normal distribution of the hidden-layer variable, and sampling is performed by the sampling formula to obtain the hidden-layer variable z.
The second magnetic resonance weighted image with the corresponding contrast is synthesized by using the trained decoder based on the extracted hidden layer variable and the magnetic resonance quantitative parametric image.
The trained conditional variational autoencoder model is loaded. An extracted hidden-layer variable and a magnetic resonance quantitative parametric image of a certain individual are selected. The individual selected here determines which individual's second magnetic resonance weighted image the conditional variational autoencoder outputs.
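The inference flow of this step and the previous one can be sketched end to end with stub networks. The stub_encoder and stub_decoder functions below are purely hypothetical stand-ins for the trained encoder and decoder, not the model of the present application; only the data flow (encode a target-contrast image, sample z, decode with the chosen individual's quantitative maps) follows the text.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4                                # assumed latent dimensionality

def stub_encoder(image):
    # Hypothetical stand-in for the trained encoder: returns a posterior
    # mean and spread for the hidden-layer variable.
    mean = np.full(d, image.mean())
    std = np.full(d, 0.1)
    return mean, std

def stub_decoder(z, quant_maps):
    # Hypothetical stand-in for the trained decoder: mixes z into the maps.
    return quant_maps.mean(axis=0) + z.mean()

target_contrast_image = rng.normal(size=(8, 8))   # e.g. a formula-I/II synthesized image
quant_maps = rng.normal(size=(3, 8, 8))           # T1, T2, PD of the chosen individual

mean, std = stub_encoder(target_contrast_image)
z = mean + rng.standard_normal(d) * std           # reparameterized sample
second_weighted = stub_decoder(z, quant_maps)     # second weighted image
```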
Corresponding to the embodiment of the foregoing conditional variational autoencoder-based multi-contrast magnetic resonance weighted image synthesis method, the present application also provides an embodiment of a conditional variational autoencoder-based multi-contrast magnetic resonance weighted image synthesis device.
Referring to
The embodiment of the conditional variational autoencoder-based multi-contrast magnetic resonance weighted image synthesis device of the present application can be applied to any device with data processing capability, which may be a device or apparatus such as a computer. The device embodiment may be implemented in software, or in hardware, or in a combination of hardware and software. Taking a software implementation as an example, the device, in a logical sense, is formed by the processor of the device with data processing capability reading corresponding computer program instructions from a non-volatile memory into a memory. From the hardware level, as shown in
The implementation process of the functions and effects of the various units in the above device specifically refer to the implementation process of the corresponding steps in the above method, which is not repeated here.
Since the device embodiment substantially corresponds to the method embodiment, the relevant parts can refer to the description of the method embodiment. The device embodiment described above is merely illustrative, wherein the units illustrated as separate components may or may not be physically separated, and components shown as units may or may not be physical units, i.e., they may be located at one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to practical needs to achieve the objectives of the solutions of the present application. A person of ordinary skill in the art can understand and implement the embodiments without creative work.
An embodiment of the present application also provides a computer-readable storage medium storing a program thereon, wherein the program, when executed by a processor, implements the variational autoencoder-based magnetic resonance weighted image synthesis method in the above embodiment.
The computer-readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any device with data processing capability according to any of the foregoing embodiments. The computer-readable storage medium may also be an external storage device of any device with data processing capability, such as a plug-in hard disk, a SmartMedia Card (SMC), an SD card or a Flash Card equipped on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of any device with data processing capability. The computer-readable storage medium is configured to store the computer program and other programs and data required by any device with data processing capability, and may also be configured to temporarily store data that has been or will be output.
The above are merely preferred embodiments of the present application and are not intended to limit the present application; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the scope of protection of the present application.
Number | Date | Country | Kind
202211375033.5 | Nov 2022 | CN | national

Number | Date | Country
Parent | PCT/CN2023/080571 | Mar 2022 | US
Child | 18219678 | US