Sparse sampling techniques for magnetic resonance imaging (MRI) reconstruction have improved over the years, yet work remains to be done. At least one earlier work, by Zhao [17], incorporated by reference herein, discusses techniques for sparse sampling in the k-parametric domain, i.e., k-p space. Zhao proposed simultaneously utilizing low rank and sparse components of the k-space domain to allow reconstruction of images via a convex optimization problem across different parameters for MRI data collection. The convex optimization problem reduces to a one-dimensional problem in Zhao's configuration, but Zhao's method still relies upon estimating the parameter subspace, which can lead to errors.
In other advances, Guillemot, et al. [24], incorporated herein by reference, uses singular value decomposition (SVD) to take advantage of data sparsification under different constraints when projecting those constraints onto a convex data set. With appropriate care in using sparse data, Guillemot's procedure can converge to an optimal solution with less data, and this kind of SVD algorithm may apply to MRI reconstruction. Lyra-Leite [25], incorporated by reference herein, explains how SVD calculations can apply to MRI, which is also described with regard to the strong magnetic field B0 and the gradient coils (i.e., Gx, Gy, Gz) that produce the gradient perturbation of B0 and frequency encode or phase encode the x, y, z spatial positions in any given scan image. A radio-frequency coil sends an excitation pulse to the imaging subject's body to yield net magnetization in an x-y plane for imaging in layers. The rotating magnetization generates an oscillating signal that can be detected. The frequency and phase of that oscillating signal can be detected for reconstructing the images of the energized region of interest. The resulting k-space of frequency- or phase-encoded data must be sampled and decomposed to give values to pixels or voxels of an image. Lyra-Leite [25] utilizes singular value decomposition to reconstruct the images from fewer sample points within the encoded k-space maps.
These prior methods, however, need improvements in accuracy, signal-to-noise ratio, and overall quality of reconstruction.
In some embodiments, artificial intelligence and machine learning techniques are utilized to reconstruct MRI data into images that are useful for visual interpretation. Machine Learning (ML) and Artificial Intelligence (AI) systems are in widespread use in customer service, marketing, and other industries. Machine learning is considered a subset of more general artificial intelligence operations, and AI endeavors may utilize numerous instances of machine learning to make decisions, predict outputs, and perform human-like intelligent operations. Machine learning protocols typically involve programming a model that instantiates an appropriate algorithm for a given computing environment and training the model on a particular data set or domain with known historical results. The results are generally known outputs of many combinations of parameter values that the algorithm accesses during training. The model uses numerous statistical and mathematical operations to learn how to make logical decisions and generate new outputs based on the historical training data. Machine learning (ML) includes, but is not limited to, a number of models such as neural networks, deep learning algorithms, support vector machines, data clustering, regression models, and Monte Carlo simulations. Other models may utilize linear regression, logistic regression, K-means clustering, classification models such as a binary classifier or a multi-class classifier, anomaly detection, other supervised learning models, and even combinations of one or more machine learning model types. Most of these take vectors of data as inputs.
The term “artificial intelligence,” therefore, includes any technique that enables one or more computing devices or computing systems (i.e., a machine) to mimic human intelligence. Artificial intelligence (AI) includes, but is not limited to, knowledge bases, machine learning, representation learning, and deep learning. The term “machine learning” generally refers to a subset of AI that enables a machine to acquire knowledge by extracting patterns from raw data.
The term “representation learning” may be used as a subset of machine learning that enables a machine to automatically discover representations needed for feature detection, prediction, or classification from raw data. Representation learning techniques include, but are not limited to, autoencoders.
The term “deep learning” may also be considered a subset of machine learning that enables a machine to automatically discover representations needed for feature detection, prediction, classification, etc., using layers of processing. Deep learning techniques include, but are not limited to, artificial neural networks and multilayer perceptrons (MLPs).
Machine learning models include supervised, semi-supervised, and unsupervised learning models. In a supervised learning model, the model learns a function that maps an input (also known as a feature or features) to an output (also known as a target or targets) during training with a labeled data set (or dataset). In an unsupervised learning model, the model learns a function that maps an input (also known as a feature or features) to an output (also known as a target or targets) during training with an unlabeled data set. In a semi-supervised model, the model learns a function that maps an input (also known as a feature or features) to an output (also known as a target or targets) during training with both labeled and unlabeled data.
Some machine learning models are designed for a specific data set or domain and are highly expert at handling the nuances within that narrow domain. It is with respect to these and other considerations that the various aspects of the present disclosure as described below are presented.
Other aspects and features according to the example embodiments of the disclosed technology will become apparent to those of ordinary skill in the art, upon reviewing the following detailed description in conjunction with the accompanying figures.
A computer implemented method of reconstructing magnetic resonance images (MRI) in Cartesian coordinates uses acquired magnetic resonance data and implements a Fourier transform to place the MRI data in k-space. The method allows for under-sampling the k-space and achieving an accurate output image by selecting an image model to map the sampled data and iteratively converge the model to an output that matches a region of interest subject to the MRI. The image model may be an alternating direction method of multipliers (ADMM) or an ADMM with non-convex low rank regularization algorithm. A de-noising algorithm may be at least one of a plug and play block matching and 3D filtering (PnP-BM3D), a plug and play weighted nuclear norm minimization (WNNM), or a plug and play denoising convolutional neural networks (PnP-DnCNN) algorithm. An iterative optimization of the variables of the model yields an output image.
Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale. The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
In some aspects, the disclosed technology relates to systems, methods, and computer-readable medium for magnetic resonance based skull thermometry. Although example embodiments of the disclosed technology are explained in detail herein, it is to be understood that other embodiments are contemplated. Accordingly, it is not intended that the disclosed technology be limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The disclosed technology is capable of other embodiments and of being practiced or carried out in various ways.
It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value.
By “comprising” or “containing” or “including” is meant that at least the named compound, element, particle, or method step is present in the composition or article or method, but does not exclude the presence of other compounds, materials, particles, method steps, even if the other such compounds, material, particles, method steps have the same function as what is named.
In describing example embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. It is also to be understood that the mention of one or more steps of a method does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Steps of a method may be performed in a different order than those described herein without departing from the scope of the disclosed technology. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified.
As discussed herein, a “subject” (or “patient”) may be any applicable human, animal, or other organism, living or dead, or other biological or molecular structure or chemical environment, and may relate to particular components of the subject, for instance specific organs, tissues, or fluids of a subject, may be in a particular location of the subject, referred to herein as an “area of interest” or a “region of interest.”
A detailed description of aspects of the disclosed technology, in accordance with various example embodiments, will now be provided with reference to the accompanying drawings. The drawings form a part hereof and show, by way of illustration, specific embodiments and examples. In referring to the drawings, like numerals represent like elements throughout the several figures.
Embodiments of the present disclosure include MRI-based thermometry techniques. In some embodiments of the present disclosure, the MRI-based thermometry technique is adapted to measure heating in the skull of a human patient during a focused ultrasound (FUS) treatment.
The area of interest A shown in the example embodiment of
It should be appreciated that any number and type of computer-based medical imaging systems or components, including various types of commercially available medical imaging systems and components, may be used to practice certain aspects of the present disclosure. Systems as described herein with respect to imaging are not intended to be specifically limited to the particular system shown in
One or more data acquisition or data collection steps as described herein in accordance with one or more embodiments may include acquiring, collecting, receiving, or otherwise obtaining data such as imaging data corresponding to an area of interest. By way of example, data acquisition or collection may include acquiring data via a data acquisition device, receiving data from an on-site or off-site data acquisition device or from another data collection, storage, or processing device. Similarly, data acquisition or data collection devices of a system in accordance with one or more embodiments of the present disclosure may include any device configured to acquire, collect, or otherwise obtain data, or to receive data from a data acquisition device within the system, an independent data acquisition device located on-site or off-site, or another data collection, storage, or processing device.
In
The device 225 can be configured to apply localized energy to heat a targeted region within the area of interest A which includes tissues of or near the brain. As a result, heating may occur in bone tissues, such as that of the skull. The MRI components of the system (including MRI electronics 210) are configured to work within a larger MRI system to acquire magnetic resonance data and for reconstructing images of all or regions of the area of interest as well as temperature-related data. The temperature data may include a temperature at a targeted region and/or a temperature at a reference region. The temperature data may be used to monitor the effectiveness and safety of the thermal therapy treatment and adjust treatment settings accordingly.
The targeted region may include bone tissue, which as described above, has a short T2/T2*. Control of the application of the focused energy via the controller 212 may be managed by an operator using an operator console (e.g., user computer). The controller 212 (which, as shown is also coupled to MRI electronics 210) may also be configured to manage functions for the application and/or receiving of MR signals. For example, the controller 212 may be coupled to a control sequencer such as the control sequencer 152 shown in
Although the FUS device 225 shown in the embodiment of
As shown, the computer 300 includes a processing unit 302 (“CPU”), a system memory 304, and a system bus 306 that couples the memory 304 to the CPU 302. The computer 300 further includes a mass storage device 312 for storing program modules 314. The program modules 314 may be operable to perform functions associated with one or more embodiments described herein. For example, when executed, the program modules can cause one or more medical imaging devices, localized energy producing devices, and/or computers to perform functions described herein for implementing the pulse sequence shown in
The mass storage device 312 is connected to the CPU 302 through a mass storage controller (not shown) connected to the bus 306. The mass storage device 312 and its associated computer-storage media provide non-volatile storage for the computer 300. Although the description of computer-storage media contained herein refers to a mass storage device, such as a hard disk, it should be appreciated by those skilled in the art that computer-storage media can be any available computer storage media that can be accessed by the computer 300.
By way of example and not limitation, computer storage media (also referred to herein as “computer-readable storage medium” or “computer-readable storage media”) may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-storage instructions, data structures, program modules, or other data. For example, computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 300. “Computer storage media”, “computer-readable storage medium” or “computer-readable storage media” as described herein do not include transitory signals.
According to various embodiments, the computer 300 may operate in a networked environment using connections to other local or remote computers through a network 316 via a network interface unit 310 connected to the bus 306. The network interface unit 310 may facilitate connection of the computing device inputs and outputs to one or more suitable networks and/or connections such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a radio frequency (RF) network, a Bluetooth-enabled network, a Wi-Fi enabled network, a satellite-based network, or other wired and/or wireless networks for communication with external devices and/or systems.
The computer 300 may also include an input/output controller 308 for receiving and processing input from any of a number of input devices. Input devices may include one or more of keyboards, mice, styluses, touchscreens, microphones, audio capturing devices, and image/video capturing devices. An end user may utilize the input devices to interact with a user interface, for example a graphical user interface, for managing various functions performed by the computer 300. The input/output controller 308 may be configured to manage output to one or more display devices for displaying visual representations of data, such as display monitors/screens that are integral with other components of the computer 300 or are remote displays.
The bus 306 may enable the processing unit 302 to read code and/or data to/from the mass storage device 312 or other computer-storage media. The computer-storage media may represent apparatus in the form of storage elements that are implemented using any suitable technology, including but not limited to semiconductors, magnetic materials, optics, or the like. The computer-storage media may represent memory components, whether characterized as RAM, ROM, flash, or other types of technology. The computer storage media may also represent secondary storage, whether implemented as hard drives or otherwise. Hard drive implementations may be characterized as solid state, or may include rotating media storing magnetically-encoded information. The program modules 314, which include the imaging application 318, may include instructions that, when loaded into the processing unit 302 and executed, cause the computer 300 to provide functions associated with one or more embodiments illustrated in the figures. The program modules 314 may also provide various tools or techniques by which the computer 300 may participate within the overall systems or operating environments using the components, flows, and data structures discussed throughout this description.
Accelerating MRI acquisition is always in high demand because long scan times can increase the potential risk of image degradation caused by patient motion. Generally, MRI reconstruction at higher under-sampling rates requires regularization terms, such as wavelet transformation and total variation transformation. This disclosure investigates employing the plug and play (PnP) alternating direction method of multipliers (ADMM) framework to reconstruct highly under-sampled MRI k-space data with three different denoiser algorithms: block matching and 3D filtering (BM3D), weighted nuclear norm minimization (WNNM), and a residual-learning denoising convolutional neural network (DnCNN). The results show that these three PnP-based methods outperform current regularization methods.
One challenge of fast MRI is to recover the original image from under-sampled k-space data. Prior technologies such as SENSE [1] exploit knowledge of the coil sensitivity maps, and GRAPPA [2] uses weighting coefficients learned from autocalibration signal (ACS) lines to estimate the missing k-space lines. Compressed sensing (CS) [3] uses the idea that data can be compressed if the under-sampling artifacts are incoherent; it therefore introduces the concept of sparsity, enforced through regularization terms. L1-ESPIRIT [4] also includes regularization terms in soft-SENSE reconstruction to iteratively find the optimal solution. After the PnP prior [5] was first proposed by Venkatakrishnan et al., there have been several studies applying this concept to MRI [6, 7]. Most of these studies focus on a convolutional neural network (CNN) algorithm to complete the denoising step of the PnP algorithm. Alternatively, DnCNN [8] may be a better fit. In non-limiting embodiments, this work explores three advanced denoiser algorithms to reconstruct four-fold under-sampled MRI data using the PnP-ADMM framework. The idea behind BM3D [9] is that, given a local patch, it is not difficult to find many similar patches nearby. These patches help with denoising, and this assumption typically holds for medical images. The human brain, for example, has white matter, gray matter, and cerebrospinal fluid, and thus exhibits large non-local self-similarity in this sense. WNNM aims to improve conventional low rank algorithms by weighting singular values differently in the nuclear norm, compared to the general solution, which treats singular values equally in order to preserve convexity. Instead of directly outputting a denoised image, DnCNN learns a residual image, and this residual learning and batch normalization can benefit from each other, further improving denoising performance. This neural network is more natural to combine with the PnP framework.
For MRI reconstruction, the collected signal can be written as Formula (1) of
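While Formula (1) itself is set forth in the figures, the generic acquisition model underlying this formulation is commonly written as follows (a reference form, not a verbatim reproduction of Formula (1)):

y = A\,x + \varepsilon,

where x is the image to be reconstructed, y is the acquired (under-sampled) k-space data, \varepsilon is measurement noise, and A is the encoding operator described next.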
For purposes of this disclosure, A is replaced by a sensitivity map weighted operator and a Fourier transform. The PnP-ADMM framework decouples the data fidelity term and the prior term by variable splitting, so that the image variable x is accompanied by an auxiliary variable v and a dual variable u, and its augmented Lagrangian for MRI reconstruction is set forth as:
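Although the exact expression appears in the figures, the scaled-form augmented Lagrangian conventionally used with PnP-ADMM takes roughly the following shape (a reference sketch, not a verbatim reproduction of the disclosure's formula):

\mathcal{L}_{\rho}(x, v, u) = \tfrac{1}{2}\lVert A x - y \rVert_2^2 + \lambda R(v) + \tfrac{\rho}{2}\lVert x - v + u \rVert_2^2 - \tfrac{\rho}{2}\lVert u \rVert_2^2,

where y is the acquired k-space data, R(\cdot) is the prior implicitly defined by the plug-in denoiser, \rho is the penalty parameter, and u is the scaled dual variable.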
Next, this disclosure repeats the steps set forth in
Here,
Notice that previous methods have used real and imaginary data as the input to the denoising algorithm. Here, this disclosure found that using magnitude data and phase data leads to better performance in the PnP algorithm. The BM3D, WNNM, and DnCNN algorithms were downloaded from publicly available sources [12, 13, 14]. The tested data are from the NYU fastMRI dataset [11], and the displayed brain data and ESPIRIT reconstruction are from Lustig's ESPIRIT demonstration [15]. An under-sampling pattern was generated with a variable density Poisson distribution. Image quality was evaluated with two indexes: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), which are defined as follows:
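The standard definitions of these two indexes are reproduced here for reference; c_1 and c_2 are small stabilizing constants and \mathrm{MAX}_I denotes the maximum image intensity:

\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2}\right), \qquad \mathrm{SSIM}(x,\hat{x}) = \frac{\left(2\mu_x \mu_{\hat{x}} + c_1\right)\left(2\sigma_{x\hat{x}} + c_2\right)}{\left(\mu_x^2 + \mu_{\hat{x}}^2 + c_1\right)\left(\sigma_x^2 + \sigma_{\hat{x}}^2 + c_2\right)},

where x is the reference image, \hat{x} is the reconstruction, and \mu, \sigma^2, and \sigma_{x\hat{x}} denote local means, variances, and covariance.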
MR reconstruction with advanced denoising algorithms under the PnP-ADMM framework is a flexible approach for image reconstruction. In this study, the approach outperformed conventional regularization methods in MRI at acceleration rate R=4.
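To make the framework concrete, the following is a minimal, single-coil Cartesian sketch of the PnP-ADMM iteration (the x-, v-, and u-updates referenced above). The `denoise` callable and the fixed noise level `sigma` are placeholders standing in for BM3D, WNNM, or DnCNN; this is an illustrative sketch under those assumptions, not the disclosure's exact multi-coil implementation.

```python
import numpy as np

def fft2c(img):
    # Centered, orthonormal 2D FFT (a common MRI convention)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img), norm="ortho"))

def ifft2c(ksp):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp), norm="ortho"))

def pnp_admm(y, mask, denoise, rho=1.0, sigma=0.05, n_iter=30):
    """Plug-and-play ADMM for single-coil Cartesian MRI (illustrative sketch).

    y       : under-sampled, centered k-space data (zero-filled where not acquired)
    mask    : binary sampling mask with the same shape and centering as y
    denoise : callable (complex image, noise level) -> denoised image; stands in
              for BM3D, WNNM, or DnCNN in the v-update below
    """
    x = ifft2c(y)                 # zero-filled starting estimate
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: data-fidelity least squares; with A = mask * FFT this has a
        # closed form that blends acquired k-space with the current prior estimate
        x = ifft2c((y + rho * fft2c(v - u)) / (mask + rho))
        # v-update: the proximal operator of the prior is replaced by the denoiser
        v = denoise(x + u, sigma)
        # u-update: dual ascent on the splitting constraint x = v
        u = u + x - v
    return x
```

In a multi-coil setting, the x-update would instead solve the least-squares problem for the sensitivity-weighted encoding operator, for example with a few conjugate gradient iterations.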
In another non-limiting embodiment, a nonconvex low rank regularization (NLR 900) was proposed to accelerate parameter mapping in the k-p domain. The NLR uses weighted nuclear norm minimization (WNNM 925) to obtain an optimized solution by differently penalizing singular values, in comparison to traditional low rank methods. The performance of the proposed algorithm was demonstrated for T2 mapping of the kidney in non-limiting examples. This disclosure demonstrated that the proposed algorithm outperformed k-p domain based compressed sensing and L&S algorithms.
Parameter mapping plays an important clinical role in improving the characterization of pathologies. However, multiple data acquisitions with different imaging parameters for T1/T2 mapping take considerable time. Longer scan times introduce drawbacks such as high cost, patient discomfort, and risk of bulk motion. To accelerate parameter mapping, under-sampling can be performed in the ky-p domain; however, this renders the image reconstruction an ill-posed inverse problem. Many methods [1-6] have been proposed to accelerate T1/T2/T2* mapping. Sparsity and low rank [1-3] prior information are commonly used. In addition, deep learning methods have recently been proposed to learn prior information for T2 mapping [6]. Compared to traditional methods, deep learning methods require long training times and are highly dependent on specific acquisition settings. One of the goals of this study is to make better use of the low rank prior information to improve the estimation of parameter maps at highly under-sampled rates.
For reconstruction-based methods, multi-contrast images are usually first reconstructed from under-sampled k-space data, and then the T1/T2/T2* maps are estimated by exponential fitting. The two typical reconstruction-based methods are compressed sensing (CS) [1], which exploits sparsity in the k-p domain, and low rank and sparse (L&S) [2], which utilizes both low rank and sparsity prior information to better facilitate T1/T2/T2* mapping. However, traditional low rank methods treat each singular value equally during the SVD operation in order to pursue a convex objective function, which limits their performance. Nonconvex regularized low rank is based on the observation that the singular values of the SVD 940 in low rank methods should be treated differently according to prior information; namely, large singular values should be shrunk less to preserve the major data components. Therefore, a nonconvex low rank constraint for reconstruction-based parameter mapping can be written as Formula (10) in
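For reference, a nonconvex low rank constrained reconstruction of this general kind is often written as a weighted nuclear norm penalized problem of the following form (a sketch of the standard formulation; the exact Formula (10) is set forth in the figures):

\hat{X} = \arg\min_{X}\; \tfrac{1}{2}\lVert y - A X \rVert_2^2 + \lambda \sum_{j} \lVert R_j X \rVert_{w,*}, \qquad \lVert M \rVert_{w,*} = \sum_{i} w_i\, \sigma_i(M),

where X collects the multi-contrast images along the parameter dimension, R_j extracts the j-th group of similar patches (or a local Casorati matrix), \sigma_i(M) are the singular values of that matrix, and the weights w_i penalize large singular values less than small ones.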
According to WNNM [22], the globally optimal solution of the nonconvex low-rank regularized objective function will be reached if the weights are in a non-ascending order, and the ith weighted singular value λw,i is calculated by the generalized soft-thresholding operator set forth in Formula (13) in
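A minimal sketch of this weighted singular value shrinkage is shown below. The inverse-proportional weight rule used here is the common WNNM choice and is assumed for illustration; it is not a reproduction of Formula (13).

```python
import numpy as np

def wnnm_shrink(M, lam, C=1.0, eps=1e-8):
    """Weighted singular value shrinkage (the WNNM proximal step) on a matrix M.

    Weights are inversely proportional to the singular values, so the large
    singular values carrying the major data components are shrunk less than
    the small, noise-like ones. Illustrative weight rule, not Formula (13).
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = C / (s + eps)                        # large sigma -> small weight
    s_hat = np.maximum(s - lam * w, 0.0)     # generalized soft-thresholding
    return (U * s_hat) @ Vt
```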
In one non-limiting embodiment, the nonconvex low rank regularization (NLR) of this disclosure was tested on T2w/T2*w k-space data of the kidney [8]. There were seven echo times (TEs) for the T2w data and ten for the T2*w data: (n1×10) ms for the T2w data and (1.43+n2×2.14) ms for the T2*w data, with a matrix size of 169×215. Retrospective under-sampling was performed in the ky-p domain (under-sampling rates of 3, 4, 5, and 6) using a 2D variable density random pattern with four ky lines per echo fully sampled in the center of k-space.
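Purely for illustration, such a retrospectively applied ky-p mask could be generated along the following lines; the density decay, random seed, and the assignment of the 169-point dimension to ky are assumptions of this sketch, not parameters taken from the study.

```python
import numpy as np

def variable_density_mask(n_ky, n_p, accel=4, center_lines=4, decay=3.0, seed=0):
    """Random ky-p sampling mask, denser near the center of k-space, with
    `center_lines` ky lines per parameter (echo) always fully sampled."""
    rng = np.random.default_rng(seed)
    ky = np.linspace(-1.0, 1.0, n_ky)
    # Sampling probability falls off away from the k-space center and is scaled
    # so the expected number of sampled lines roughly matches the acceleration.
    pdf = (1.0 - np.abs(ky)) ** decay
    pdf = np.clip(pdf / pdf.sum() * (n_ky / accel), 0.0, 1.0)
    mask = rng.random((n_ky, n_p)) < pdf[:, None]
    c = n_ky // 2
    mask[c - center_lines // 2 : c + (center_lines + 1) // 2, :] = True
    return mask

# Example: an approximately four-fold under-sampled mask for 169 ky lines and 7 echoes
mask = variable_density_mask(n_ky=169, n_p=7, accel=4)
```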
The NLR method (and related system) is proposed for accelerating parameter mapping in the k-p domain, and it performs better than CS and L&S, as demonstrated by the lowest NRMSE and highest SSIM of the T2w/T2*w images at different acceleration rates and the strongest positive correlation between the estimated T2/T2* maps and the reference T2/T2* maps in the statistical analysis. These initial results demonstrate that weighting singular values differently is a promising approach for accelerated T2/T2* mapping.
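For context, the T2/T2* maps compared above are obtained by voxel-wise exponential fitting of the reconstructed multi-echo images. A minimal log-linear fitting sketch, assuming magnitude images and a mono-exponential decay S(TE) = S0·exp(−TE/T2), is:

```python
import numpy as np

def fit_t2(echo_images, tes):
    """Voxel-wise mono-exponential fit of S(TE) = S0 * exp(-TE / T2).

    echo_images : magnitude images, shape (n_te, ny, nx)
    tes         : echo times in ms, length n_te
    Returns a T2 (or T2*) map in ms from a log-linear least-squares fit.
    """
    n_te, ny, nx = echo_images.shape
    log_s = np.log(np.clip(echo_images, 1e-6, None)).reshape(n_te, -1)
    design = np.stack([np.ones(n_te), -np.asarray(tes, dtype=float)], axis=1)  # [1, -TE]
    coef, *_ = np.linalg.lstsq(design, log_s, rcond=None)   # coef[1] is the decay rate 1/T2
    rate = np.clip(coef[1], 1e-6, None)
    return (1.0 / rate).reshape(ny, nx)
```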
In non-limiting examples, a nonconvex low rank regularization (NLR) was proposed to accelerate parameter mapping in the k-p domain. The NLR uses weighted nuclear norm minimization (WNNM) to obtain an optimized solution by differently penalizing singular values, in comparison to traditional low rank methods. The performance of the proposed algorithm was demonstrated for T2 mapping of the kidney. One non-limiting study demonstrated that the proposed algorithm outperformed k-p domain-based compressed sensing and L&S algorithms.
In one embodiment, a computer implemented method 400 of reconstructing magnetic resonance images (MRI) in Cartesian coordinates uses acquired magnetic resonance data 405 and implements a Fourier transform 410 to place the MRI data in k-space. In some examples, the method may apply a sensitivity map weighted operator to the k-space data 415. The method allows for under-sampling the k-space 420 and achieving an accurate output image by selecting an image model to map the sampled data and iteratively converge variables in the model (i.e., optimize the variables 425) to an output image 435 that matches a region of interest subject to the MRI. The image model may be an alternating direction method of multipliers (ADMM) or an ADMM with non-convex low rank regularization algorithm. The method may include utilizing a de-noising algorithm 430 which may be at least one of a plug and play block matching and 3D filtering (PnP-BM3D), a plug and play weighted nuclear norm minimization (WNNM), or a plug and play denoising convolutional neural networks (PnP-DnCNN) algorithm. Machine learning allows for an iterative optimization of the variables of the model and yields an output image.
In one non-limiting embodiment, a computer implemented method of reconstructing a magnetic resonance image in Cartesian space, with a computer having a processor, computer memory, and software configured to implement image processing functions, includes acquiring magnetic resonance image (MRI) data for a region of interest of a subject 200; calculating a Fourier transform of the MRI data and saving Fourier transform data in the memory 505; applying a sensitivity map weighted operator to the Fourier transform data 510; and modeling an expected image from the Fourier transform data according to variables to be estimated (515, 520, 525). In some non-limiting embodiments, the variables to be estimated are three variables that are present in a Lagrangian model of the Fourier transform data. An iterative process for reconstructing the magnetic resonance image in Cartesian coordinates from the Fourier transform data includes converging the variables to respectively selected estimates with an alternating direction method of multipliers (ADMM) procedure 905; and saving an output image in Cartesian coordinates after estimating the three variables for respective portions of the Fourier transform data 950. Each section, portion, patch, voxel, or pixel within the Fourier transform data can be modeled according to variables of a model. Without limiting this disclosure to any one model, a Lagrangian model utilizes three variables, i.e., the above-noted x, v, and u, to be estimated according to the resolution that the user desires.
In another embodiment, the computer implemented method includes applying at least one de-noising algorithm 600 to Fourier transform data when converging the three variables to a selected estimate. The de-noising algorithm may be at least one of a plug and play block matching and 3D filtering (PnP-BM3D 620), a plug and play weighted nuclear norm minimization (WNNM 625), or a plug and play denoising convolutional neural networks (PnP-DnCNN 630) algorithm. The Fourier transform data may include magnitude data and phase data for the MRI data.
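As a sketch of how a real-valued denoiser can operate on complex MRI data in magnitude/phase form (rather than real/imaginary form), the wrapper below is illustrative; `denoise_real` is a placeholder for BM3D, WNNM, or DnCNN, and the phase scaling shown is an assumed choice rather than the disclosure's exact procedure.

```python
import numpy as np

def denoise_complex(x, denoise_real, sigma):
    """Apply a real-valued denoiser to a complex image via its magnitude and phase.

    denoise_real : callable (2D real array, noise level) -> denoised 2D real array
    """
    mag, phase = np.abs(x), np.angle(x)
    mag_d = denoise_real(mag, sigma)
    # Scale the phase (range -pi..pi) toward the magnitude range before denoising,
    # then scale back; an illustrative normalization choice.
    scale = mag.max() / np.pi if mag.max() > 0 else 1.0
    phase_d = denoise_real(phase * scale, sigma) / scale
    return mag_d * np.exp(1j * phase_d)
```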
Estimating the variables, such as the above noted three Lagrangian variables, may be an iterative process 500 of calculating estimates of the MRI data across a sampled portion 501 of the Fourier transform data 550, applying a de-noising algorithm 600, and calculating another estimate of the sampled portion of the expected image. In the example Lagrangian model, estimating the three variables includes iteratively cycling through machine learning algorithms to converge the ADMM procedure to a solution. As shown in
In another embodiment, a computer implemented method of reconstructing a magnetic resonance image in Cartesian space utilizes a computer having a processor, computer memory, and software configured to implement image processing functions. The method includes acquiring magnetic resonance image (MRI) data for a region of interest of a subject; calculating a Fourier transform of the MRI data and saving Fourier transform data in the memory; applying a sensitivity map weighted operator to the Fourier transform data; modeling an expected image according to variables of a non-convex low rank regularization algorithm to be estimated for reconstructing the magnetic resonance image in Cartesian coordinates from the Fourier transform data; estimating the variables for respective portions of the Fourier transform data with an alternating direction method of multipliers (ADMM) algorithm; and saving an output image in Cartesian coordinates after optimizing the three variables. In this non-limiting embodiment, the method may further utilize a weighted nuclear norm minimization (WNNM) process to converge estimates of the variables within the chosen model. The WNNM process enhances singular values within the Fourier transform data that exceed a threshold for being a large value and penalizes other singular values that fall below a threshold for being a small value. The non-convex model allows for under-sampling the Fourier transform data in a k-parameter space (k-p) and then estimating the values necessary to complete a Cartesian reconstruction of the MRI with less data processing. The method utilizes an iterative loop as shown in
These and other aspects of the disclosure are further set forth in the claims and the figures herein.
The following patents, applications, and publications, as listed below and throughout this document, are hereby incorporated by reference in their entirety herein, and are not admitted to be prior art with respect to the present invention by their inclusion in this section.
The specific configurations, choice of materials and the size and shape of various elements can be varied according to particular design specifications or constraints requiring a system or method constructed according to the principles of the disclosed technology. Such changes are intended to be embraced within the scope of the disclosed technology. The presently disclosed embodiments, therefore, are considered in all respects to be illustrative and not restrictive. The patentable scope of certain embodiments of the disclosed technology is indicated by the appended claims, rather than the foregoing description.
This application claims priority to and the benefit of U.S. provisional patent application No. 63/460,684, filed on Apr. 20, 2023, and titled Method and System for Applying Advanced Denoisers to Enhance Highly Under-Sampled MRI Reconstruction Under Plug-and-Play ADMM Framework, the disclosure of which is hereby incorporated by reference herein in its entirety. This application also claims priority to and the benefit of U.S. provisional patent application No. 63/467,259, filed on May 17, 2023, and titled System and Method for Accelerated Parameter Mapping in the k-p Domain Via Nonconvex Low Rank Constraint, the disclosure of which is hereby incorporated by reference herein in its entirety.
This invention was made with government support under grant number EB028773 awarded by the National Institutes of Health. The government has certain rights in the invention.