Conventional MRI does not directly visualize the specific thalamic, basal ganglia and brainstem structures targeted by functional neurosurgery. Thus, it can be beneficial to provide exemplary systems, methods and computer-accessible medium that can overcome at least some of these limitations.
To that end, it is possible to provide exemplary systems, methods and computer-accessible medium according to exemplary embodiments of the present disclosure, which can use convolutional neural network (CNN) denoising of Fast Gray Matter Acquisition T1 Inversion Recovery (FGATIR) MRI with power spectrum regularization in order to directly visualize subcortical anatomy.
Exemplary embodiments of the present disclosure can include exemplary systems, methods and computer-accessible medium which can be configured to employ three-dimensional (3D) FGATIR, which can utilize a short inversion time to suppress white matter signal and provide unparalleled direct visualization of brainstem and deep gray matter structures. FGATIR, however, has a low signal-to-noise ratio (SNR), such that a clinically useful high-resolution dataset requires 42-56-minute acquisitions. Such exemplary systems, methods and computer-accessible medium can be used to evaluate several denoising methods and convolutional neural network architectures to optimize the clinical feasibility of FGATIR. The exemplary systems, methods and computer-accessible medium acquired a large training dataset by scanning, e.g., 12 individuals eight times each to generate high-SNR averages, in order to perform supervised learning and unbiased evaluation of denoising performance on real data. The selected exemplary CNN architecture can use feed-forward residual learning to learn the optimal noise field. Embodiments may compute the mean-squared error loss between the residual predicted from a noisy input image and the true noise, regularized by a penalty on the residual power spectrum to minimize over-smoothing. Embodiments may evaluate the results of training on simulated additive Gaussian noise, simulated Rician noise, and true noise from the MR system. The exemplary systems, methods and computer-accessible medium can also evaluate the efficacy of denoising the complex-valued raw MRI data. The exemplary systems, methods and computer-accessible medium have been used to observe an increase in pSNR from 30 to 42 (30.6%) using a single-average FGATIR acquisition (14 min scan time). This was similar or equivalent to acquiring four averages using a conventional dataset (56 min scan time). Image contrast and quality were evaluated by a board-certified neuroradiologist and a neurosurgeon. The images were considered sufficient for visualization of the structures relevant to functional neurosurgery applications, including MR-guided focused ultrasound of the VIM.
In some exemplary aspects of the exemplary embodiments of the present disclosure, the exemplary procedures described herein relate to a method, system and/or a non-transitory computer-accessible medium having stored thereon computer-executable instructions for creating a direct visualization of subcortical anatomy, in which a raw magnetic resonance image (MRI) can be received, and a power regularization convolutional neural network (CNN) can be applied to the raw MRI. For example, the MRI can be a Fast Gray Matter Acquisition T1 Inversion Recovery (FGATIR) image. The FGATIR can be acquired in an accelerated time window, decreasing the total scan time by, for example, a factor of √2 to 2, depending on the number of averages originally acquired, by eliminating the need for multiple acquired averages. The power regularization CNN can be tuned to provide an amount of regularization. The amount of regularization can be directly related to an amount of denoising performed on the MRI.
In addition or alternatively, there can be an inverse relationship between the amount of regularization and the amount of denoising. The amount of regularization can be selected to (a) prevent the power regularization CNN from merely minimizing a mean squared error for the MRI, and/or (b) not prevent any denoising by the power regularization CNN. The amount of regularization can be within a range of 0-5, where a regularization amount of 5 can correspond to the image with Poisson distributed noise and a regularization amount of 0 can correspond to a minimized mean-squared error loss with no regularization applied.
According to exemplary embodiments of the present disclosure, a method, system and/or a non-transitory computer-accessible medium having stored thereon computer-executable instructions can be provided, wherein the power regularization CNN can further include a feed-forward residual learning architecture configured to, e.g., determine a mean-squared error loss between the residual predicted from a noisy input image and the true noise, and apply a penalty on the residual power spectrum to minimize over-smoothing. The power regularization CNN can target a normalized power spectrum energy level of 1 for all frequencies ranging from 0 Hz to 150 kHz. The output of applying the power regularization CNN to the MRI can be (i) a sharp and denoised MRI, and/or (ii) a denoised MRI with a residual that has unit energy at all frequencies between 0 Hz and 150 kHz. In additional exemplary embodiments of the present disclosure, methods, systems and/or a non-transitory computer-accessible medium having stored thereon computer-executable instructions can be provided for creating a direct visualization of subcortical anatomy in which, e.g., a fast gray matter acquisition T1 inversion recovery (FGATIR) magnetic resonance image (MRI) can be received, and a power regularization convolutional neural network can be applied to the FGATIR MRI.
According to certain variants of the exemplary embodiments of the present disclosure, the power regularization convolutional neural network can be trained on (i) an FGATIR training data set including a plurality of FGATIR MRI images, (ii) a single known noise level, and/or (iii) a plurality of noise levels.
For example, the plurality of FGATIR MRI images of the FGATIR training data set can be augmented by, e.g., randomly transposing each FGATIR MRI, and/or supplementing with additive white Gaussian noise and/or Rician distributed noise. Alternatively or in addition, each of the plurality of FGATIR MRI images of the FGATIR training data set can be created from, e.g., eight independent averages reconstructed to image space, spatially co-registered using a 6 degrees-of-freedom rigid-body transform, and averaged together.
According to further exemplary embodiments of the present disclosure, methods, systems and/or a non-transitory computer-accessible medium having stored thereon computer-executable instructions can be provided for a direct visualization of subcortical anatomy by, e.g., receiving a fast gray matter acquisition T1 inversion recovery (FGATIR) magnetic resonance image (MRI), and applying a convolutional neural network to the FGATIR MRI. The convolutional neural network can be a power spectrum convolutional neural network. The convolutional neural network can be trained on (i) an FGATIR training data set including a plurality of FGATIR MRI images, (ii) a single known noise level, and/or (iii) a plurality of noise levels.
According to yet further exemplary embodiments of the present disclosure, methods, systems and/or a non-transitory computer-accessible medium having stored thereon computer-executable instructions can be provided, in which the plurality of FGATIR MRI images of the FGATIR training data set can be augmented by, e.g., randomly transposing each FGATIR MRI, and supplementing with additive white Gaussian noise and/or Rician distributed noise.
For example, each of the plurality of FGATIR MRI images of the FGATIR training data set can be created from eight independent averages reconstructed to image space, spatially co-registered using a 6 degrees-of-freedom rigid-body transform, and averaged together.
These and other objects, features and advantages of the exemplary embodiments of the present disclosure will become apparent upon reading the following detailed description of the exemplary embodiments of the present disclosure, when taken in conjunction with the accompanying claims.
Further objects, features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying Figures showing illustrative embodiments of the present disclosure, in which:
Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments and is not limited by the particular embodiments illustrated in the figures and the appended claims.
The following description of exemplary embodiments provides non-limiting representative examples referencing numerals to particularly describe features and teachings of different aspects of the present disclosure. The exemplary embodiments described should be recognized as capable of implementation separately, or in combination, with other exemplary embodiments from the description of the exemplary embodiments. A person of ordinary skill in the art reviewing the description of the exemplary embodiments should be able to learn and understand the different described aspects of the present disclosure. The description of the exemplary embodiments should facilitate understanding of the exemplary embodiments of the present disclosure to such an extent that other implementations, not specifically covered but within the knowledge of a person of skill in the art having read the description of exemplary embodiments, would be understood to be consistent with an application of the exemplary embodiments of the present disclosure.
Methods, systems and/or a non-transitory computer-accessible medium according to various exemplary embodiments of the present disclosure can be provided to create and/or utilize a convolutional neural network (CNN) that would improve expert-perceived image quality from clinically-feasible FGATIR image acquisitions. Such exemplary methods, systems and/or a non-transitory computer-accessible medium can obtain index standard, high signal-to-noise FGATIR data from volunteers (8 signal averages requiring ˜2 hrs. of scanning using 3T MRI). Such exemplary data can be used to train a 2-channel CNN to denoise single-average FGATIR images (e.g., 12 min acquisition time) using novel power spectrum (PS) regularization to reduce over-smoothing. Using such exemplary methods, systems and/or a non-transitory computer-accessible medium, it is possible to evaluate optimal power spectrum regularization both quantitatively and via rater assessment and then compare the best performing PS-regularized CNN to alternative state-of-the-art denoising methods using both quantitative analysis and rater assessment. This exemplary comparison can be based on models derived from training with a single known noise level (i.e. our original source MRI data) or simulated randomly distributed input image noise levels.
Using exemplary methods, systems and/or a non-transitory computer-accessible medium according to exemplary embodiments of the present disclosure, a HIPAA-compliant, IRB-approved study can be utilized, which can acquire high-quality, index standard FGATIR data by acquiring 8 averages from 12 individuals without neurological disease (mean age 31.4±4.3 years, 8 male). FGATIR sequence parameters can include TR/TE/TI=3000/2.11/410 ms, a nonselective 180-degree inversion pulse, FA=6°, 288×288 matrix, 230-mm square field-of-view, 192 0.8-mm sagittal slices, bandwidth=455 Hz/pixel, and time=12 min 9 sec per average. Subjects can be scanned in 2 sessions (4 individual averages each session) separated by a 15-minute break on the same day.
In one example, for each subject, the 8 averages can be obtained independently, reconstructed to image space, spatially co-registered using a 6 degrees-of-freedom rigid-body transform with FSL-FLIRT (https://fsl.fmrib.ox.ac.uk/), and then averaged together. While there is a signal-to-noise penalty for not combining multiple averages in k-space prior to the Fourier image transform, this approach (e.g., 8 separate 12 min 9 sec scans versus a continuous scan time of 97 min 12 sec) may avoid image degradation from subtle head motion even in cooperative, tolerant and experienced volunteers, and provides individual averages from the same individual for the CNN training (see below). The clean images can be transformed back into each original coordinate space, resulting in 96 total high-resolution 3D noisy/clean training pairs (12 subjects, 8 single-average images each paired with their respective 8-average dataset). Complex-valued raw data may also be retained to evaluate the performance of denoising in an even lower noise regime and the effect of denoising magnitude and phase images separately.
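Merely as an illustrative, non-limiting sketch (and not necessarily the exact pipeline used), the co-registration and averaging described above could be scripted as follows; the file names and output paths are hypothetical placeholders, and FSL's flirt command is assumed to be available on the system path.

```python
# Hedged sketch: co-register eight single-average FGATIR volumes to the first
# average using a rigid-body (6 degrees-of-freedom) FSL-FLIRT transform, then
# average them to form the high-SNR "clean" training target.
# File names (avg1.nii.gz ... avg8.nii.gz) are hypothetical placeholders.
import subprocess
import nibabel as nib
import numpy as np

reference = "avg1.nii.gz"
moving = [f"avg{i}.nii.gz" for i in range(2, 9)]
registered = [reference]

for src in moving:
    out = src.replace(".nii.gz", "_reg.nii.gz")
    subprocess.run(
        ["flirt", "-in", src, "-ref", reference, "-out", out,
         "-omat", out.replace(".nii.gz", ".mat"), "-dof", "6"],
        check=True,
    )
    registered.append(out)

# Average the co-registered magnitude images.
volumes = [nib.load(f).get_fdata() for f in registered]
clean = np.mean(volumes, axis=0)
nib.save(nib.Nifti1Image(clean, nib.load(reference).affine), "clean_8avg.nii.gz")
```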
Data augmentation can be used to improve the generalization of the learned network.
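As one hedged illustration (not the exact augmentation code of the exemplary embodiments), random transposition of the volume axes and additive white Gaussian or Rician-distributed noise, as described elsewhere herein, could be applied to a clean training volume as follows; the noise-level range is an assumption chosen for the example.

```python
# Hedged sketch of the described augmentation: random transposition plus
# additive white Gaussian or Rician-distributed noise.
import numpy as np

def augment(clean, sigma_range=(0.0, 0.2), rician=False, rng=None):
    rng = rng or np.random.default_rng()
    # Randomly transpose the spatial axes of the 3D volume.
    axes = tuple(rng.permutation(3))
    vol = np.transpose(clean, axes)
    sigma = rng.uniform(*sigma_range)
    if rician:
        # Rician noise: magnitude of a signal with Gaussian noise added to
        # both the real and imaginary channels.
        real = vol + rng.normal(0.0, sigma, vol.shape)
        imag = rng.normal(0.0, sigma, vol.shape)
        noisy = np.sqrt(real ** 2 + imag ** 2)
    else:
        noisy = vol + rng.normal(0.0, sigma, vol.shape)
    return noisy, vol, sigma
```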
As illustrated in
Given a training set D = {(y_i, x_i)}, i = 1, . . . , n, where y_i and x_i denote the ith training pair of noisy and clean images and n represents the number of training images, the goal can be to train a parametric approximation to the posterior of the latent noise field in the input data. For a noisy image y, its training pair x can be a simulated clean image obtained by registering and averaging 8 consecutively acquired FGATIR datasets; thus, it is not necessarily the exact latent image representation. For this reason, embodiments may also include "clean" data with simulated noise in the training set. Exemplary embodiments may evaluate how noise-level-specific and blind noise models perform at reducing Gaussian or Rician distributed noise in FGATIR MRI data.
Based on results observed in FFDNet (see, e.g., Zhang, Zuo et al. 2018), embodiments may take a tunable noise level map M as a second input channel to make the denoising model flexible to varying noise levels. Embodiments may also use bias-free batch normalization layers rather than traditional batch-norm layers; Mohan and colleagues demonstrated that removing bias terms in batch-norm layers can improve a network's generalizability to noise without affecting output image quality (Mohan, Kadkhodaie et al. 2019).
The exemplary architecture according to the exemplary embodiments of the present disclosure can include, e.g., a discriminative feed-forward convolutional network with two input channels: 1) a patch of noisy data, and 2) a pixel-wise noise map. The noise map can be or include an image equal in size to the input dataset, where all elements can be set equal to the known added noise level at a given voxel. The first layer can perform convolution plus a rectified linear unit (ReLU) to generate 64 (3×3×3) feature maps, followed by 20 layers of convolution (conv) + bias-free batch normalization (Mohan, Kadkhodaie et al. 2019) + ReLU, all with filters of size 3×3×3×64. The final layer can include another 3×3×3 conv + ReLU with 64 input channels and 1 output channel. Zero-padding can be employed to keep the size of the feature maps unchanged after each convolution. Embodiments may adopt a residual network formulation that eases training and delivers better performance (Zhang, Zuo et al. 2018). Using the exemplary embodiments of the present disclosure, denoising can be performed equally well without the use of a residual formulation by increasing the model complexity; however, residual learning can be better suited for power spectrum-based regularization, since power spectra are computed directly on residuals (see below). During an exemplary evaluation, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can estimate a noise map using the standard deviation of the background MRI signal.
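A minimal PyTorch sketch of the described two-channel, bias-free residual network is provided below for illustration only. The layer counts, 3×3×3 kernels and 64 feature maps follow the description above; the bias-free normalization layer is one possible interpretation of (Mohan, Kadkhodaie et al. 2019), and the terminal activation is omitted in this sketch so that the predicted residual can take signed values.

```python
# Hedged sketch of the described two-channel, bias-free residual denoising CNN.
import torch
import torch.nn as nn

class BiasFreeBatchNorm3d(nn.Module):
    """Normalization with a learnable per-channel scale and no additive terms."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.scale = nn.Parameter(torch.ones(1, num_features, 1, 1, 1))

    def forward(self, x):
        std = x.std(dim=(0, 2, 3, 4), keepdim=True)
        return self.scale * x / (std + self.eps)

class ResidualDenoiser(nn.Module):
    def __init__(self, depth=20, features=64):
        super().__init__()
        layers = [  # input: noisy patch + pixel-wise noise map (2 channels)
            nn.Conv3d(2, features, 3, padding=1, bias=False),
            nn.ReLU(inplace=True),
        ]
        for _ in range(depth):
            layers += [
                nn.Conv3d(features, features, 3, padding=1, bias=False),
                BiasFreeBatchNorm3d(features),
                nn.ReLU(inplace=True),
            ]
        # Final layer maps the 64 feature channels to a single-channel residual.
        layers.append(nn.Conv3d(features, 1, 3, padding=1, bias=False))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy, noise_map):
        residual = self.body(torch.cat([noisy, noise_map], dim=1))
        return residual  # residual learning: denoised = noisy - residual
```

In use, the denoised image can be obtained as the noisy input minus the predicted residual, e.g., denoised = noisy - model(noisy, noise_map).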
The choice of the regularizer may have an important effect on the quality of the restored image. Equally important can be the ability to efficiently compute the minimum of the overall objective function. Classic residual image denoising under additive white Gaussian (AWG) noise amounts to a loss of the form:
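The equation referenced as Eq. 1 is not reproduced in this text. A plausible reconstruction, based solely on the description that follows (predicted residual R from noisy input y_i and noise map M_i, true noise map ε_i, network parameters θ, and N training samples), can be written as:

$$\mathcal{L}_{\mathrm{MSE}}(\theta) \;=\; \frac{1}{2N}\sum_{i=1}^{N}\bigl\lVert \mathcal{R}(y_i, M_i;\,\theta) - \epsilon_i \bigr\rVert_2^2 \qquad \text{(Eq. 1, reconstructed)}$$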
Here, N refers to the total number of training samples, and i indexes the training samples. Training using Eq. 1 can minimize the mean squared error (MSE) between a predicted residual R (defined as the difference between the denoised and non-denoised images) and the true noise map ε_i, where θ denotes the training parameters of the network. Since this network is fully convolutional, it inherits the local connectivity property that an output pixel can be determined by the local noisy input and local noise level. Hence, the trained network naturally may handle spatially-variant noise by specifying a non-uniform noise level map, which is of particular importance for MRI data, where the signal-to-noise ratio (SNR) varies spatially over an image based on both the tissue MR properties measured by the specific MRI sequence and the coil sensitivity.
To prevent the network from over-smoothing, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can utilize a regularization penalty on the loss function. For this over-smoothing penalty, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can utilize the power spectrum to encourage the output noise map to be minimally correlated. For independently distributed noise, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can expect that a normalized residual (a residual normalized by the local noise level σ) will have a power spectrum of 1 at all frequencies. The power spectrum may take the form:
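The power spectrum expression itself is not reproduced in this text. A plausible reconstruction, based on the description that follows (Fourier transform F, residual R normalized by the local noise level σ, and A voxels in the transform), is:

$$\mathrm{PS}(R) \;=\; \frac{1}{A}\,\Bigl\lvert \mathcal{F}\!\Bigl(\frac{R}{\sigma}\Bigr) \Bigr\rvert^{2} \qquad \text{(Eq. 2, reconstructed)}$$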
Here, F denotes the Fourier transform over the image dimensions, e.g., in 2 dimensions (performed slice by slice over a 3D patch) or in 3 dimensions, and A is the total number of voxels in the Fourier transform (e.g., the cross-sectional area of the input 2-dimensional patch, or the patch volume for a 3-dimensional transform). The complete unconstrained optimization problem can be formulated as:
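The full objective is likewise not reproduced in this text. A plausible reconstruction consistent with the surrounding description (a σ-normalized MSE data-fidelity term plus a power spectrum penalty weighted by the regularization parameter λ) is:

$$\min_{\theta}\;\frac{1}{2N}\sum_{i=1}^{N}\frac{\bigl\lVert \mathcal{R}(y_i, M_i;\theta) - \epsilon_i \bigr\rVert_2^2}{\sigma_i^{2}} \;+\; \lambda\sum_{i=1}^{N}\bigl\lVert \mathrm{PS}\bigl(\mathcal{R}(y_i, M_i;\theta)\bigr) - 1 \bigr\rVert_2^{2} \qquad \text{(Eq. 3, reconstructed)}$$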
The MSE (mean squared error) term can be normalized by σ² so that the expression is dimensionless and the two terms scale similarly with image size. Power spectrum regularization can be difficult to train and prone to exploding gradients. Embodiments may apply a filter with Gaussian weights to the power spectrum maps to improve training performance and better allow the optimizer to converge to a minimum for both the MSE and PS terms.
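As an illustrative sketch only (assuming slice-by-slice 2D transforms and not reproducing the exact loss used), the σ-normalized MSE with the power-spectrum penalty and Gaussian smoothing of the power-spectrum maps could be implemented in PyTorch as follows:

```python
# Hedged sketch of a power-spectrum-regularized loss: a sigma-normalized MSE
# between the predicted residual and the true noise, plus lambda times the
# squared deviation of the (Gaussian-smoothed) normalized-residual power
# spectrum from unity. Tensors are assumed shaped (B, 1, D, H, W) and the
# noise map noise_sigma is assumed strictly positive.
import torch
import torch.nn.functional as F

def ps_regularized_loss(pred_residual, true_noise, noise_sigma, lam=1.5, smooth_sigma=1.0):
    norm_res = pred_residual / noise_sigma
    norm_true = true_noise / noise_sigma

    # Dimensionless, sigma-normalized MSE data-fidelity term.
    mse = F.mse_loss(norm_res, norm_true)

    # Slice-by-slice 2D power spectrum of the normalized residual,
    # scaled by the number of voxels in each transform.
    b, c, d, h, w = norm_res.shape
    spec = torch.fft.fft2(norm_res, dim=(-2, -1))
    power = spec.abs() ** 2 / (h * w)

    # Gaussian smoothing of the power-spectrum maps (stabilizes training).
    radius = 3
    x = torch.arange(-radius, radius + 1, dtype=power.dtype, device=power.device)
    g = torch.exp(-0.5 * (x / smooth_sigma) ** 2)
    g = (g / g.sum()).view(1, 1, 1, -1)
    flat = power.reshape(b * c * d, 1, h, w)
    flat = F.conv2d(flat, g, padding=(0, radius))                    # along width
    flat = F.conv2d(flat, g.transpose(-1, -2), padding=(radius, 0))  # along height
    power = flat.reshape(b, c, d, h, w)

    # Penalize deviation of the power spectrum from unit energy at all frequencies.
    ps_penalty = ((power - 1.0) ** 2).mean()
    return mse + lam * ps_penalty
```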
The exemplary architecture can be trained under several conditions. To measure how regularization changes the results compared to MSE loss alone, the network was trained without regularization at single noise levels of AWGN for σ=0.05, 0.1, and 0.2. The same exemplary model was trained on blind noise in the range σ∈[0, 0.2], and the regularized PScnn was trained on blind noise in the same regime. To improve the generalizability or external validity of the produced CNN models (e.g., to data with a different SNR than the original data because of a different scanner, coil or image resolution), the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure also created data with random spatial noise by combining the 8-average FGATIR data with different amounts of simulated noise. This simulated noise was generated either with a single real channel, or as the magnitude of a real and an imaginary channel.
The exemplary network can be optimized using Adaptive Moment Estimation (ADAM) (Kingma and Ba 2014). The network can be trained for a total of 20 epochs, and the learning rate can be set to 10⁻³ and decayed to 10⁻⁴ after 10 epochs. The minibatch size can be set to 64 training examples. All models can be trained using PyTorch on a Tesla V100 GPU (Nvidia; Santa Clara, Calif.). Training time for each model may take approximately 10 hours.
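Merely as a non-limiting sketch of the stated training configuration, and referring to the model and loss sketches above, the training loop could look as follows; `train_loader` is an assumed DataLoader yielding (noisy patch, noise map, true noise) triples in minibatches of 64.

```python
# Hedged sketch of the stated training configuration: ADAM, 20 epochs,
# learning rate 1e-3 decayed to 1e-4 after 10 epochs.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ResidualDenoiser().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10], gamma=0.1)

for epoch in range(20):
    for noisy, noise_map, true_noise in train_loader:  # assumed DataLoader
        noisy = noisy.to(device)
        noise_map = noise_map.to(device)
        true_noise = true_noise.to(device)
        residual = model(noisy, noise_map)
        loss = ps_regularized_loss(residual, true_noise, noise_map, lam=1.5)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```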
The degree of regularization (λ) can determine how much denoising is performed by the network. It can be important to choose a value of λ that is large enough that the network learns to stop minimizing MSE before it begins to over-smooth, but not so large that the network does not perform any denoising at all. Exemplary embodiments generated CNN models using λ=0, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 3.0, and 5.0 with a single noise value (σ=0.04), and then computed the residual power spectra, peak SNR (pSNR), Structural Similarity Index Metric (SSIM), MSE and S3 sharpness for each value. S3 sharpness (see, e.g., Vu and Chandler 2009) is a reference-free imaging metric commonly used to assess how sharp an image is (or its inverse, blurring). In this framework, λ=0 is an unregularized network trained on MSE loss alone, which can be equivalent to using a denoising convolutional neural network (DnCNN) architecture trained on FGATIR images. Exemplary embodiments therefore may refer to λ=0 models simply as DnCNN.
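As one hedged illustration of the quantitative comparison (using scikit-image implementations of pSNR and SSIM; the S3 sharpness metric of Vu and Chandler 2009 has no standard library implementation and is omitted here), the per-image metrics could be computed as follows, with `clean` and `denoised` assumed to be 3D NumPy arrays:

```python
# Hedged sketch of per-image quality metrics for each regularization level.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quantitative_metrics(clean, denoised):
    data_range = clean.max() - clean.min()
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)
    ssim = structural_similarity(clean, denoised, data_range=data_range)
    mse = float(np.mean((clean - denoised) ** 2))
    return {"pSNR": psnr, "SSIM": ssim, "MSE": mse}
```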
Based on the quantitative results of this broad range of regularization, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure then utilized expert raters to evaluate an axial FGATIR denoised image of the midbrain at the level of the red nucleus from an individual subject, generated using λ=0, 1.0, 1.5, 2.0, 3.0, and 5.0, along with models trained on blind noise for λ=1 and λ=0. Each panel can include a 2×4 arrangement of the above 8 images in random order, and there were 12 total panels. The expert raters were two board-certified neuroradiologists familiar with FGATIR contrast and subcortical anatomy, each with more than 5 years of clinical practice. The two raters were blinded to the source of the images and evaluated the images independently of each other. For each validation subject, each rater ranked the images from best to worst. Overall scores were computed using, e.g.:
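The scoring equation is not reproduced in this text. A plausible reconstruction consistent with the description that follows is a cumulative rank-based score for each model m, summed over raters and validation subjects (an assumption made for illustration):

$$\mathrm{Score}(m) \;=\; \sum_{\text{raters}}\;\sum_{\text{subjects}} C_{j}(m),$$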
where C_j represents the rater's ranking (5 being the best quality, 1 the worst quality). The cumulatively highest-rated level of regularization over all subjects and raters was chosen to be compared with other denoisers (see the next section). Inter-rater variability was measured using the intra-class correlation coefficient (ICC), which measures the degree of consistency among different paired measurements. The pooled rankings from both raters for each training method were compared using non-parametric Kruskal-Wallis tests, followed by a post-hoc Dunn's test in order to confirm which methods demonstrated significantly improved rankings compared to the rest.
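As an illustrative sketch only (assuming the rankings are collected in a pandas DataFrame with columns "method" and "rank", and that the scikit-posthocs package is available for Dunn's test), the ranking comparison could be performed as follows:

```python
# Hedged sketch of the ranking analysis: Kruskal-Wallis on pooled rankings,
# followed by a post-hoc Dunn's test with multiple-comparison correction.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

def compare_regularization_levels(rankings: pd.DataFrame):
    groups = [g["rank"].to_numpy() for _, g in rankings.groupby("method")]
    h_stat, p_value = kruskal(*groups)
    dunn = sp.posthoc_dunn(rankings, val_col="rank", group_col="method",
                           p_adjust="holm")
    return h_stat, p_value, dunn
```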
Exemplary Comparison of Power Spectrum CNN (PScnn) with Other Denoisers
Based on the exemplary results described herein, where raters ranked the best performing degree of regularization, embodiments compared the denoising convolutional neural networks DnCNN-S (λ=0), DnCNN-B (λ=0 with noise-blind training), PScnn-S (λ=1.5), PScnn-B (λ=1.5 with noise-blind training), bm4d (Maggioni, Katkovnik et al. 2012), and Gaussian smoothing (smoothing σ=1) evaluated at three image noise levels. These data came from the validation FGATIR dataset, such that these images were not observed by the neural network during training. To validate that the exemplary systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can operate as expected, for each subject, image slice, and denoising method, the systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure determined the power spectra of the residuals and measured their deviation from 1 at all frequencies (since power spectra can be used to measure the degree of smoothing introduced by a denoiser). The systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure also measured the pSNR, SSIM, and S3 image sharpness for each denoised validation image.
Axial FGATIR images of the midbrain at the level of the red nucleus denoised by these methods were rated by the same 2 experts in a blinded, independent manner. Raters observed a total of 14 images per subject (168 images total) in two batches 6 weeks apart (12 images were repeated to assess intra-rater agreement). For each sample image, raters assessed contrast resolution, signal homogeneity, artificiality and overall quality. Contrast resolution was rated on a 1-4 scale of how easily raters could distinguish adjacent features (i.e., the substantia nigra from the cerebral peduncle, the central tegmental tract from the red nucleus, and the medial lemniscus from the surrounding tissue). Signal homogeneity was a measurement (1-4 scale) of the degree of voxel-to-voxel signal variability raters could detect in regions that should be homogeneous. Artificiality measured whether the raters felt an image looked computer-generated, smoothed, or otherwise altered relative to a typical MRI. Overall clinical quality was the raters' overall assessment of the performance of each technique. For the rating scales, higher scores reflected better quality. Inter-rater reliability was measured using a one-to-one ICC for absolute agreement. A nonparametric Friedman's test was used to detect statistical differences in ratings over all four factors (rating categories). A post-hoc Conover test along with family-wise error correction was used to assess individual significance for each category.
The systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure reveal that the performance of the MSE loss regularized by the residual power spectrum may depend heavily on the regularization parameter λ.
The systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can illustrate that the optimal level of regularization occurs when λ≈1.5.
Expert raters observed axial slices through the red nucleus to further determine the optimal value of λ.
In summary,
Raters found that the λ that provided the best image quality was λ=1.5, followed by λ=1 and λ=0. The images with the worst overall quality came from λ=5. Interestingly, raters gave middling scores to networks that were trained blind. This can be because blind networks tend to generalize to new noise levels through blurring, which is penalized by raters as a loss of effective image resolution. The intraclass correlation coefficient over all measurements and all evaluated denoisers between the two raters was 0.54. Statistical comparisons between pooled ratings of different regularization levels identified mainly that a large degree of regularization had a negative impact (the most heavily regularized model performed significantly worse compared to the other methods).
Exemplary Comparison of PScnn with Other Denoisers
To evaluate whether this network works as intended and generates residuals with optimal statistical properties, exemplary embodiments evaluated power spectrum curves for 6 denoisers and three noise levels. The optimal denoiser generates a denoised image with a residual that has unit energy at all frequencies. Low pass filters are known to have low energy at high frequencies due to smoothing over sharp edges. A power-spectrum with energy >1 indicates that the denoiser has added information to the image that was not present in the original (noisy) version.
Quantitative comparisons of pSNR, SSIM, and S3 sharpness across denoisers and at varying noise levels (see Table 1) demonstrated comparable performance for PScnn. PScnn showed high pSNR and SSIM scores without compromising image sharpness across all noise levels. DnCNN-S had the largest pSNR at noise levels 15 and 50, but had lower sharpness scores at the same noise levels. Bm4d notably had higher sharpness scores compared to PScnn-S but performed worse on image quality measures. PScnn-B showed considerably higher S3 image sharpness measurements (S3>0.8) across all noise levels; however, this was accompanied by the lowest image quality scores.
Owing to the inverse relationship between image sharpness and quantitative image quality measurements, embodiments further included an expert rater study to evaluate the qualitative features of each denoising method.
Both expert raters found that optimal overall image quality came from data that underwent four averages (average ratings of overall quality of 2.93 and 2.71 for each rater). PScnn was rated modestly better than DnCNN for signal contrast and overall quality (both raters gave PScnn 2.76 for overall quality and 2.0 for image contrast). Notably, DnCNN was not significantly different from a single-average FGATIR image in image contrast, owing to a large variance in ratings for this denoiser. All evaluated denoisers were shown to have significantly higher overall image quality compared to a single-average FGATIR, with both DnCNN and PScnn having p-values less than 0.01. All denoisers were found by the raters to look significantly more "artificial" compared to images that were averaged directly to boost SNR.
Raters showed a large degree of consistency and agreement when comparing the effect of denoising to that of averaging. However, rater agreement varied depending on the metric being evaluated. Intraclass correlation coefficients, showing the degree of consistency among raters, were 85% for contrast resolution, 54% for signal homogeneity, 38% for artificiality, and 71% for overall image quality.
The systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can provide two fundamental findings. A) Regularized MSE losses have the capacity to decrease blurring introduced during supervised denoising tasks. Exemplary results shown in
Further, exemplary system, method and computer accessible medium according to the exemplary embodiments of the present disclosure can be provided to increase the clinical feasibility of FGATIR data. The systems, methods and computer-accessible medium according to the exemplary embodiments of the present disclosure can accomplish this through supervised CNN denoising, and
For an exemplary optimal denoiser, embodiments may anticipate the energy of the normalized residuals to be unity at all frequencies, where energy less than 1 can indicate that the filter is removing too much information and energy greater than 1 can indicate that the filter is adding unwanted information. Regularizing the blind network using the power spectrum aids in ensuring unity power spectra. In addition, as SNR increases, the network denoises low spatial frequency information very little, since the data there is already smooth.
As shown in Table 1, pSNR values tend to be unreliable for evaluating the performance of denoising images with real-world MRI noise. This is likely because there is no true ground truth for data with real-world noise. In exemplary embodiments, the "ground truth" for real-world noise may come from data that has been registered and averaged together. Therefore, this clean image can have signal features that are derived from other noisy datasets, and likely includes some blurring from registration uncertainty. As a result, pSNR may not be the best measure of performance for real-world noise; instead,
One additional advantage of power spectrum regularization is that it does not explicitly require supervision during training. In the exemplary embodiments of the present disclosure, the MSE term provides only data fidelity. However, according to other exemplary embodiments of the present disclosure, it is possible to combine the PS-based loss with Stein's unbiased risk estimator (SURE) (Stein 1981) or MSURE (see, e.g., Ramani, Blu et al. 2008) unsupervised losses to investigate how the power spectrum impacts noise modeling without an a priori known ground-truth MRI dataset.
PScnn denoising gave a low-quality single-average FGATIR image the quality of ˜2 averages (e.g., a factor of 2 increase in SNR) according to expert neuroradiologist evaluation. PScnn performed better in the spectral domain, implying that this denoiser performs less smoothing and can be more externally valid for clinical situations. Combining this exemplary CNN with model-based approaches, switching to undersampled data (e.g., compressed sensing), or combining it with unsupervised training methods may further increase the effectiveness, efficiency and feasibility of FGATIR for clinical investigations of pathology and functional neurosurgery targeting in subcortical structures.
As shown in
Further, the exemplary processing arrangement 705 can be provided with or include input/output ports 735, which can include, for example, a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc. As shown in
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures which, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, for example, data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
The following references are hereby incorporated by reference, in their entireties:
This application relates to and claims priority from U.S. Patent Application No. 63/340,391, filed on May 10, 2022, the entire disclosure of which is incorporated herein by reference.
Number | Date | Country
---|---|---
63/340,391 | May 2022 | US

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/US2023/021725 | May 2023 | WO
Child | 18942745 | | US