The present invention relates to processing of Magnetic Resonance Velocimetry (MRV) data, including, for example, reconstruction of noisy or incomplete data. In addition, techniques described herein relate to compression and decompression of flow fields extracted from such data.
Experimental measurements of fluid flows inside or around an object often produce velocity images which contain noise. For example, magnetic resonance velocimetry (MRV) (Fukushima 1999; Mantle & Sederman 2003; Elkins & Alley 2007) can measure all three components of a time-varying velocity field, as set out in “Nuclear magnetic resonance as a tool to study flow” Annual Review of Fluid Mechanics 31, 95-123 (1999), “Dynamic MRI in chemical process and reaction engineering” Progress in Nuclear Magnetic Resonance Spectroscopy 43 (1), 3-60 (2003), and “Magnetic resonance velocimetry: applications of magnetic resonance imaging in the measurement of fluid motion” Experiments in Fluids 43 (6), 823-858 (2007). However, the measurements become increasingly noisy as the spatial resolution is increased.
To achieve an image of acceptable signal-to-noise ratio (SNR), repeated scans are often required, leading to long signal acquisition times. To address that problem, fast acquisition protocols (pulse sequences) can be used, but these may be difficult to implement and can lead to artefacts depending on the magnetic relaxation properties and the magnetic field homogeneity of the system studied. Another way to accelerate signal acquisition is by using sparse sampling techniques in conjunction with a reconstruction algorithm. The latter approach is an active field of research, commonly referred to as compressed sensing, as discussed in “Compressed sensing” IEEE Transactions on Information Theory 52 (4), 1289-1306 (2006), “Sparse MRI: The application of compressed sensing for rapid MR imaging” Magnetic Resonance in Medicine 58 (6), 1182-1195 (2007), “Phase reconstruction from velocity-encoded MRI measurements—A survey of sparsity-promoting variational approaches” Journal of Magnetic Resonance 238, 26-43 (2014), and “Joint phase reconstruction and magnitude segmentation from velocity-encoded MRI data” pp. 1-22, arXiv: 1908.05285 (2019).
Compressed sensing algorithms exploit a priori knowledge about the structure of the data, which is encoded in a regularization norm (e.g. total variation, wavelet bases), but without considering the physics of the problem.
For images depicting fluid flow, a priori knowledge can come in the form of a Navier-Stokes problem. The problem of reconstructing and segmenting a flow image can then be expressed as a generalized inverse Navier-Stokes problem whose flow domain, boundary conditions, and model parameters have to be inferred in order for the modelled velocity to approximate the measured velocity in an appropriate metric space. This approach not only produces a reconstruction that, by construction, is an accurate fluid flow inside or around the object (a solution to a Navier-Stokes problem), but also provides additional physical knowledge (e.g. pressure), which is otherwise difficult to measure.
Inverse Navier-Stokes problems have been intensively studied during the last decade, mainly enabled by the increase of available computing power. Recent applications in fluid mechanics range from the forcing inference problem (discussed in “Determining white noise forcing from Eulerian observations in the Navier-Stokes equation” Stochastics and Partial Differential Equations: Analysis and Computations 2 (2), 233-261 (2014)), to the reconstruction of scalar image velocimetry (SIV) signals (see “A space-time integral minimisation method for the reconstruction of velocity fields from measured scalar fields” Journal of Fluid Mechanics 854, 348-366 (2018); “Analytic reconstruction of a two-dimensional velocity field from an observed diffusive scalar” Journal of Fluid Mechanics 871, 755-774, arXiv: 1904.04919 (2019)) and particle image velocimetry (PIV) signals (“Data assimilation method to de-noise and de-filter particle image velocimetry data” Journal of Fluid Mechanics 877, 196-213 (2019)), and the identification of optimal sensor arrangements (“Optimal sensor placement for variational data assimilation of unsteady flows past a rotationally oscillating cylinder” Journal of Fluid Mechanics 823, 230-277 (2017); “Optimal sensor placement for artificial swimmers” Journal of Fluid Mechanics 884, arXiv: 1906.07585 (2019)). Regularization methods that can be used for model parameters are reviewed in “Inverse problems: A Bayesian perspective” Acta Numerica 19, 451-459 (2010) from a Bayesian perspective and in “Modern regularization methods for inverse problems” Acta Numerica 27, 1-111, arXiv: 1801.09922 (2018) from a variational perspective. The well-posedness of Bayesian inverse Navier-Stokes problems is addressed in “Bayesian inverse problems for functions and applications to fluid mechanics” Inverse Problems 25 (11) (2009).
Recently, “Boundary control in computational haemodynamics” Journal of Fluid Mechanics 847, 329-364 (2018) treats the reduced inverse Navier-Stokes problem of finding only the Dirichlet boundary condition for the inlet velocity that matches the modelled velocity field to MRV data for a steady 3D flow in a glass replica of the human aorta. The authors measure the model-data discrepancy using the L2-norm and introduce additional variational regularization terms for the Dirichlet boundary condition. The same formulation is extended to periodic flows in “Fourier Spectral Dynamic Data Assimilation: Interlacing CFD with 4D Flow MRI.” Tech. Rep. Research Report No. 2019-56, ETH Zurich. (2019) and in “Harmonic balance techniques in cardiovascular fluid mechanics” In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019, pp. 486-494. Cham: Springer International Publishing (2019), which uses the harmonic balance method for the temporal discretization of the Navier-Stokes problem. “Variational data assimilation for transient blood flow simulations: Cerebral aneurysms as an illustrative example” International Journal for Numerical Methods in Biomedical Engineering 35 (1), 1-27 (2019) addresses the problem of inferring both the inlet velocity (Dirichlet) boundary condition and the initial condition, for unsteady blood flows and 4D MRV data, with applications to cerebral aneurysms.
The above studies consider rigid boundaries and require a priori an accurate, and time-averaged, geometric representation of the blood vessel or other boundary containing the fluid. To find the shape of the flow domain, e.g. the blood vessel boundaries, computed tomography (CT) or magnetic resonance angiography (MRA) is often used. The acquired image is then reconstructed, segmented, and smoothed. This process not only requires substantial effort and the design of an additional experiment (e.g. CT, MRA), but it also introduces geometric uncertainties (“Computational fluid dynamics modelling in cardiovascular medicine” Heart 102 (1), 18-28 (2016); “Uncertainty quantification in coronary blood flow simulations: Impact of geometry, boundary conditions and blood viscosity” Journal of Biomechanics 49 (12), 2540-2547 (2016)), which, in turn, affect the predictive confidence of arterial wall shear stress distributions and their mappings (“Wall Shear Stress: Theoretical Considerations and Methods of Measurement” Progress in Cardiovascular Diseases 49 (5), 307-329 (2007)).
For example, “Variational data assimilation for transient blood flow simulations: Cerebral aneurysms as an illustrative example” International Journal for Numerical Methods in Biomedical Engineering 35 (1), 1-27 (2019) reports discrepancies between the modelled and the measured velocity fields near the flow boundaries, which the authors suspect are caused by geometric errors that were introduced during the segmentation process. In general, the assumption of rigid boundaries either implies that a time-averaged geometry has to be used, or that an additional experiment (e.g. CT, MRA) has to be conducted to register the moving boundaries to the flow measurements.
This application addresses the problem of simultaneous reconstruction and segmentation of velocity fields, to attempt to overcome some or all of the drawbacks of the disclosures discussed above.
The present application provides a more consistent approach to the problem by treating the blood vessel geometry as an unknown when solving a generalized inverse Navier-Stokes problem. In this way, the inverse Navier-Stokes problem simultaneously reconstructs and segments the velocity fields and can better adapt to the MRV experiment by correcting the geometric errors and improving the reconstruction.
In this application the problem of simultaneous reconstruction and segmentation of velocity fields is addressed by formulating a generalized inverse Navier-Stokes problem, whose flow domain, boundary conditions, and model parameters are all considered unknown. To regularize the problem, a Bayesian framework and Gaussian measures in Hilbert spaces are used. This further allows estimation of the posterior Gaussian distributions of the unknowns using a quasi-Newton method, and thereby to estimate the uncertainties of the unknowns by approximating their posterior covariance with the quasi-Newton method, which has not yet been addressed for this type of problem. An algorithm (or method) for the solution of this generalized inverse Navier-Stokes problem is provided. Detailed demonstrations on synthetic images of 2D steady flows and real MRV images of a steady axisymmetric flow are provided to highlight the quality of the reconstruction.
The methods described herein formulate and solve a generalized inverse Navier-Stokes problem for the joint reconstruction and segmentation of noisy flow velocity images. Because a physically consistent reconstruction can be recovered from noisier, and therefore more quickly acquired, data, this amounts to a reduction of the total scanning time. In some examples shown below, there is a reduction in scanning time by a factor of 27. At the same time, the method provides additional knowledge about the physics of the flow (e.g. pressure), and addresses the shortcomings of MRV (low spatial resolution and partial volume effect) that hinder the accurate estimation of wall shear stresses using existing methods. The images derived by the methods disclosed herein may be further post-processed in order either to reveal obscured flow patterns or to extract a quantity of interest (e.g. pressure, wall shear stress, etc.).
Disclosed herein is a computer-implemented method for reconstruction of magnetic resonance velocimetry, MRV, data, the method comprising the steps of: (a) receiving MRV data encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; and wherein the region Ω is a subset of an imaging region, I, over which the MRV data encodes information; (b) receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v; and optionally an inlet boundary condition for the fluid, gi, where an inlet portion exists; and/or an outlet boundary condition for the fluid, go, where an outlet portion exists; (c) calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary; (d) constructing an error functional of the form:
in which a first term penalises deviations of uo from the received MRV data;
a second term penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; and
a third term penalises deviations of uo from the Navier-Stokes problem; (e) determining a generalised gradient of the error functional
with respect to each of the unknown parameters in the set {x}; (f) for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, to determine a direction in which perturbing that parameter reduces the magnitude of the error functional;
and thereby determining improved values for each of the unknown parameters in the set {x}; and (g) reconstructing the MRV data by outputting the improved values from step (f) and calculating the reconstructed velocity, u, across at least the region Ω.
The MRV data may be supplied “live”, i.e. from a measurement taken just prior to the method being enacted on that data, or the method may be performed on MRV data taken at an earlier time, at a different location, etc. Receiving initial values can take many forms. For example, they may be supplied by a user (e.g. in the form of an estimate or best guess), they may be derived from the data by calculation. For example estimates for the boundary location and inlet boundary condition can be obtained from the MRV data itself. The method (sometimes referred to as the algorithm) may include features which automatically estimate the boundary location and/or inlet boundary condition from the MRV data and propose a suitable initial value, or a user may use the data to extract the initial values manually.
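As an illustration of how initial values might be estimated automatically from the data, the following sketch thresholds a hypothetical MRV magnitude image to propose an initial flow region, and reads an initial inlet profile from the measured data. The function names, the threshold fraction, and the toy data are all assumptions for illustration, not part of the method as claimed.

```python
import numpy as np

def initial_mask(magnitude, threshold_frac=0.5):
    # Propose the flow region Omega by thresholding the magnitude image
    # at an (assumed) fraction of its maximum signal.
    return magnitude > threshold_frac * magnitude.max()

def initial_inlet_profile(u_star, mask, inlet_row=0):
    # Read an initial inlet boundary condition g_i directly from the
    # measured values on the row of pixels taken to be the inlet.
    return np.where(mask[inlet_row], u_star[inlet_row], 0.0)

# Toy example: a bright vertical channel in a noisy background.
rng = np.random.default_rng(0)
mag = 0.1 * rng.random((8, 8))
mag[:, 2:6] = 1.0
mask = initial_mask(mag)
g_i = initial_inlet_profile(mag, mask)  # magnitude used as a stand-in here
```

In practice the initial guesses only need to be plausible, since the algorithm subsequently corrects them.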
“Values” is used here in the most general sense, and includes e.g. functions for boundary conditions and topographical information relating to the location and/or shape of the boundary. In general, “values” refers to a set of values defined over a domain (e.g. along a boundary, or portion of boundary, across the entire imaging area, etc.). Improved values means values which are improved relative to the initial guess. “Improved” in this sense means values for the parameters {x} which result in a lower magnitude of the error functional, relative to its magnitude calculated with the initial values for the parameters.
In some cases (e.g. 2D imaging of a tube viewed with its axis into/out of the imaging plane, modelling of a complete 3D system), there may not be an inlet or an outlet, and it is in this sense that the inclusion of inlet and/or outlet boundary conditions is optional.
The above procedure uses physics considerations (via the Navier-Stokes equations) and statistical likelihood to reduce the space of suitable reconstructions. In general the direction of change of the unknown parameters which causes a reduction in the error functional is calculated independently for each parameter, and the collective result of changing all the parameters in their respective directions is assessed by considering whether the collective result also reduces the magnitude of the error functional. Where a full global inverse Hessian reconstruction is possible to calculate, the directions of change of each parameter could be chosen to guarantee that the collective effect of all changes in parameters is to reduce the magnitude of the error functional.
It is an interesting and unexpected effect that starting with less information (that is by also treating the boundary location as unknown and allowing this to be optimised along with the other unknown parameters) allows for improved image reconstruction. This is important because MRV data is time consuming to obtain and obtaining low noise data is yet more time consuming as many scans must be made and averaged. By providing an efficient and reliable reconstruction algorithm, the overall time to acquire low-noise MRV data can be reduced substantially.
Optionally, the MRV data is raw data in the form of a series of incomplete frequency space data sets, sj; wherein: the method includes the step of calculating the measured velocity field, u*, by performing an inverse Fourier transform on each sj, F−1 sj, and deriving u* from the arguments of F−1 sj; the construction of the error functional modifies the penalty term to:
in which j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur,
represents a Euclidean norm in the space of complex numbers (ℂ) representing a multivariate normal distribution, and in which σ is the variance of that distribution,
is a sparse sampling operator mapping reconstructed data back to the sparse raw data space,
is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is the base of the natural logarithm, i=√(−1), and φj is the reconstructed phase of the jth data set; the error functional,
, has the form:
in which a further term is a segmentation term for partitioning the raw data, and in which the additional term
has the form:
where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cu
This allows a further reduction in time to acquire low noise reliable MRV images, since the MRV data to be processed by the method need not even be fully sampled in k-space because the method can be extended to reconstruct the missing k-space information as well as to derive physically and statistically realistic parameters for the unknown parameters.
Raw data is taken in four scans at different magnetic field gradients per spatial dimension, which is required to extract the phase information (which in turn allows extraction of velocimetry information). Raw data in this example therefore comprises four (incomplete) scans in frequency space per spatial dimension being imaged, i.e. 4, 8, or 12 scans for 1D, 2D, or 3D imaging.
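A minimal sketch of how velocity can be derived from the arguments (phases) of the inverse-Fourier-transformed scans is given below. For brevity it uses two scans with opposite velocity encodings for a single component, rather than the four scans described above, and the encoding parameter venc (the velocity that maps to a phase of π) is an assumption for illustration.

```python
import numpy as np

# Phase-contrast velocimetry sketch: the velocity component is
# proportional to the phase difference between two scans acquired with
# opposite bipolar gradients.
venc = 2.0                                   # assumed encoding velocity, m/s
u_true = np.linspace(-1.0, 1.0, 16)          # toy 1D velocity profile
phase = np.pi * u_true / venc

# k-space of the two scans; the magnitude rho is constant here.
s_plus = np.fft.fft(np.exp(+0.5j * phase))
s_minus = np.fft.fft(np.exp(-0.5j * phase))

# Reconstruct: inverse FFT each scan, then take the phase difference
# (the argument of the voxel-wise product with the conjugate image).
dphi = np.angle(np.fft.ifft(s_plus) * np.conj(np.fft.ifft(s_minus)))
u_star = venc * dphi / np.pi
```

Note that the phase difference must stay within (−π, π] for this to be unambiguous, which is why venc is chosen larger than the peak velocity.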
In some cases, this k-space reconstruction may be thought of as a computer-implemented method for reconstruction of magnetic resonance velocimetry, MRV, data, the method comprising the steps of: (a) receiving raw MRV data in the form of a series of incomplete frequency space data sets, sj, encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; and wherein the region Ω is a subset of an imaging region, I, over which the MRV data encodes information; (b) receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v; and optionally an inlet boundary condition for the fluid, gi, where an inlet portion exists; and/or an outlet boundary condition for the fluid, go, where an outlet portion exists; (c) calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary, and calculating the measured velocity field, u*, by performing an inverse Fourier transform on each sj, F−1 sj, and deriving u* from the arguments of F−1 sj; (d) constructing an error functional of the form:
in which a first term penalises deviations of uo from the received MRV data;
a second term penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x};
a third term penalises deviations of uo from the Navier-Stokes problem;
a fourth term is a segmentation term for partitioning the raw data; and in which
a final term penalises deviations of uo from the calculated measured velocity field, u*; (e) determining a generalised gradient of the error functional
with respect to each of the unknown parameters in the set {x}; (f) for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, to determine a direction in which perturbing that parameter reduces the magnitude of the error functional; and thereby determining improved values for each of the unknown parameters in the set {x}; and (g) reconstructing the MRV data by outputting the improved values from step (f), calculating the reconstructed velocity, u, across at least the region Ω, and reconstructing a phase and magnitude of the received raw data from the reconstructed velocity field, u.
That is, in this example u* is not an input, but the process begins at a step prior to u* being derived from the raw data (this being derived by inverse Fourier transform).
Optionally, the additional term has the form:
where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cu
Optionally, the penalty term has the form:
where n is the number of samples (and is less than the dimension of frequency space in which samples are taken, which defines the “sparse sampling” condition), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur,
represents a Euclidean norm in the space of complex numbers representing a multivariate normal distribution, and in which σ is the variance of that distribution, is a sparse sampling operator mapping reconstructed data back to the sparse raw data space,
is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is the base of the natural logarithm, i=√(−1), and φj is the reconstructed phase of the jth data set.
Either of the above methods may incorporate any of the following additional refinements.
Optionally, the Navier-Stokes problem constrains u such that u satisfies:
wherein p is the reduced pressure, p0/ρ, ρ is a density of the fluid, p0 is the pressure, n is a unit normal vector on ∂Ω, and ∂n≡n·∇ is a normal derivative on ∂Ω.
This format ensures that the physics of fluid flows is captured and the possible results space is thereby constrained to physically realistic solutions.
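As a small self-contained check of the kind of physical constraint this imposes, the sketch below verifies numerically that a parabolic (Poiseuille) profile satisfies the steady, fully developed momentum balance ν u″(y) = dp/dx with no-slip walls. The channel geometry and parameter values are assumptions for illustration.

```python
import numpy as np

# Assumed setup: steady, fully developed channel flow between walls at
# y = -h and y = +h, driven by a constant pressure gradient dp/dx.
nu, dpdx, h = 0.01, -1.0, 1.0
y = np.linspace(-h, h, 201)
u = dpdx / (2.0 * nu) * (y**2 - h**2)   # Poiseuille (parabolic) profile

# Finite-difference residual of nu * u'' - dp/dx on interior points:
# it vanishes, so the profile satisfies the reduced Navier-Stokes balance.
d2u = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (y[1] - y[0])**2
residual = nu * d2u - dpdx
```

The same idea, with the full nonlinear equations, is what constrains the reconstruction to physically realistic flows.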
Optionally the penalty term includes Nitsche penalty terms for weakly imposing a Dirichlet boundary condition on at least a portion of the boundary, ∂Ω=Γ∪Γi∪Γo.
This allows the fluid flow to be modelled accurately by implementing appropriate boundary conditions at specific parts of the boundary.
Optionally, the error functional is minimised using Euler-Lagrange equations, constrained to spaces of admissible perturbations in u and p, denoted u′ and p′ respectively, where p is the reduced pressure, p0/ρ, where ρ is a density of the fluid and p0 is the pressure.
Euler-Lagrange equations are a powerful method, suitable for identifying the directions in which input parameters should be perturbed in order to reduce the error functional.
The Euler-Lagrange equations may be formed by considering the first variations of the penalty terms with respect to {x}, u and p.
This allows the effect of perturbations in the parameters which affect those penalty terms to be considered. Since the penalty term which penalises statistically unlikely parameter values does not depend on u and p, there is no variation of that term with respect to these parameters.
First variations of these penalty terms may be used to provide an adjoint Navier-Stokes problem:
in which v is an adjoint velocity and q an adjoint pressure, (∇v)† represents the adjoint of (∇v), and Du is a generalised gradient of the error functional with respect to u.
The adjoint problem is a useful step in elucidating the effect of changes in the input parameters on the error functional. By solving the adjoint equation the generalised gradients can be efficiently calculated.
Optionally, the calculation of the penalty term is based on the norm of the difference between the MRV velocity data, u*, and the reconstructed velocity field, u, mapped onto the space of the MRV velocity data, the norm being based on a covariance operator. This is a simple yet effective means by which the reconstructed velocity data can be constrained to be consistent with the measured MRV data.
The MRV data may be assumed to have a Gaussian noise variance and the penalty term is calculated as:
in which ∥·∥C represents a norm in a covariance-weighted L2 space having covariance operator C, S projects from the model space to the data space, and the covariance operator Cu* introduces the Gaussian noise variance into the penalty term. This format provides a clear and well-behaved model for noise inherent in the MRV data.
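For i.i.d. Gaussian noise of variance σ² on each voxel, the covariance operator reduces to σ² times the identity, and the penalty term can be evaluated as in the following sketch (the function name and toy values are illustrative assumptions):

```python
import numpy as np

def data_misfit(Su, u_star, sigma):
    # Covariance-weighted data-fidelity penalty with C_u* = sigma**2 * I:
    # 0.5 * || S u - u* ||^2_{C_u*} = 0.5 * sum((S u - u*)**2) / sigma**2
    r = Su - u_star
    return 0.5 * np.sum(r * r) / sigma**2

u_star = np.array([1.0, 2.0, 3.0])       # measured velocities (toy data)
Su = np.array([1.0, 2.5, 3.0])           # model velocity projected to data space
val = data_misfit(Su, u_star, sigma=0.5)  # 0.5 * 0.25 / 0.25 = 0.5
```

A non-diagonal covariance operator generalises this to correlated noise, replacing the division by σ² with the action of the inverse covariance.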
Optionally the calculation of the penalty term includes assuming that some or all of the unknown parameters follow a Gaussian distribution of likelihood, and improbable realisations are penalised by forming the penalty term
as a linear sum of an error functional term for each unknown parameter in the set {x} which is assumed to have a Gaussian distribution, of the form:
in which x represents any of the unknown parameters which is assumed to have a Gaussian distribution,
Once more, the Gaussian format is one which results in clear and well-behaved functions for modelling and provides a firm statistical basis for constraining the space of possible reconstructions to statistically plausible ones.
Optionally the covariance operator for the boundary condition at the inlet, Cgi, has the form:
in which σg is a characteristic length scale of an exponential covariance function, I is the identity matrix and {tilde over (Δ)}i is the Laplacian operator extended to incorporate the boundary condition that gi=0 on Γi; and/or the covariance operator for the boundary condition at the outlet, Cgo, has the form:
in which σg is a characteristic length scale of an exponential covariance function, I is the identity matrix and {tilde over (Δ)}o is the Laplacian operator extended to incorporate the boundary condition that ∂ngo=0 on Γo.
These forms for covariance operators are suitable for encoding the boundary conditions into the respective penalty terms and which in turn ensure that the space of possible reconstructions is constrained to appropriate outputs.
Optionally the shape of the boundary is perturbed under a speed field ζn, in which ζ represents the magnitude of the speed at which points on the boundary are perturbed and n is the unit vector normal to each point on the boundary. This provides a convenient formalism to describe the location of the boundary and to implement changes in the location of the boundary. As an example, the perturbations of the shape of the boundary may be implemented using Reynolds' transport theorem.
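Under a signed-distance representation of the boundary, perturbing the boundary with normal speed ζ corresponds to a level-set update of the form φ ← φ − τ ζ|∇φ|. The following sketch applies one explicit step of that update to a circular boundary; the grid, step size, and speed are assumptions for illustration.

```python
import numpy as np

def perturb_boundary(phi, zeta, dt, dx=1.0):
    # One explicit Euler step of d(phi)/dt + zeta * |grad phi| = 0,
    # which moves the zero level set of phi with normal speed zeta.
    gx, gy = np.gradient(phi, dx)
    return phi - dt * zeta * np.sqrt(gx**2 + gy**2)

# Circle of radius 3 (phi negative inside the flow region). A negative
# zeta is motion against the outward normal, so the region shrinks.
x, y = np.meshgrid(np.arange(-8, 9), np.arange(-8, 9), indexing="ij")
phi = np.sqrt(x**2 + y**2) - 3.0
phi_new = perturb_boundary(phi, zeta=-0.5, dt=1.0)
```

Since |∇φ| = 1 for a signed distance function, one step with ζ = −0.5 moves the zero level set inward by half a grid spacing.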
Optionally, any boundary which is an inlet or an outlet is not perturbed. This may be a useful feature to reduce the complexity of the perturbations in cases where, for example, the inlet and/or the outlet aligns with the edge of the imaging region, so that the location of this part of the boundary is defined as being located in the same place (at the edge of the imaging area).
Optionally the perturbation of the shape of the boundary is used to derive a shape derivatives problem defined over the permissible perturbations of u and p, denoted u′ and p′, respectively, the shape derivatives problem having the form:
The solutions to this problem provide a convenient framework for assessing the effect of the boundary shape perturbations on the other parameters.
Optionally the boundary is represented implicitly by signed distance functions, φ± defined across the imaging region, I, in which the boundary is represented by parts of φ± which are equal to zero, and the magnitude of φ± at a given point represents a shortest distance between that point and the boundary.
For example, the signed distance function may be negative inside the boundary (region Ω), and positive outside that region. The other way around (positive inside, negative outside) is of course possible if appropriate corresponding changes of sign are included in later expressions derived from the signed distance function. This implicit representation of the boundary location provides a simple yet powerful method of deforming (perturbing) the boundary and determining the effect of the perturbations on the other parameters.
The normal vector, n, may be propagated across a region corresponding to the full set of MRV data to form a normal vector field, n, which runs perpendicular to contours of equal magnitude of φ±. Similarly, the shape gradient may be extended away from the boundary to form an extended shape gradient, {dot over (ζ)}, covering the full set of MRV data, wherein the extension of the shape gradient comprises propagating the shape gradient along the normal vector field lines according to a convection-diffusion equation.
This propagation extends the domain of the shape gradient and the normal vector from their definitions only on the boundary to the entire imaging region. This in turn provides a general framework for understanding the effect of the boundary being located in any part of the imaging region.
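A minimal sketch of the normal vector field over the whole imaging region, taken as the normalised gradient of the signed distance function (the circular φ± below is an assumed example):

```python
import numpy as np

# Signed distance function of a circle of radius 3 on a toy grid;
# its normalised gradient gives a unit normal field n over the whole
# imaging region, perpendicular to the level sets of phi.
x, y = np.meshgrid(np.arange(-8.0, 9.0), np.arange(-8.0, 9.0), indexing="ij")
phi = np.sqrt(x**2 + y**2) - 3.0

gx, gy = np.gradient(phi)
norm = np.sqrt(gx**2 + gy**2)
norm[norm == 0.0] = 1.0          # avoid division by zero at the centre
nx, ny = gx / norm, gy / norm    # unit normal field across the image
```

For this example the field points radially outward everywhere, matching the outward normal of every concentric level set of φ.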
Optionally the direction of perturbation of each parameter is in a direction of steepest change in the error functional. By perturbing the parameters in their direction of steepest change, the largest effect will be seen on the magnitude of the error functional. In other words, this may maximise the improvement (maximise the reduction in the error functional) seen in making a change to a particular value of a parameter. Note that in some cases, the appropriate direction of change may not be exactly aligned with the direction of steepest change in the error functional, since considering the effect of each parameter on the error functional individually ignores the combined effect of changing multiple parameters at once.
Various approaches can be used to determine the “direction of steepest change”, for example, the “steepest descent” method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, conjugate gradient methods, and so forth.
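The following sketch contrasts two of these on a toy quadratic stand-in for the error functional (the functional, Hessian, and step sizes are illustrative assumptions): plain steepest descent, and a gradient preconditioned by the inverse Hessian, which is the idea underlying quasi-Newton methods such as BFGS. For a quadratic, the preconditioned step converges in a single iteration.

```python
import numpy as np

# Toy quadratic surrogate J(x) = 0.5 x^T A x - b^T x with gradient A x - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])   # Hessian of J
b = np.array([1.0, -2.0])
dJ = lambda x: A @ x - b

# (i) steepest descent: step along -grad J with a fixed step size
x_sd = np.zeros(2)
for _ in range(200):
    x_sd = x_sd - 0.2 * dJ(x_sd)

# (ii) gradient preconditioned by the inverse Hessian: for a quadratic
# this is Newton's method and reaches the minimiser in one step
H_inv = np.linalg.inv(A)
x_pre = np.zeros(2) - H_inv @ dJ(np.zeros(2))

x_star = np.linalg.solve(A, b)           # exact minimiser for reference
```

BFGS builds up an approximation to H_inv from successive gradients rather than inverting the Hessian directly, but the preconditioning role is the same.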
Optionally, the direction of steepest change of the error functional is derived by reconstructing an inverse Hessian matrix, {tilde over (H)}, for each unknown parameter in the set {x}, and wherein the direction of steepest change of the error functional is preconditioned by the reconstructed inverse Hessian matrix.
The direction of steepest change may be thought of as the steepest descent direction, which is given by the generalised gradients of the error functional with respect to the unknown parameters. The search direction for each of the unknown parameters in the set {x} may be derived by reconstructing an approximated Hessian matrix.
Optionally the perturbation direction, s, for each individual unknown parameter in the set {x} is given by −{tilde over (H)}x{circumflex over (D)}x, where {circumflex over (D)}x is the steepest ascent direction of the error functional for perturbations in parameter x, and {tilde over (H)}x is the reconstructed inverse Hessian of each unknown parameter in the set {x}, for example.
The reconstructed inverse Hessian of each unknown parameter in the set {x} may be used to provide an approximated covariance matrix for the unknown parameters thereby to estimate the uncertainties in the model parameters. The estimation of uncertainties is important for validating the output result. For example, the uncertainty can be used to assess an appropriate level of trust in the data. Thus, not only is a user provided with a reconstructed velocity field which can be used as the basis for making decisions (whether and how to operate on a patient where the data is medical, how to best optimise design of a system for e.g. mechanical systems, etc.), but also with an indication of how likely their decision is to be based on correct information. Where the errors are too large, the user may be faced with too great an uncertainty in the output data and may elect instead to repeat the data acquisition and/or the processing (e.g. with different weightings and model parameters) to try to improve the situation.
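The connection between the inverse Hessian and the uncertainty estimate can be sketched as follows: for a locally quadratic (Gaussian) posterior, the covariance is the inverse of the Hessian of the error functional at its minimum, so the marginal standard deviations are the square roots of its diagonal. The Hessian below is an assumed example.

```python
import numpy as np

# Assumed Hessian of the error functional at its minimum for two
# parameters; its inverse approximates the posterior covariance.
H = np.array([[4.0, 1.0], [1.0, 2.0]])
cov = np.linalg.inv(H)        # approximated posterior covariance
std = np.sqrt(np.diag(cov))   # per-parameter uncertainty estimates
```

The off-diagonal entries of the covariance additionally indicate how errors in one inferred parameter correlate with errors in another.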
Optionally the reconstructed velocity, u, is updated by solving an Oseen problem for the improved parameters:
in which k is an index to distinguish input values at k and improved values at k+1, p is the reduced pressure, p0/ρ, ρ is a density of the fluid, p0 is the pressure, n is a unit normal vector on ∂Ω, and ∂n≡n·∇ is a normal derivative on ∂Ω.
The updated parameters set, {x}k+1, does not contain the updated velocity field uk+1. The updated model parameters {x}k+1 are used to compute the updated velocity field uk+1. Since the Navier-Stokes problem is nonlinear, the Oseen problem, which is a linearization of the Navier-Stokes problem around uk, is considered instead. To make an analogy: if the reconstructed velocity field u is a parametric curve, the set {x} contains its parameters. Computation of u(xk+1) is necessary to find the new uk+1.
Optionally steps (c) to (f) are repeated one or more times whereby the improved values of the unknown parameters from a preceding iteration are used in place of the initial values in each subsequent execution of steps (c) to (f); and wherein step (g) uses the improved values from the most recent iteration to calculate the reconstructed u.
By iterating the procedure, a gradual improvement in the output is seen. Over time, the process is expected to converge to a particular output, at which point the procedure fails to continue to improve the outputs (i.e. fails to continue reducing the magnitude of the error functional).
Optionally, each unknown parameter is adjusted by a respective first step size in its respective direction during the first iteration of the method and by a subsequent step size during each subsequent iteration of the method.
As noted above, the direction in which each parameter is perturbed may be determined using various methods. To consider two of these: (i) simple gradient descent; and (ii) reconstructing an inverse Hessian matrix and using preconditioned gradient descent (the BFGS algorithm). The second option allows the estimation of uncertainties, while the first does not.
If the user selects simple gradient descent, the strategy for the step size may be chosen as follows: start with a step size t and, if the line search is unsuccessful, set tk+1=0.5tk and repeat the line search. If the line search is successful, set tk+1=1.5tk.
If the user selects BFGS, start every line-search with t=1 (i.e. each iteration's line search starts with t=1). If the line-search fails, set tk+1=0.5tk and repeat the line search.
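The step-size strategy described above can be sketched as follows (Python; the function names and the quadratic test function are illustrative, and the 0.5/1.5 factors are those given for the gradient-descent variant):

```python
def line_search(f, x, direction, t, shrink=0.5, grow=1.5, max_halvings=30):
    """Backtracking line search with the step-size adaptation described
    above: halve t on failure, grow it by 1.5x on success. All names
    here are illustrative, not taken from the source."""
    f0 = f(x)
    for _ in range(max_halvings):
        if f(x + t * direction) < f0:      # successful step: accept, grow t
            return x + t * direction, t * grow
        t *= shrink                        # failed step: halve t and retry
    return x, t                            # give up; keep current point

# Usage on a 1D quadratic, stepping downhill from x=4 toward the minimum at 0.
f = lambda x: x * x
x, t = 4.0, 1.0
for _ in range(50):
    x, t = line_search(f, x, direction=-1.0 if x > 0 else 1.0, t=t)
print(x)
```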
Optionally, each subsequent step size is chosen to increase the rate at which the error functional is reduced. For example, where the rate of reduction is too slow, the user may elect to (temporarily) increase the step size. In this way an adaptive step size can be selected to cause the error functional to reduce as quickly as possible.
In other cases, the step size may be constrained always to reduce from the step size of the previous iteration, for example by a constant fraction each time (e.g. always ½, ⅓, ⅔ etc.).
Optionally, steps (c) to (f) of the method are repeated until a stop condition is met. Optionally the stop condition is that the error functional does not reduce further, or that a covariance-weighted norm for the perturbations in the unknown parameters is below a user-specified threshold.
Optionally the method includes outputting values of p, the pressure field, and the improved values for the unknown parameters in the set {x} as part of step (g).
Optionally, the reconstruction uses an immersed boundary finite element method. This is a suitable framework for the methods described herein because it provides a natural environment in which to apply them, and one which is well-studied and mathematically rigorous.
Optionally, a magnitude value of Magnetic Resonance Imaging (MRI) data is used as an input to provide an initial value for the shape of the boundary, ∂Ω. MRI is a useful technique for imaging of a wide range of materials such as those from which a boundary may be formed and fluids which may be flowing within the boundary. MRI is non-invasive and is therefore particularly suitable for biological applications, including in vivo measurements. By directly measuring the shape of the boundary, good input data can be provided to improve the certainty and speed of the convergence process. As noted elsewhere in this document MRV data requires at least four measurements per dimension of the region of interest. By contrast, MRI data on the equivalent region can be derived by a single raster scan of the pixels (or voxels) in the region of interest.
Optionally, the magnitude value is extracted from the magnetic resonance velocimetry data itself. This allows the location of the boundary to be extracted from the exact same data set as that from which the flow data is extracted, thereby improving the correspondence between the initial value and the "true" location of the boundary. In cases where the same input MRI data are used for the velocimetry (e.g., phase contrast MRI) as for the initial boundary location (the magnitude of the MRI signal(s)), the convergence rate of the methods set out herein is improved since a very reliable prior for the boundary shape/location is provided. In some cases, since the MRV dataset includes multiple measurements to extract the flow information, the magnitude data may be used to extract the boundary location from more than one or even from all measurements forming part of the MRV dataset to improve confidence in the input shape of the boundary. Optionally suitable averaging or other ways of combining the information may be used to improve the accuracy of the boundary location when multiple data sets are used to derive the boundary location. This results in a higher signal-to-noise ratio for boundary locations extracted in this manner. In yet further examples the magnitude value of the magnetic resonance data is extracted from a separate magnetic resonance imaging dataset.
The methods set out above may further comprise a step of providing a compressed form of the modelled flow field, the compressed form being represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists. Optionally, this compression step may further comprise discarding the modelled flow field and retaining only the compressed form. Once the compressed form is provided, a further decompression step may optionally further include decompressing the compressed form by: using ∂Ω, v, gi and go as inputs to a Navier-Stokes problem; and calculating the modelled flow field by solving the Navier Stokes problem to derive a flow field consistent with the inputs and the Navier-Stokes equations. The decompression may be performed in a remote location and/or at a later time relative to the compression step, for example. The benefits of the compression and decompression aspects are set out in more detail below.
Also disclosed herein is a computer implemented method for compressing flow data of a fluid, the method comprising the steps of: receiving raw data encoding information in an imaging region I relating to a fluid velocity field, u*, within a subset of the imaging region I forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; extracting from the data a modelled flow field over the region Ω, a shape of the boundary ∂Ω, enclosing the region Ω, a boundary condition, gi, at an inlet where an inlet exists, a boundary condition, go, at an outlet where an outlet exists and a kinematic viscosity, v, of the fluid; and providing a compressed form of the modelled flow field, the compressed form being represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists.
As will be clear, any flow field can be compressed by extracting the set of parameters {x}≡{∂Ω; gi; go; v} from the flow data. This represents a large reduction in the space needed for storage of the flow field, in effect trading off reduced storage space (or transmission bandwidth) for increased processor usage in retrieving the full flow field because instead of simply loading the flow field data, the system must recreate it by solving a Navier-Stokes problem, using the compressed form as inputs. It can be seen in this way that the compression and decompression methods (and the corresponding compression and decompression hardware and software) operate as a plurality of interrelated products in the sense that they are intended to interoperate and to be used in conjunction with one another.
That the dataset of the compressed data is smaller than the full flow field can be seen from an analysis of the dimensionality of the system. A full flow field over the region Ω requires storing flow vectors (at the desired resolution/accuracy) for all points across the region. Note that where a full 3D analysis is performed, Ω is a 3D volume, while where the analysis is 2D, Ω is an area. By contrast the boundary ∂Ω of a 3D Ω is a 2D surface (albeit usually a curved one requiring 3 dimensions to represent) while the boundary ∂Ω of a 2D Ω is a 1D line (albeit usually a curved one requiring 2 dimensions to represent). It is clear that the boundary conditions also follow this pattern. In other words, modelling the flow field over a region having dimensionality d leads to a compressed form of the modelled flow field which has a dimensionality of d−1. Reducing the dimensionality vastly reduces the storage requirements of a d-dimensional flow field from O(nd) to O(nd-1), where the kinematic viscosity is assumed to be constant over the region and so has an effective dimensionality of 0.
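A back-of-the-envelope sketch of this O(nd) versus O(nd-1) scaling, with purely illustrative grid sizes:

```python
# Illustrative storage comparison (hypothetical grid sizes): a full flow
# field stores d velocity components at n**d grid points, whereas the
# compressed form stores boundary data at O(n**(d-1)) points plus a
# handful of scalars (e.g. the kinematic viscosity).
def full_field_size(n, d):
    return d * n**d                   # one d-vector per grid point

def compressed_size(n, d, n_scalars=1):
    return n**(d - 1) + n_scalars     # boundary surface + viscosity

n, d = 128, 3
ratio = full_field_size(n, d) / compressed_size(n, d)
print(ratio)                          # compression ratio, roughly d * n
```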
Note that usually the kinematic viscosity, v, is not a function of space: it is either a constant or a tensor (matrix) that depends on the velocity and certain physical constants. In certain cases the viscosity may be more complex than this; for example, in turbulence modelling, the effective viscosity may vary in space. However, this variation in space would be controlled by a turbulence model which once more contains just a few physical parameters. Therefore, even though turbulent flows are not the focus of the present work, for under-resolved turbulent flow the compression/decompression methods set out herein would apply in the same general manner but with an additional turbulence model. This slightly increases the complexity of the v term in the compressed flow, but still results in a large reduction in the storage and/or bandwidth requirements for storage and/or transmission respectively of the compressed flow field.
In this way, the richness of the flow field can be captured in a compressed form requiring far less storage or transmission capacity. In general the compression and decompression aspects set out herein can use the data provided by the MRV methods set out above, although this is not necessary. However, when the MRV methods set out herein are used, a synergistic benefit is seen. Since the methods for extracting flow field data set out herein lead to a set of parameters {x} in which a high degree of confidence can be placed, the compression of this data is highly effective, since the high-accuracy parameters derived in this way capture a large amount of information related to the flow field. Many of the same concepts can be used in decompressing the data later, as will be seen.
Once the data are compressed in this way, the modelled flow field may be discarded.
The method may be repeated on a series of received raw data sets, each raw data set representing the same system at different times. A full time evolution of a system represents a very large dataset and in some cases the data handling requirements of such a system may be prohibitive. By compressing each time step of the data much more can be captured of the time evolution of the system.
As noted above the raw data may be derived from a Magnetic Resonance Velocimetry measurement. In particular extracting the modelled flow field may include: receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v, and optionally an inlet boundary condition for the fluid, gi where an inlet portion exists; and/or an outlet boundary condition for the fluid, go where an outlet portion exists. Receiving initial values of the unknown parameters {x} may further include performing a separate measurement of one or more of the parameters. An initial value of the shape of the boundary may be provided by a magnetic resonance imaging measurement. As noted above, this can improve the accuracy of the process. The method may further include calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary. In some cases, the method includes constructing an error functional having the form:
in which a first penalty term penalises deviations in uo from the received MRV data; a second penalty term penalises outcomes in uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; and a third penalty term penalises deviations in uo from the Navier-Stokes problem. The method then includes: determining a generalised gradient of the error functional with respect to each of the unknown parameters in the set {x}; for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial values of all of the parameters, to determine a direction in which perturbing that parameter reduces the magnitude of the error functional, and thereby determining improved values for each of the unknown parameters in the set {x}; and reconstructing the MRV data by outputting the improved values for the unknown parameters and calculating the reconstructed velocity, u, across at least the region Ω.
In some cases (e.g. 2D imaging of a tube viewed with its axis into/out of the imaging plane, modelling of a complete 3D system), there may not be an inlet or an outlet, and it is in this sense that the inclusion of inlet and/or outlet boundary conditions is optional.
The above procedure uses physics considerations (via the Navier-Stokes equations) and statistical likelihood to reduce the space of suitable reconstructions. In general the direction of change of the unknown parameters which causes a reduction in the error functional is calculated independently for each parameter, and the collective result of changing all the parameters in their respective directions is assessed by considering whether the collective result also reduces the magnitude of the error functional. Where a full global inverse Hessian reconstruction is possible to calculate, the directions of change of each parameter could be chosen to guarantee that the collective effect of all changes in parameters is to reduce the magnitude of the error functional.
It is an interesting and unexpected effect that starting with less information (that is by also treating the boundary location as unknown and allowing this to be optimised along with the other unknown parameters) allows for improved image reconstruction. This is important because MRV data is time consuming to obtain and obtaining low noise data is yet more time consuming as many scans must be made and averaged. By providing an efficient and reliable reconstruction algorithm, the overall time to acquire low-noise MRV data can be reduced substantially. This allows high quality values for the parameters {x} to be derived, which in turn means that the compressed form of the modelled flow field retains a high level of accuracy.
The method steps may be repeated one or more times whereby the improved values of the unknown parameters from a preceding iteration are used in place of the initial values in each subsequent execution; and wherein the reconstructing step uses the improved values from the most recent iteration to calculate the reconstructed u. By iterating the procedure, a gradual improvement in the output is seen. Over time, the process is expected to converge to a particular output, at which point the procedure fails to continue to improve the outputs (i.e. fails to continue reducing the magnitude of the error functional).
Each unknown parameter may be adjusted by a respective first step size in its respective direction during the first iteration of the method and by a subsequent step size during each subsequent iteration of the method. As noted above, the direction in which each parameter is perturbed may be determined using various methods. To consider two of these: (i) simple gradient descent; and (ii) reconstructing an inverse Hessian matrix and using preconditioned gradient descent (the BFGS algorithm). The second option allows the estimation of uncertainties, while the first does not. If the user selects simple gradient descent, the strategy for the step size may be chosen as follows: start with a step size t and, if the line search is unsuccessful, set tk+1=0.5tk and repeat the line search. If the line search is successful, set tk+1=1.5tk. If the user selects BFGS, start every line search with t=1 (i.e. each iteration's line search starts with t=1). If the line search fails, set tk+1=0.5tk and repeat the line search.
Each subsequent step size may be chosen to increase the rate at which the error functional is reduced. For example, where the rate of reduction is too slow, the user may elect to (temporarily) increase the step size. In this way an adaptive step size can be selected to cause the error functional to reduce as quickly as possible.
In other cases, the step size may be constrained always to reduce from the step size of the previous iteration, for example by a constant fraction each time (e.g. always ½, ⅓, ⅔ etc.).
Optionally, the method steps are repeated until a stop condition is met, optionally wherein the stop condition is that the error functional does not reduce further, or that a covariance-weighted norm for the perturbations in the unknown parameters is below a user-specified threshold.
Optionally, the method further comprises storing or transmitting to a remote location the compressed form of the modelled flow field.
A corresponding computer implemented method of decompressing a compressed modelled flow field of measured flow data of a fluid is also disclosed. The compressed flow data includes a shape of a boundary, ∂Ω, of a region, Ω, in which the flow field is defined, a kinematic viscosity, v, of the fluid, and an inlet boundary condition, gi, and/or an outlet boundary condition, go, where an inlet and/or outlet exists, and the decompression method comprising the steps of: using ∂Ω, v, gi and go as inputs to a Navier-Stokes problem; and calculating the modelled flow field by solving the Navier-Stokes problem to derive a flow field consistent with the inputs and the Navier-Stokes equations. Note that this allows the storage of a full flow field in its compressed format, by storing the set of parameters. In particular, the compressed modelled flow field may be derived from the compression methods set out above. This is particularly advantageous as there are few, if any, other known methods of providing high quality information on the unknown parameters {x}. Consequently, the disclosure of the present document as a whole represents a complete solution which is simply not possible using methods for analysing flow data other than those set out herein. Moreover, since no existing methods exist which allow the data to be compressed in the manner set out above, the decompression step inherently and independently makes an inventive contribution to the art, because decompression of compressed data derived from real world measurements is simply not possible with a reasonable degree of accuracy unless high quality values for the parameters {x} are available.
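The decompression step can be illustrated for the special case in which the Navier-Stokes problem admits a closed-form solution: steady, fully developed pipe flow (Hagen-Poiseuille). Here the "compressed form" reduces to a radius (the boundary shape), the kinematic viscosity, and a pressure-gradient condition; all names and values below are hypothetical stand-ins for a full solver:

```python
import numpy as np

def decompress_poiseuille(radius, nu, dpdx_over_rho, n_r=64):
    """Reconstruct an axisymmetric velocity profile from compressed
    parameters, for the special case where the Navier-Stokes problem
    reduces to Hagen-Poiseuille flow: u(r) = -(G/(4*nu)) * (R**2 - r**2),
    with G = (1/rho) dp/dx. A toy stand-in for a full solver."""
    r = np.linspace(0.0, radius, n_r)
    u = -(dpdx_over_rho / (4.0 * nu)) * (radius**2 - r**2)
    return r, u

r, u = decompress_poiseuille(radius=1.0, nu=0.1, dpdx_over_rho=-0.4)
print(u[0])   # centreline velocity; u vanishes at the wall r = radius
```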
As will be apparent, the compression method set out above may be followed by at least one of a transmission or a storage step of the compressed form of the flow field, followed by the decompression method set out above, applied to the compressed form of the flow field at a later time and/or a different location.
Also disclosed herein is an apparatus for reconstruction of magnetic resonance velocimetry, MRV, data, the apparatus comprising: a processor configured to perform any of the method steps set out above.
The apparatus may further comprise a magnetic resonance imaging system, arranged to provide the MRV data.
Also disclosed herein is a non-transient computer readable medium comprising instructions which cause a computer to enact any of the method steps set out above.
The systems and methods described above may be applied to any MRV data, for example to medical MRV data, but also to non-medical MRV data (e.g. petrochemical, monitoring of flows in pipelines and other fluid delivery infrastructure). That is to say that without loss of generality, the invention may be segregated into medical uses and non-medical uses.
The invention will now be described with reference to the Figures, by way of illustration only, in which:
A generalized inverse Navier-Stokes problem for the joint reconstruction and segmentation of noisy velocity images of incompressible flow is described below. To regularize the inverse problem, a Bayesian framework is adopted by assuming Gaussian prior distributions for the unknown model parameters. Although the inverse problem is formulated using variational methods, every iteration of the nonlinear problem is in fact equivalent to a Gaussian process in Hilbert spaces. The boundaries of the flow domain are implicitly defined in terms of signed distance functions, and Nitsche's method is used to weakly enforce the Dirichlet boundary condition on the moving front. The motion of the boundaries is governed by a convection-diffusion equation for the signed distance function, which allows control of the regularity of the boundary by tuning an artificial diffusion coefficient. The steepest ascent directions of the model parameters are used in conjunction with a quasi-Newton method (BFGS), and it is shown how the posterior Gaussian distribution of a model parameter can be estimated from the reconstructed inverse Hessian.
An algorithm for running on a computer is devised that solves this inverse Navier-Stokes problem and is tested for noisy (SNR=3) 2D synthetic images of Navier-Stokes flows. The algorithm successfully reconstructs the velocity images, infers the most likely boundaries of the flow and estimates their posterior uncertainty. A magnetic resonance velocimetry (MRV) experiment is then designed to obtain images of a 3D axisymmetric Navier-Stokes flow in a converging nozzle.
MRV images are acquired, having poor quality (SNR≅6), intended for reconstruction/segmentation, and images of higher quality (SNR>30) that serve as the ground truth. It is shown that the algorithm performs very well in reconstructing and segmenting the poor MRV images, which were obtained in just 10.2 minutes, and that the reconstruction compares well to the high SNR images, which required a total acquisition time of ≅4.6 hours.
The reconstructed images and the segmented (smoothed) domain are used to estimate the posterior distribution of the wall shear rate and compare it with the ground truth. Since the wall shear rate depends on both the shape and the velocity field, it is seen that the algorithm provides a consistent treatment to this problem by jointly reconstructing and segmenting the flow images, avoiding the design of an additional experiment (e.g. CT, MRA) for the measurement of the geometry, or the use of external (non-physics-informed) segmentation software.
Lastly, an extension of the method is presented which addresses cases where the received signals are sparse (i.e. incomplete in k-space).
The present method has several advantages over general image reconstruction and segmentation algorithms, which do not respect the underlying physics or the boundary conditions. It also provides additional knowledge of the flow physics (e.g. the pressure field), which is otherwise difficult to measure. The method naturally extends to 3D, periodic Navier-Stokes problems in complicated geometries to substantially decrease signal acquisition times and provide additional knowledge of the physical system being imaged.
The generalized inverse Navier-Stokes problem is formulated in the following manner, including the derivation of an algorithm for its solution. In what follows, L2(Ω) denotes the space of square-integrable functions in Ω, with inner product ⟨⋅,⋅⟩ and norm ∥⋅∥L2. The covariance-weighted inner product ⟨⋅,⋅⟩C:=⟨⋅,C−1⋅⟩ generates the norm ∥⋅∥C. The Euclidean norm in the space of real numbers, ℝn, is denoted by |⋅|.
In general, an n-dimensional velocimetry experiment provides noisy flow velocity images on a domain I⊂ℝn (sometimes called the imaging region), depicting the measured flow velocity u* inside an object Ω⊂I with boundary ∂Ω=Γ∪Γi∪Γo, that is, a boundary, Γ, delimiting regions of I which do and do not contain the fluid, and inlet and outlet boundaries, Γi and Γo respectively, located at the edges of I. These features can be seen, for example, in the Figures.
An appropriate model for this procedure is the Navier-Stokes problem:
where u is the velocity, p is the reduced pressure (the raw pressure p0 divided by the fluid density, ρ), v is the kinematic viscosity, n is a unit normal vector on ∂Ω, and ∂n≡n·∇ is a normal derivative on ∂Ω. Boundary conditions are needed to describe behaviour on parts of the boundary which lie at the edge of the imaging space (and so do not necessarily restrict the fluid flow, but allow flow in and out): gi is the Dirichlet boundary condition on the inlet Γi, and go is the boundary condition on the outlet Γo.
The data space is denoted by D and the model space by M, and it is assumed that both spaces are subspaces of L2. In the 2D case, u*=(ux*, uy*), and the covariance operator is introduced as:
where σu* denotes the noise level (standard deviation) of the measured velocity, and S: M→D is the L2-projection from the model space M to the data space D.
The goal in this invention is to infer the unknown parameters of the Navier-Stokes problem (2.1) such that the model velocity u approximates the noisy measured velocity u* in the covariance-weighted L2-metric defined by ∥⋅∥C. In the general case, the unknown model parameters of (2.1) are the shape of Ω (i.e. exactly where the boundary constraining the fluid is located), the kinematic viscosity v, and the boundary conditions gi, go.
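A minimal numerical sketch of this covariance-weighted mismatch, assuming independent Gaussian noise of equal variance per measurement (so that C is diagonal) and taking S to be a simple subsampling projection (all sizes and values are illustrative):

```python
import numpy as np

# Covariance-weighted data mismatch ||S u - u*||_C^2 with a diagonal
# covariance (independent noise of variance sigma**2 per pixel); S is
# taken as a simple subsampling projection here. All sizes hypothetical.
rng = np.random.default_rng(1)
sigma = 0.3
u_model = np.sin(np.linspace(0, np.pi, 200))       # model velocity, fine grid
S = np.zeros((50, 200))
S[np.arange(50), np.arange(0, 200, 4)] = 1.0       # L2-projection: subsample
u_star = S @ u_model + sigma * rng.normal(size=50) # noisy "measurement"

misfit = np.sum((S @ u_model - u_star) ** 2) / sigma**2
print(misfit)   # roughly chi-squared distributed with 50 degrees of freedom
```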
This inverse Navier-Stokes problem leads to the nonlinearly constrained optimization problem:
where uo is the reconstructed velocity field, and x=(gi; go; v). Like most inverse problems, (2.4) is ill-posed and hard to solve. To alleviate the ill-posedness of the problem, the search for values of the unknowns (Ω; x) must be restricted to function spaces of sufficient regularity.
The regularization proceeds as follows. If x∈L2 is an unknown parameter, one way to regularize the inverse problem (2.4) is to search for minimizers of an augmented functional, formed by adding to the reconstruction error a regularization norm:
which encodes a given prior assumption with weights α,β>0. This simple idea can be quite effective because, by minimizing the augmented functional, x is forced to lie in a subspace of L2 having higher regularity, namely H1, and as close to the prior value as the weights dictate. The choice of those weights, however, is arbitrary.
There is a more intuitive approach that recovers the form of the regularization norm from a probabilistic viewpoint. In the setting of the Hilbert space L2, a Gaussian measure γ∼N(m,C) has the property that its finite-dimensional projections are multivariate Gaussian distributions, and it is uniquely defined by its mean m∈L2 and its covariance operator C:L2→L2. This can be seen by noting that the mean of a Gaussian measure in L2 is given by:
and the covariance operator and the covariance function C:L2×L2→ℝ are defined by:
noting that ⟨Cx, x′⟩=C(x, x′). The above (Bochner) integrals define integration over the function space L2 under the measure γ, and are well defined due to Fernique's theorem. These integrals can be directly computed by sampling the Gaussian measure with a Karhunen-Loève expansion (as set out below in the discussion of estimation of uncertainty).
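A sketch of such Karhunen-Loève sampling on a discretized interval, using an exponential covariance (the grid, length scale and variance are illustrative choices):

```python
import numpy as np

# Karhunen-Loeve sampling of a Gaussian measure N(0, C) on a grid:
# x = sum_k sqrt(lambda_k) * xi_k * e_k, with (lambda_k, e_k) the
# eigenpairs of the covariance matrix and xi_k standard normals.
rng = np.random.default_rng(42)
s = np.linspace(0.0, 1.0, 100)
ell, var = 0.2, 1.0
C = var * np.exp(-np.abs(s[:, None] - s[None, :]) / ell)  # exponential covariance

lam, E = np.linalg.eigh(C)
lam = np.clip(lam, 0.0, None)            # guard tiny negative eigenvalues

def sample_kl(n_samples):
    xi = rng.normal(size=(len(lam), n_samples))
    return E @ (np.sqrt(lam)[:, None] * xi)

X = sample_kl(5000)                      # each column is one realization
emp_var = X.var(axis=1).mean()
print(emp_var)                           # should be close to var = 1.0
```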
It can be shown that there is a natural Hilbert space Hγ that corresponds to γ, and that Hγ=√C(L2). In other words, if x is a random function distributed according to γ, any realization of x lies in Hγ, which is the image of √C. Furthermore, the corresponding inner product ⟨x, x′⟩C is the covariance between x and x′, and the norm ∥x∥C2=⟨x, x⟩C is the variance of x. Therefore, if x is an unknown parameter for which a priori statistical information is available, and if the Gaussian assumption can be justified, the following form can be chosen:
In this way, the augmented functional increases as the variance of x increases. Consequently, minimizing it penalizes improbable realizations of the reconstruction.
As mentioned previously, the unknown model parameters of the Navier-Stokes problem (2.1) are the kinematic viscosity v, the boundary conditions gi, go, and the shape of Ω. Since the kinematic viscosity v is considered to be constant, the regularizing norm is simply:
where v̄ is a prior guess for v, and σv2 is the variance of v. For the Dirichlet boundary condition, gi∈L2(Γi), it is appropriate to choose the exponential covariance function:
with variance σg and characteristic length ℓ. For zero-Dirichlet (no-slip) or zero-Neumann boundary conditions on ∂Γi, equation (2.9) leads to the norm:
Using integration by parts the covariance operator is found to be:
where {tilde over (Δ)} is the L2-extension of the Laplacian Δ that incorporates the boundary condition gi=0 on ∂Γi.
For the natural boundary condition, go∈L2(Γo), the same covariance operator can be used, but with {tilde over (Δ)} equipped with zero-Neumann boundary conditions, i.e. ∂ngo=0 on ∂Γo. Lastly, for the shape of Ω, which is implicitly represented with a signed distance function ϕ± (defined in more detail below), the following norm is an appropriate choice:
where σϕ and ℓϕ are the corresponding variance and characteristic length.
An additional constraint to be applied to the system is that of matching physical reality. This is achieved by forming an inverse Navier-Stokes problem. This sub-process begins by testing the Navier-Stokes problem (2.1) with functions (v,q)∈H1(Ω)×L2(Ω), and after integrating by parts, the weak form is obtained:
where the additional term is the Nitsche penalty term:
which weakly imposes the Dirichlet boundary condition z∈L2(T) on a boundary T, given a penalization constant η. The augmented reconstruction error functional is then defined as:
which contains the regularization terms and the model constraint, such that u weakly satisfies (2.1). To reconstruct the measured velocity field u* and find the unknowns (Ω, x), this functional should be minimized. This can be achieved by solving its associated Euler-Lagrange system as set out in the following.
In order to derive the Euler-Lagrange equations for the error functional, consider the initial definition:
to be the space of admissible velocity perturbations u′, and a subspace of L2(Ω) to be the space of admissible pressure perturbations p′, such that −∂nu′+p′n is defined on Γ. Taking the first variations (of a function with respect to a variable y) of the terms in the error functional, it is seen that:
where † denotes the Hermitian adjoint of an operator, and Du represents a generalised derivative operator (with respect to u). Adding together the first variations of the model constraint with respect to (u, p), the following is derived:
and after integrating by parts:
The integration by parts formulae for the nonlinear term in the above equation are:
Since the regularization term does not depend on (u, p), (2.18) and (2.19) can be used to assemble the optimality conditions of the error functional for (u, p):
For (2.20) to hold true for all admissible perturbations (u′, p′), it can be deduced that (v,q) must satisfy the following adjoint Navier-Stokes problem:
In this context, v is the adjoint velocity and q is the adjoint pressure, which both vanish when u≡u*. Note also that boundary conditions for the adjoint problem (2.21) are chosen to make the boundary terms of (2.19) vanish, and that these boundary conditions are subject to the choice of U′, which, in turn, depends on the boundary conditions of the (primal) Navier-Stokes problem.
To find the shape derivative of an integral defined in Ω, when the boundary ∂Ω deforms with a given speed field, Reynolds' transport theorem may be used. For the bulk integral of a function ƒ: Ω→ℝ, it is calculated that:
and for the boundary integral of ƒ:
where ƒ′ is the shape derivative of ƒ (due to the deformation), κ is the summed curvature of ∂Ω, and the speed field is written in its Hadamard parameterization, ζn, with ζ∈L2(∂Ω). In other words, the boundary deforms (i.e. is perturbed) in a direction perpendicular to the boundary at all points, with a speed ζ. Any boundary that is a subset of ∂I, i.e. the edge of the image (or imaging region) I, is non-deforming (is not perturbed) and therefore the second term of the above integrals vanishes. The only boundary that deforms is Γ⊂∂Ω. For brevity, a shorthand notation is used to denote the shape perturbation of an integral I.
Using (2.22) on the error functional:
where Du is given by (2.18). Using (2.22) and (2.23), the shape derivatives problem for (u′, p′) is obtained:
which can be used directly to compute the velocity and pressure perturbations for a given speed field. It is observed that (u′, p′)≡0 when the boundary perturbation vanishes (ζ≡0). Testing the shape derivatives problem (2.25) with (v, q), and adding the appropriate Nitsche terms for the weakly enforced Dirichlet boundary conditions, the following expression is obtained:
If Ii represents the four integrals in (2.19), integrating by parts (2.26) yields:
and, due to the adjoint problem (2.21):
since u′|Γ is:
which, due to (2.28) and the vanishing of the remaining boundary terms, takes the form:
where Dζ is the shape gradient. Note that the shape gradient depends on the normal gradient of the (primal) velocity field and the traction, (−v∇v+qI), that the adjoint flow exerts on Γ.
The unknown model parameters {x} (that is, the set of parameters {x}) have an explicit effect on the error functional, and their steepest descent directions can therefore be obtained by taking its first variations. For the Dirichlet-type boundary condition at the inlet this gives:
where {circumflex over (D)}g is the steepest descent direction that corresponds to the covariance-weighted norm. For the natural boundary condition at the outlet:
Lastly, since the kinematic viscosity is considered to be constant within Ω, its generalized gradient is:
For a given step size τ>0, the steepest descent directions (2.31) to (2.33) can be used either to update an unknown parameter x directly through:
with sk={circumflex over (D)}x, or to update the parameters by reconstructing an approximation {tilde over (H)} of the inverse Hessian matrix, in the context of a quasi-Newton method, and thereby to compute sk=−{tilde over (H)}{circumflex over (D)}x
. This latter approach is set out in more detail below.
To deform the boundary ∂Ω using the simple update formula (2.34) it is convenient to use a parametric surface representation. Several methods are possible, but here a choice is made to implicitly represent ∂Ω using signed distance functions ϕ±. The object Ω and its boundary ∂Ω are then identified with a particular function ϕ± so that:
A signed distance function ϕ± for Ω can be obtained by solving the Eikonal equation:
One way to solve this problem is with level-set methods (see for example: “Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations” Journal of Computational Physics 79 (1), 12-49; “A fast marching level set method for monotonically advancing fronts” Proceedings of the National Academy of Sciences of the United States of America 93 (4), 1591-1595; “A level set method for inverse problems” Inverse Problems 17 (5), 1327-1355; “A framework for the construction of level set methods for shape optimization and reconstruction” Interfaces and Free Boundaries 5 (3), 301-329; “A Survey in Mathematics for Industry: A survey on level set methods for inverse problems and optimal design” European Journal of Applied Mathematics 16 (2), 263-301; “Combined state and parameter estimation in level-set methods” Journal of Computational Physics 399, 108950, arXiv: 1903.00321).
There is, however, a different approach, which relies on the heat equation (developed in “On the behavior of the fundamental solution of the heat equation with variable coefficients” Communications on Pure and Applied Mathematics 20 (2), 431-455; “Diffusion processes in a small time interval” Communications on Pure and Applied Mathematics 20 (4), 659-685; and “The heat method for distance computation” Communications of the ACM 60 (11), 90-99). The main result to be drawn from these, in order to justify the use of the heat equation for the approximation of ϕ±, states that:
where d(x, ∂Ω) is the Euclidean distance between any point x∈I and ∂Ω, and u is the solution of the following problem describing heat propagation away from ∂Ω:
This result has been used to implement a smoothed distance function (see e.g. “The heat method for distance computation” Communications of the ACM 60 (11), 90-99), where it was referred to as the “heat method”. Here, this method is slightly adapted to compute signed distance functions ϕ± in truncated domains. To compute ϕ±, therefore, equation (2.37) is solved for τ<<1, and then ϕ± obtained by solving:
with X being the normalized heat flux and ψ being a signed function such that ψ(x) is negative for points x in Ω and positive for points x outside Ω, i.e. to ensure that the extended normal vector points away from the boundary. This intermediate step (the solution of two Poisson problems (2.37)-(2.38) instead of one) is taken to ensure that |∇ϕ±(x)|=1, as required by equation (2.35). To deform the boundary ∂Ω, ϕ± is transported under the speed field ≡ζn. The convection-diffusion problem for ϕ±(x, t) then reads:
where ϕ±0 denotes the signed distance function of the current domain Ω, ϵϕ is a diffusion parameter, and the extended speed field, defined on the whole imaging region I, is an extension of the speed field defined on ∂Ω. That is, the extension carries the speed field from its usual domain (the boundary alone) to the entire imaging region.
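The heat-method construction of a signed distance function can be sketched on a uniform pixel grid with SciPy's sparse solvers. Everything below is an illustrative assumption: the boundary is a circle, the Laplacian is a simple five-point stencil with Dirichlet image edges, and the signing function ψ is known analytically, so this is a toy version of the truncated-domain computation described above, not the patent's implementation.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

n = 64
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y)

# 2D Laplacian (5-point stencil, homogeneous Dirichlet at the image edge)
lap1 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
lap = kron(lap1, identity(n)) + kron(identity(n), lap1)

# step 1: short-time heat diffusion away from the boundary ring r = 0.5
on_boundary = np.abs(r - 0.5) < h
u = spsolve((identity(n * n) - 4 * h**2 * lap).tocsc(),
            on_boundary.astype(float).ravel()).reshape(n, n)

# step 2: normalised heat flux, pointing away from the boundary
gx, gy = np.gradient(u, h)
norm = np.maximum(np.hypot(gx, gy), 1e-12)
gx, gy = -gx / norm, -gy / norm

# step 3: Poisson problem recovers an (unsigned) distance-like field
div = np.gradient(gx, h, axis=0) + np.gradient(gy, h, axis=1)
phi = spsolve(lap.tocsc(), div.ravel()).reshape(n, n)
phi -= phi[on_boundary].mean()          # pin phi to ~0 on the boundary

# sign the field: psi < 0 inside Omega, > 0 outside
psi = np.where(r < 0.5, -1.0, 1.0)
phi_signed = psi * np.abs(phi)
```

On finer grids the same three steps approach the true distance; in the setting of the text, the analytic ψ would be replaced by the signed indicator described above, and the two solves correspond to the Poisson problems (2.37)-(2.38).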
When (2.39) is solved for ϕ±(x, τ), the implicit representation of the perturbed domain is obtained at time t=τ (the step size); to do so, however, it is necessary to extend the speed field to the whole of the imaging region I.
To perform this extension, the normal vector n and the scalar function ζ, which are both initially defined on ∂Ω, are extended across I. The normal vector extension is easily obtained by:
since |∇ϕ±|=1, and an outward-facing extension is given by:
This extended normal vector {dot over (n)}o is then used to extend ζ∈L2(∂Ω) to {dot over (ζ)}∈L2(I), using the convection-diffusion problem:
In other words, ζ is convected along the predefined {dot over (n)}o-streamlines and isotropic diffusion is added for regularization. Note that the streamlines may have complicated behaviour, and without diffusion there may be no unique solution to the problem. The choice of ϵϕ determines the amount of diffusion, and hence the degree of regularization. More precisely, the shape regularization depends only on the Reynolds numbers Reζ and Reϕ of the geometric flow. The above scaling approximation describes the dissipation rate of small-scale features, such as roughness, away from ∂Ω; this is therefore how Reζ and Reϕ are chosen.
The extended shape gradient (2.30), after taking into account the regularizing term for ϕ±, is therefore given by:
where {dot over (ζ)} is the extension of the shape gradient ζ=∂nu·(−v∂nv+qn), for x on Γ.
These principles are illustrated in the accompanying figures. The shape gradient ζ, which is initially defined on ∂Ω, is extended across the imaging region I as described above.
The inverse Navier-Stokes problem for the reconstruction and the segmentation of noisy flow velocity images u* can now be written as:
where the error functional is given by (2.16). The above optimization problem leads to an Euler-Lagrange system whose optimality conditions have been formulated above.
To precondition the steepest descent directions (2.31)-(2.33) and (2.44), the approximated inverse Hessian {tilde over (H)} of each unknown is reconstructed using the BFGS quasi-Newton method (see Fletcher, R: Practical Methods of Optimization—pub. John Wiley & Sons) with damping. Due to the large scale of the problem, it is most convenient to work with the matrix-vector product representation of {tilde over (H)}. Consequently, the search directions are given by:
and the unknown variables x are updated according to (2.34). The signed distance function ϕ± is perturbed according to (2.39), with the speed field given by −({tilde over (H)}{dot over (ζ)}{circumflex over (D)}{dot over (ζ)}){dot over (n)}.
Each line search is started with a global step size τ=1, and the step size is halved until the value of the error functional at (ϕ±,x)k+1 is less than its value at (ϕ±,x)k, where k is an indexing parameter indicating how many iterations of the procedure have been completed. To update the flow field uk to uk+1, the Oseen problem for the updated parameters is solved:
with the boundary conditions given by (2.1). The process terminates when certain predetermined conditions are met. For example, the process may terminate in some cases if the covariance-weighted norm of the perturbations of the model parameters is below a user-specified tolerance. In other cases, the procedure may terminate if the line search fails to reduce the error functional on a subsequent iteration.
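The outer iteration just described (descent step, step-halving line search, tolerance-based termination) has the following generic shape. The quadratic functional J below is a toy stand-in for the Navier-Stokes error functional, and the plain gradient stands in for the preconditioned direction.

```python
import numpy as np

def J(x):                        # toy stand-in for the error functional
    return float(np.sum((x - 1.0) ** 2))

def grad_J(x):
    return 2.0 * (x - 1.0)

x = np.zeros(3)
tol = 1e-8
for k in range(100):
    s = -grad_J(x)               # descent direction (could be -H~ D^x instead)
    tau = 1.0                    # each line search starts with tau = 1
    while J(x + tau * s) >= J(x):
        tau *= 0.5               # halve the step until J decreases
        if tau < 1e-12:
            break
    x_new = x + tau * s
    converged = np.linalg.norm(x_new - x) < tol   # user-specified tolerance
    x = x_new
    if converged:
        break
```

For this toy problem the very first halved step lands on the minimiser, after which the update norm falls below the tolerance and the loop exits.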
In algorithmic format, the procedure set out above for reconstruction and segmentation of noisy flow velocity images may be written:
Input: u*, initial guesses for the unknowns (Ω0; x0), regularization parameters controlling the form and weighting of the penalty terms.
Output: reconstruction (u◯, p◯) and inferred model parameters (Ω◯, x◯).
From the above formulation, the reconstructed inverse Hessian {tilde over (H)} can be used to provide estimates for the uncertainties of the model parameters. To simplify the description, let x denote an unknown parameter that is normally distributed with mean xk and covariance Ck. The linear approximation to the data u* is given by:
where u is obtained from x through the operator that encodes the linearized Navier-Stokes problem around the solution uk. To solve (2.45), x is updated as follows:
where † denotes the operator that encodes the adjoint Navier-Stokes problem, and C is the posterior covariance operator. It can be shown that:
where {tilde over (H)}x is the reconstructed inverse Hessian for x. Note that {tilde over (H)} by itself approximates (Cx†Cu−1+I)−1, and not C, because prior-preconditioned gradients (i.e. the steepest ascent directions {circumflex over (D)}(⋅)) are used in the BFGS formula, instead of the gradients D(⋅). Therefore, the posterior covariance is approximated by an operator {tilde over (C)}, and samples are drawn according to:
where (λ, ϕ)k is the eigenvalue/eigenvector pair of {tilde over (C)}. The variance of xk+1 can then be directly computed from the samples.
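The sampling step can be illustrated with a small dense example: given the eigenpairs (λ, ϕ)k of an approximate posterior covariance, perturbations are synthesised along the eigenvectors and the variance is read off the samples. The covariance matrix below is purely illustrative, not derived from the inverse problem.

```python
import numpy as np

rng = np.random.default_rng(0)
C_tilde = np.array([[2.0, 0.5],     # stand-in posterior covariance C~
                    [0.5, 1.0]])
lam, phi = np.linalg.eigh(C_tilde)  # eigenpairs (lambda, phi)_k

xk = np.array([1.0, -1.0])          # current parameter estimate
xi = rng.standard_normal((20000, len(lam)))

# x_{k+1} = x_k + sum_k sqrt(lambda_k) * xi_k * phi_k, for each sample row
samples = xk + (xi * np.sqrt(lam)) @ phi.T

var = samples.var(axis=0)           # empirical variance of x_{k+1}
```

The empirical variances recover the diagonal of the stand-in covariance, which is exactly the quantity the text proposes to compute from the samples.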
The above boundary value problems can be solved numerically. In this case an immersed-boundary finite element method is used. In particular, the fictitious domain cut-cell finite element method (FEM) (introduced in “La pénalisation fantôme” Comptes Rendus Mathematique 348 (21-22), 1217-1220 and “Fictitious domain finite element methods using cut elements: II. A stabilized Nitsche method” Applied Numerical Mathematics 62 (4), 328-341) is implemented for the Poisson problem, and extended (in “A Stabilized Nitsche Fictitious Domain Method for the Stokes Problem” Journal of Scientific Computing 61 (3), 604-628, arXiv: 1206.1933; and “A stabilized Nitsche cut finite element method for the Oseen problem” Computer Methods in Applied Mechanics and Engineering 328, 262-300, arXiv: 1611.02895) to the Stokes and the Oseen problems.
A tessellation of I is defined, produced by square cells (pixels) K having sides of length h. Also defined are the set of cut-cells, consisting of the cells that are cut by the boundary ∂Ω, and the set of cells that are found inside Ω and which remain intact (not cut) (see the lower image in the corresponding figure). It is assumed that ℓ∂Ω/h>>1, where ℓ∂Ω is the smallest length scale of ∂Ω (i.e. the grid size is such that ∂Ω can be accurately represented with negligible loss of detail).
The discretized space is generated by assigning a bilinear quadrilateral finite element Qf to every cell K. To compute the integrals, a standard Gaussian quadrature procedure is used for the intact cells K, while for cut-cells, where integration must be considered only over the intersection K∩Ω, the approach in “Fast and Accurate Computation of Polyhedral Mass Properties” Journal of Graphics Tools 1 (2), 31-50 may be used, which relies on the divergence theorem and simply replaces the integral over K∩Ω with an integral over ∂(K∩Ω).
The boundary integral on ∂(K∩Ω) is then easily computed using one-dimensional Gaussian quadrature. Since an inf-sup unstable finite element pair (Qf-Qf) is used, a pressure-stabilizing Petrov-Galerkin formulation is also used, including grad-div stabilization for preconditioning.
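The divergence-theorem quadrature can be illustrated for a polygonal intersection K∩Ω: the area ∫ 1 dA is rewritten as the boundary integral of x·n_x (since div(x, 0) = 1), and each straight edge is integrated with two-point Gaussian quadrature. The pentagon below is a hypothetical cut cell, not one taken from the patent.

```python
import numpy as np

def polygon_area_by_boundary(verts):
    """Area of a CCW polygon via the divergence theorem:
    area = integral over the boundary of x * n_x ds,
    with 2-point Gauss quadrature on each edge."""
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # Gauss abscissae on [-1, 1]
    gw = np.array([1.0, 1.0])                   # corresponding weights
    area = 0.0
    m = len(verts)
    for i in range(m):
        a = np.asarray(verts[i], float)
        b = np.asarray(verts[(i + 1) % m], float)
        t = b - a                               # edge tangent
        length = np.hypot(*t)
        nx = t[1] / length                      # x-component of outward normal
        # map Gauss points onto the edge: x(s) = midpoint_x + (s/2) * t_x
        xs = 0.5 * (a[0] + b[0]) + 0.5 * gp * t[0]
        area += nx * 0.5 * length * np.sum(gw * xs)
    return area

# a pentagon standing in for a cut cell's intersection with Omega
poly = [(0, 0), (1, 0), (1, 0.5), (0.5, 1), (0, 1)]
A = polygon_area_by_boundary(poly)
```

Because the integrand x·n_x is linear along each straight edge, the two-point rule integrates it exactly; the pentagon's area is the unit square minus the clipped corner triangle, i.e. 0.875.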
To solve the Navier-Stokes problem fixed-point iteration (Oseen linearization) is used and the coupled system is solved using the Schur complement; with an iterative solver (LGMRES) for the outer loops, and a direct sparse solver (UMFPACK) for the inner loops. The immersed FEM solver, and all the necessary numerical operations of the algorithm may be implemented in Python, using its standard libraries for scientific computing, namely SciPy and NumPy. Computationally intensive functions are accelerated using Numba and CuPy.
In this section results are presented of reconstruction and segmentation of noisy flow images by solving the inverse Navier-Stokes problem (2.45) using the algorithms described above. The reconstructed velocity field is then used to estimate the wall shear rate on the reconstructed boundary.
The signal-to-noise ratio (SNR) of the ux* image is defined as:
where σu*x is the standard deviation of the noise in the ux* image relative to the ground truth ux●. The total relative reconstruction error E● is defined by:
Similar measures also apply for the uy* image (and also for the z-dimension when 3D data is considered).
The volumetric flow rate Q, the cross-section area at the inlet A, and the diameter at the inlet D are all defined in setting up the problem. The Reynolds number is based on the reference velocity U=Q/A, and the reference length D.
Testing the algorithm is initially performed on a flow through a symmetric converging channel having a taper ratio of 0.67. To generate synthetic 2D Navier-Stokes data the Navier-Stokes problem (2.1) is solved for a parabolic inlet velocity profile (gi), zero-traction boundary conditions at the outlet (go=0), and Re≅534, in order to obtain the ground truth velocity u●. The synthetic data u* is then generated by corrupting the components of u● with white Gaussian noise such that SNRx=SNRy=3.
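The synthetic-data step can be reproduced in a few lines: corrupt a ground-truth field with white Gaussian noise whose standard deviation is chosen so that the SNR (mean signal magnitude over noise standard deviation, in the spirit of (3.1)) equals 3. The 1D "field" below is only a stand-in for the 2D Navier-Stokes solution.

```python
import numpy as np

rng = np.random.default_rng(0)
u_true = np.sin(np.linspace(0.0, np.pi, 256))      # ground-truth component
sigma = np.abs(u_true).mean() / 3.0                # noise level giving SNR = 3
u_star = u_true + rng.normal(0.0, sigma, u_true.shape)

snr = np.abs(u_true).mean() / sigma                # recovers SNR = 3
rel_err = np.linalg.norm(u_star - u_true) / np.linalg.norm(u_true)
```

The relative error of the raw corrupted data gives the baseline that the reconstruction errors E● quoted in the text are compared against.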
This test case only tries to infer Ω and gi. Note that, in the present method, the initial guess x0 of an unknown x equals the mean of its prior distribution. The total reconstruction error achieved is E●≅1.44%.
The results are presented in the figures, including the estimated shape uncertainty Cϕ±≡{tilde over (H)}{dot over (ζ)}, which appears as a slight broadening of the reconstructed boundary in this view.
It can be seen that the inverse Navier-Stokes problem performs very well in filtering the noise (u*−Su◯) (see in particular
Having obtained the reconstructed velocity u◯, the wall shear rate γw◯ is computable on the reconstructed boundary ∂Ω◯, which is compared with the ground truth γw● in
Next, the algorithm is tested in a channel that resembles the cross-section of a small abdominal aortic aneurysm, with Dmax/D≅1.5, where Dmax is the maximum diameter at the midsection. Synthetic images are generated for u* as in section 3.1, again for SNRx=SNRy=3, but now for Re=153. The ground truth domain Ω● has horizontal symmetry but the inlet velocity profile deliberately breaks this symmetry. The inverse problem is the same as that in Example 1 but with different input parameters (see Table 1). The initial guess Ω0 is a rectangular domain with height equal to 0.85D, centred in the image domain. For gi0 a skewed parabolic velocity profile is used, with a peak velocity of approximately 2U, that fits the inlet of Ω0. The algorithm manages to reconstruct and segment the noisy flow images in thirty-nine iterations, with total reconstruction error E●≅2.87%.
Input Parameters for the Inverse 2D Navier-Stokes Problem with Synthetic Data.
The results are presented in
The discrepancy can be seen to consist mainly of Gaussian white noise. The estimated shape uncertainty Cϕ±≡{tilde over (H)}{dot over (ζ)} provides a reasonably wide envelope around ∂Ω◯ towards the centre of the lower part of the boundary, as shown by the corresponding correlations.
As before,
Using the reconstructions u◯ and ∂Ω◯ the wall shear rate is computed and compared with the ground truth in
The flow through a converging nozzle was measured using magnetic resonance velocimetry (MRV). The nozzle converges from an inner diameter of 25 mm to an inner diameter of 13 mm, over a length of 40 mm (
The MRV system comprises radiofrequency probe 6 and a 4.7 T superconducting magnet 8. Within the bore of the magnet 8 a converging nozzle 7 (
The velocity images were acquired on a Bruker Spectrospin DMX200 with a 4.7 T superconducting magnet, which is equipped with a gradient set providing magnetic field gradients of a maximum strength of 13.1 Gcm−1 in three orthogonal directions, and a birdcage radiofrequency coil tuned to a 1H frequency of 199.7 MHz with a diameter and a length of 6.3 cm. To acquire 2D velocity images slice-selective spin-echo imaging was used combined with pulsed gradient spin-echo (PGSE) for motion encoding (see
The flow images acquired have a field of view of 84.2×28.6 mm at 512×128 pixels, giving an in-plane resolution of 165×223 μm. For velocity measurements in the net flow direction, a gradient pulse duration, δ, of 0.3 to 0.5 ms and flow observation times, Δ, of 9 to 12 ms were used. For velocity measurements perpendicular to the net flow direction, an increased gradient pulse duration, δ, of 1.0 ms and an increased observation time, Δ, of 25 to 30 ms were used, due to the lower velocity magnitudes in this direction. The amplitude, g, of the flow encoding gradient pulses was set to ±3 Gcm−1 for the direction parallel to the net flow and to ±1.5 Gcm−1 for the direction perpendicular to the net flow, in order to maximize phase contrast whilst avoiding velocity aliasing by phase wrapping. To obtain an image for each velocity component, the phase difference between two images acquired with flow encoding gradients having equal magnitude g but opposite signs was taken. To remove any phase shift contributions that are not caused by the flow, the measured phase shift of each voxel was corrected by subtracting the phase shift measured under zero-flow conditions. The gradient stabilization time used was 1 ms and the signal was acquired with a sweep width of 100 kHz. Hard 90° excitation pulses with a duration of 85 μs and a 512 μs Gaussian-shaped soft 180° pulse were used for slice selection and spin-echo refocusing.
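The phase-difference step can be illustrated for a single voxel: two complex images acquired with opposite flow-encoding gradients share the same non-flow background phase, so it cancels in the difference, and the velocity follows from a known encoding constant. The constant c below (proportional to γgδΔ) and all values are placeholders, not measured parameters.

```python
import numpy as np

c = 2.0             # hypothetical encoding constant, proportional to gamma*g*delta*Delta
v_true = 0.3        # "true" voxel velocity
background = 0.1    # non-flow phase shift, identical in both scans

img_plus = np.exp(1j * (+c * v_true + background))   # scan with +g gradient
img_minus = np.exp(1j * (-c * v_true + background))  # scan with -g gradient

dphi = np.angle(img_plus * np.conj(img_minus))       # background phase cancels
v = dphi / (2.0 * c)                                 # recovered velocity
```

Multiplying by the conjugate rather than subtracting phases keeps the result correctly wrapped into (−π, π], which is why the text's aliasing constraint on g matters.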
It was found that the T1 relaxation time of the glycerol solution was 702 ms, as measured by an inversion recovery pulse sequence. To allow for magnetization recovery between the acquisitions, a repetition time of 1.0 s was used. To eliminate unwanted coherences and common signal artefacts, such as DC offset, a four step phase cycle was used.
To be consistent with the standard definition used in MRI/MRV, the SNR of each MRV image is defined using (3.1), but with μx replaced by the mean signal intensity (images of the 1H spin density) over the nozzle domain (μI), and σu* by the standard deviation of the velocity noise. The error in the MRV measured velocity is therefore σu=σφ/(γgδΔ). To acquire high SNR images, repeated scans were averaged, leading to much longer acquisition times (see below).
To verify the quantitative nature of the MRV experiment the volumetric flow rates calculated from the MRV images (using 2D slice-selective velocity imaging in planes normal to the direction of net flow) were compared with the volumetric flow rates measured from the pump outlet. The results agree with an average error of ±1.8%.
The algorithm may also be used to reconstruct and segment the low SNR images (u*) that were acquired during the MRV experiment described above, which are then compared with the high SNR images of the same flow (u●).
The subscript ‘x’ is replaced by ‘z’, which denotes the axial component of velocity, and the subscript ‘y’ is replaced by ‘r’, which denotes the radial component of velocity. The low SNR images (SNRz=6.7, SNRr=5.8) required a total scanning time of 5.1 minutes per velocity image (axial and radial components), and the high SNR images (SNRz=44.2, SNRr=34.4) required a total scanning time of 137 minutes per velocity image.
Since the signal intensity of an MRV experiment corresponds to the 1H spin density, the spin density image is segmented using a thresholding algorithm in order to obtain a mask ψ, such that ψ=1 inside Ω (the nozzle) and ψ=0 outside Ω. ψ is considered to be the prior information for the geometry of the nozzle, which also serves as an initial guess for Ω (Ω0). For gi0 a parabolic velocity profile is taken, having a peak velocity of 0.6 U, where U≅5 cm/s is the characteristic velocity for this problem.
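The masking step can be sketched directly: threshold a (synthetic, assumed) spin-density image to obtain the binary mask ψ, equal to 1 inside Ω and 0 outside. The midpoint threshold used here is an illustrative choice standing in for whatever thresholding algorithm is applied in practice.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = np.zeros((64, 64))
rho[16:48, 8:56] = 1.0                    # bright fluid region (Omega)
rho += 0.05 * rng.standard_normal(rho.shape)   # acquisition noise

thr = 0.5 * (rho.max() + rho.min())       # simple two-mode threshold
psi = (rho > thr).astype(int)             # psi = 1 inside Omega, 0 outside
```

With well-separated intensity modes the mask reproduces the bright region exactly; in the text this ψ then serves both as prior information for the nozzle geometry and as the initial guess Ω0.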
In this case the kinematic viscosity is treated as an unknown, with a prior distribution.
The axisymmetric Navier-Stokes problem is:
for unit vectors {circumflex over (z)} and {circumflex over (r)} in the axial and radial directions respectively. The nonlinear term u·∇u retains the same form as in the Cartesian frame.
In order to compare the axisymmetric modelled velocity field with the MRV images, two new operators are introduced: i) a reflection operator, which maps the axisymmetric half-plane to the full plane, and ii) a rigid transformation of the plane. The reconstruction error is then expressed by:
An unknown variable is introduced for the vertical position of the axisymmetry axis by letting u=u(x, y+y0), where y0 is a constant. Then, the generalized gradient for y0 is:
and y0 is treated in the same way as the inverse Navier-Stokes problem unknowns {x}.
Using the input parameters of Table 2, the algorithm manages to reconstruct the noisy velocity image and reduce segmentation errors in just six iterations, with total reconstruction error E●≅5.94%.
The results for the low SNR MRV images are presented in the figures. It can be seen that the algorithm manages to filter out the noise, the outliers, and the acquisition artefacts of the low SNR MRV images depicting the axial (uz*) and radial (ur*) components of velocity.
Although the kinematic viscosity v is treated as an unknown parameter, the posterior distribution of v remains effectively unchanged. More precisely, a kinematic viscosity of v◯=3.995×10−6 is inferred, close to the prior mean, because the problem is insensitive to small changes of v (or 1/Re); as a result, the prior term in the gradient of v (equation (2.33)) dominates, i.e. the model is not informative. Physically, it is not possible to infer v (with reasonable certainty) for this particular flow without additional information on pressure.
Certainly, it is possible to smooth the boundary ∂Ω● using conventional image processing algorithms. However, the velocity field u● will not be consistent with the new smoothed boundary (the no-slip boundary condition will not be satisfied). The method proposed here for the reconstruction and segmentation of MRV images tackles exactly this problem: it infers the most likely shape of the boundary (∂Ω◯) from the velocity field itself, without requiring an additional experiment (e.g. CT, MRA) or manual segmentation using other software.
Furthermore, in this Bayesian setting it is possible to use the 1H spin density to introduce a priori knowledge of ∂Ω in the form of a prior, which would prove useful in areas of low velocity magnitudes where the velocity field itself does not provide enough information in order to segment the boundaries. As a result, the algorithm performs very well in estimating the posterior distribution of wall shear rate, a quantity which depends both on the velocity field and the boundary shape, and which is hard to measure otherwise.
Regularization is crucial in order to successfully reconstruct the velocity field and segment the geometry of the nozzle in the presence of noise, artefacts, and outliers. Regularization comes from the Navier-Stokes problems (primal and adjoint) and from the regularization of the model parameters. For example, overfitting of the shape ∂Ω is avoided by an appropriate choice of the Reynolds numbers Reζ and Reϕ for the geometric flow.
In summary, the procedure described above may be thought of as a series of process steps, as captured in the flow chart 100 depicted in the figures.
Next, at step 102, initial values for a set of unknown parameters, {x}, are received. The set of unknown parameters includes at least: a shape of the boundary, ∂Ω, within which the fluid is located; and a kinematic viscosity of the fluid, v. Optionally, the initial values may also include an inlet boundary condition for the fluid, gi, where an inlet portion exists; and/or an outlet boundary condition for the fluid, go, where an outlet portion exists.
At step 103 a model fluid velocity field, uo, is calculated, using the initial values as inputs to a Navier-Stokes problem. This problem is then solved for the velocity field within and at the boundary.
In order to optimise the input parameters, an error functional is constructed in step 104. This error functional is the sum of three terms: a data term, which penalises deviations in uo from the received MRV data; a prior term, which penalises outcomes in uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; and a model term, which penalises deviations in uo from the Navier-Stokes problem.
At step 105, a generalised gradient of the error functional with respect to each of the unknown parameters in the set {x} is determined. These generalised gradients are then used in step 106 to determine a direction in which to perturb each parameter, by evaluating each generalised gradient at the initial value of its respective parameter. The generalised gradients allow the determination of a direction of perturbation of that parameter which will lead to a reduction in the magnitude of the error functional.
The initial value for each parameter is then updated to an improved parameter by taking a step in the direction determined by the evaluation of the generalised gradient.
From this point, an improved velocity field can be extracted. At step 107, the MRV data is reconstructed by outputting the improved values from step 106 and calculating the reconstructed velocity, u, across at least the region Ω, based on the improved values. As noted above, the procedure may be iterative, in the sense that some of the steps may be repeated multiple times to converge on the set of values that best minimises the error functional.
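The steps above can be sketched as a high-level skeleton. The helper names below (solve_navier_stokes, error_functional, gradients) are hypothetical placeholders for the solvers described in the text, here given trivial stand-in implementations so the loop structure is runnable.

```python
import numpy as np

def solve_navier_stokes(params):          # step 103: model velocity field u_o
    return params                          # trivial stand-in for the solver

def error_functional(u_model, u_data, params):   # step 104: the functional
    return float(np.sum((u_model - u_data) ** 2))

def gradients(u_model, u_data, params):   # step 105: generalised gradients
    return 2.0 * (u_model - u_data)

u_data = np.array([0.5, 1.0])             # received MRV data
params = np.zeros(2)                       # initial unknowns {x} (step 102)

for _ in range(50):
    u_model = solve_navier_stokes(params)
    g = gradients(u_model, u_data, params)
    params = params - 0.25 * g             # step 106: perturb each parameter

u_reconstructed = solve_navier_stokes(params)   # step 107: output
```

With these trivial stand-ins the loop contracts geometrically onto the data; in the real procedure each call is a PDE solve, but the control flow is the same.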
Reconstruction of Sparse k-Space Magnetic Resonance Velocimetry Data
The discussion now moves on to an extension of the methods set out above, in which the MRV data is supplied as sparse MRV data. It has been realised by the inventors that the Navier-Stokes equation contains much more information than is exploited by purely mathematical variational methods. This permits reconstruction and/or segmentation of even sparser signals.
MRV data may be supplied in a sparse format for many reasons, but the primary motivation for developing an algorithm to handle sparse MRV data is to allow a reduction in MRV signal acquisition time, and thereby to reduce the overall time needed to acquire high quality, low noise images.
The discussion above assumes that the MRV samples have already been processed to extract the flow (i.e. velocity field) information. This is typically performed by acquiring signals, s, in frequency space (sometimes referred to as k-space), and taking the inverse Fourier transform of s. The magnitude and phase of the result are then used to extract information on the density and velocity of fluids in the imaging region.
To extract velocity, at least 4 signals (4 individual scans) are required, each signal acquired with a specific magnetic gradient pulse. The velocity image along a direction is then recovered by combining the 4 phase images, i.e. the arguments of the inverse Fourier transforms of si for i=1, 2, 3, 4.
Consequently, to measure a 2D velocity field at least 8 scans are needed, and for a 3D velocity field at least 12 scans are required. When k-space is fully sampled there is a unique correspondence between the signal s and the measured velocity field u*, and the methodology presented above is then generally preferable because it is simpler. However, when k-space is sparsely sampled there is not a unique correspondence between s and u*, because s is not measured at every point of the frequency space (there are gaps in k-space). One option is to fill these gaps with zeros and then take the inverse Fourier transform, but this introduces artefacts and produces images of low quality.
A comparison between zero-filling and reconstruction is shown schematically in
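The effect of zero-filling can be demonstrated numerically with a toy 1D example (an assumption-laden illustration, not the patent's method): undersampling k-space and zero-filling the gaps before the inverse FFT degrades the image relative to the fully sampled reconstruction.

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros(256)
img[96:160] = 1.0                         # simple 1D "image"

s = np.fft.fft(img)                       # fully sampled k-space signal
keep = rng.random(s.shape) < 0.4          # retain only ~40% of the samples
keep[:8] = True                           # always keep some low frequencies
keep[-8:] = True
s_sparse = np.where(keep, s, 0.0)         # zero-fill the gaps

recon_full = np.fft.ifft(s).real
recon_zf = np.fft.ifft(s_sparse).real

err_full = np.linalg.norm(recon_full - img)
err_zf = np.linalg.norm(recon_zf - img)   # noticeably larger: ringing artefacts
```

The fully sampled transform inverts to machine precision, while the zero-filled reconstruction carries a substantial residual, which is the artefact level that the model-based reconstruction described below is designed to remove.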
In order to obtain reconstructed images of similar quality to the method set out above, the functional to be minimised is changed to the following form:
in which two of the terms have the same form as set out above, and a further term is a velocity regularisation term, which penalises non-compliance between the measured data uk* and the modelled data uk, and has the form:
where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cuk is the covariance operator associated with the measured velocity uk*.
Similarly, a further error term helps to ensure consistency between the k-space sampling and the modelled data, and has the form:
where n is the number of samples (and is less than the dimension m of the frequency space in which samples are taken, which defines the “sparse sampling” condition; in other words n<<m), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur, the norm is a Euclidean norm in the space of complex numbers, the underlying distribution is a multivariate normal distribution with variance σ, the sampling operator is a sparse sampling operator mapping reconstructed data back to the sparse raw data space, the transform operator is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√{square root over (−1)} and φj is the reconstructed phase of the jth data set.
Finally, the segmentation regularisation term is a function of the two-level segmentation constants αj, βj of the Chan-Vese algorithm, corresponding to a given magnitude image, ρj; in other words it is a function of (αj, βj, ρj).
In the above, uk* is the measured velocity in direction k, given from the phase information from each of 12 scans (in general, for 3D information), as follows:
for known constants cj. Clearly, where only 1 or 2 dimensions are required, only the first line or the first two lines need be considered, and only 4 or 8 scans are required.
The two additional terms introduce 4×4 additional unknown parameters to solve for, for each physical dimension being imaged: αj, βj, ρj and σj for each of the four scans (j) per dimension.
In further examples, prior information from the nuclear spin density (magnitude signal) is incorporated into the procedure to derive an improved boundary shape/location. To exploit information about ∂Ω from the nuclear spin density images, an energy-based segmentation functional is used, which assumes that the image consists of two regions with approximately uniform magnitudes. The functional is given by:
where the mean value of the average magnitude inside Ω is α and outside Ω is β. In addition, σρ denotes the variance of the magnitude image, and the (modified) Heaviside function used in the functional takes the value 1 if its argument t<0 and 0 otherwise. Taking into account the prior contribution, the combined functional is given by:
Therefore, to reduce signal acquisition time further, more accurate spin density (magnitude) images can be obtained separately (without encoding velocity) and used as priors for ∂Ω(ϕ±). Similarly, d−1 dimensional scans can be obtained for the inlet velocity boundary condition, and used as priors for gi.
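The two-level Chan-Vese idea above can be sketched directly: for a fixed region, the optimal constants α (inside) and β (outside) are the region means of the magnitude image, and the energy is the summed squared deviation from those levels. The synthetic image and region below are illustrative stand-ins, and the simple energy omits the length and variance weightings of the full functional.

```python
import numpy as np

rng = np.random.default_rng(4)
yy, xx = np.mgrid[-32:32, -32:32]
r = np.hypot(xx, yy)

rho = np.where(r < 20, 1.0, 0.2)          # two-level magnitude image
rho += 0.02 * rng.standard_normal(rho.shape)
inside = r < 20                           # current region Omega

alpha = rho[inside].mean()                # optimal level inside Omega
beta = rho[~inside].mean()                # optimal level outside Omega
energy = (np.sum((rho[inside] - alpha) ** 2)
          + np.sum((rho[~inside] - beta) ** 2))
```

When the region matches the bright disc, the fitted levels recover the underlying intensities and the energy reduces to the noise energy, which is why minimising this functional drives the segmentation toward the true boundary.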
An algorithm to capture the general methods set out herein can be written as follows:
In line with the above, zero-filled reconstructions of the sparsely sampled images exhibit the artefacts discussed above. By contrast, the application of the full reconstruction method set out above results in substantially improved, artefact-free velocity images.
This illustrates the power in using a robust physical and statistical model to infer the parameters from noisy and incomplete input data, since non-physical features of the velocity images are removed, providing noiseless velocity fields that satisfy the fluid flow (Navier-Stokes) equations and the no-slip condition on the boundary (walls).
In summary, the procedure described above may be thought of as a series of process steps, as captured in the flow chart 200 depicted in the figures.
At step 202, initial values for a set of unknown parameters, {x}, are received, which include at least: a shape of the boundary, ∂Ω, within which the fluid is located, and a kinematic viscosity of the fluid, v. Optionally, an inlet boundary condition for the fluid, gi, may also be supplied where an inlet portion exists, and/or an outlet boundary condition for the fluid, go, where an outlet portion exists. Initial values for the density and phase of the MRV data, and for the two parameters of the two-level Chan-Vese segmentation algorithm, may also be supplied, or extracted from the MRV data, for use in the procedure at this stage.
At step 203, a model fluid velocity field, uo, is calculated by using the initial values as inputs to a Navier-Stokes problem, which is then solved for the velocity field within and at the boundary. The measured velocity field, u*, is calculated by performing an inverse Fourier transform, ℱ−1sj, on each signal sj and deriving u* from the arguments of ℱ−1sj.
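The derivation of the measured velocity field u* from the arguments of the inverse Fourier transform of each signal can be sketched as follows; the linear mapping from phase to velocity via a velocity-encoding scale `venc` is an illustrative assumption.

```python
import numpy as np

def measured_velocity(s_j, venc):
    """Recover a measured velocity component u* from one k-space signal s_j.

    The complex image is the inverse Fourier transform of the signal, and
    the velocity is derived from its argument (phase). The linear
    phase-to-velocity scaling via `venc` is an assumption for illustration.
    """
    image = np.fft.ifft2(s_j)          # F^{-1} s_j
    phase = np.angle(image)            # argument of the complex image
    return venc * phase / np.pi        # map phase in (-pi, pi] to velocity
```

Forward-simulating a signal from a known velocity and passing it through this function recovers that velocity, which is a useful sanity check on the sign and scaling conventions.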
At step 204, an error functional is constructed, of the form:
in which: a first term penalises deviations in uo from the received MRV data; a second term penalises outcomes in uo which arise from statistically unlikely values of the parameters in the set of unknown parameters, {x}; a third term penalises deviations in uo from the Navier-Stokes problem; a fourth term is a segmentation term for partitioning the raw data; and a fifth term penalises deviations in uo from the calculated measured velocity field u*.
The penalty term which penalises deviations of uo from the measured velocity field u* may have the form:
where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cu is a covariance operator associated with the measured velocity.
Similarly, the penalty term which penalises deviations from the received MRV data may have the form:
where n is the number of samples (which is less than the dimension of the frequency space in which samples are taken, defining the “sparse sampling” condition), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur, ‖·‖ represents a Euclidean norm in the space of complex numbers representing a multivariate normal distribution, σ is the variance of that distribution, the sparse sampling operator maps the reconstructed data back to the sparse raw data space, ℱ is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√(−1), and φj is the reconstructed phase of the jth data set.
At step 205, a generalised gradient of the error functional is determined with respect to each of the unknown parameters in the set {x}.
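A minimal sketch of a data penalty of this general form is given below, with a binary mask standing in for the sparse sampling operator and unit weighting assumed; the function name and its normalisation are illustrative, not the exact claimed functional.

```python
import numpy as np

def data_penalty(s_list, rho_list, phi_list, mask, sigma):
    """Penalty comparing sparse k-space data s_j with the model prediction.

    For each data set j, the model image rho_j * exp(i*phi_j) is Fourier
    transformed and sparsely sampled by the boolean `mask` (standing in
    for the sampling operator), then compared with the measured samples.
    """
    total = 0.0
    for s_j, rho_j, phi_j in zip(s_list, rho_list, phi_list):
        model = np.fft.fft2(rho_j * np.exp(1j * phi_j))    # F(rho_j e^{i phi_j})
        residual = (s_j - model)[mask]                     # sampled entries only
        total += np.sum(np.abs(residual) ** 2) / sigma**2  # squared complex norm
    return total
```

When the model magnitude and phase exactly reproduce the measured signal, the penalty vanishes; any mismatch at the sampled k-space locations contributes quadratically.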
At step 206, for each of the unknown parameters in the set {x}, the generalised gradient for that parameter, evaluated at the initial values of all of the parameters, is used to determine a direction in which perturbing that parameter reduces the magnitude of the error functional. From this, improved values for each of the unknown parameters in the set {x} can be determined.
Finally, at step 207, the MRV data are reconstructed by outputting the improved values from step 206, calculating the reconstructed velocity, u, across at least the region Ω, and reconstructing a phase and magnitude of the received raw data from the reconstructed velocity field, u.
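The iterative procedure of steps 202 to 207 can be sketched generically as a gradient-descent loop; here `grad` stands in for the generalised gradient of the error functional with respect to the unknowns {x}, and the fixed step size is an illustrative assumption.

```python
def reconstruct(params, grad, n_iter=50, step=0.1):
    """Generic sketch of steps 202-207.

    `params` holds initial values of the unknowns {x} (step 202);
    `grad` is a placeholder for the generalised gradient of the error
    functional (step 205). Each iteration perturbs every parameter in
    the direction that reduces the functional (step 206), yielding the
    improved values used for reconstruction (step 207).
    """
    for _ in range(n_iter):
        g = grad(params)  # step 205: gradient w.r.t. each unknown in {x}
        # step 206: move each parameter in its descent direction
        params = {k: v - step * g[k] for k, v in params.items()}
    return params
```

With a simple quadratic surrogate functional, for example, the loop converges to the minimising parameter values in a few tens of iterations.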
The sampling density can be further decreased (for fixed reconstruction error) if the a priori information on {x} becomes more accurate. The present application also discloses the finding that when the Navier-Stokes problem is an accurate and descriptive model, only the boundary ∂Ω, the boundary conditions gi, go, and the kinematic viscosity are needed in order to find the velocity field. The interplay between the full flow field and these parameters is illustrated in
This approach to physics-informed compressed sensing extends the standard notion of sparsity used in conventional compressed sensing methods to a more general notion of structure, which is dictated by the N-S problem. Instead of enforcing sparsity during the nonlinear optimization process, a sparse (hidden) structure of the velocity field is recovered by enforcing the Navier-Stokes problem as a constraint. This is what
This approach leads to the compression and decompression algorithms described herein and illustrated generally in
At step 302, the method extracts from the data a modelled flow field characterised by the set of parameters {x}, including the shape of the boundary, ∂Ω, enclosing the region Ω, the kinematic viscosity, v, of the fluid, and optionally the inlet and outlet boundary conditions, gi and go, respectively, where an inlet and/or outlet is present in the system being modelled.
The compressed form of the modelled flow field is then provided in step 303 in which the compressed form is represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists respectively. This form of the modelled flow field requires much less storage capacity and/or transmission bandwidth to store or transmit. In particular the full flow field may be discarded once the compressed form has been provided, and the compressed version may be locally stored for later use, or transmitted elsewhere.
Turning now to
At step 352, the parameters {x} are supplied as inputs to a Navier-Stokes problem. Finally, at step 353, the Navier-Stokes problem is solved, thereby calculating the modelled flow field consistent with the inputs (i.e. the parameters {x}) and the Navier-Stokes equations.
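As an illustrative sketch of the compressed representation, the structure below retains only the parameters {x}; the `decompress` method delegates to a user-supplied Navier-Stokes solver (hypothetical here), mirroring steps 352 and 353. The field names and array shapes are assumptions for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CompressedFlow:
    """Compressed representation: only the parameters {x} are retained."""
    boundary: np.ndarray   # sampled points describing the boundary shape dOmega
    viscosity: float       # kinematic viscosity v
    inlet_bc: np.ndarray   # inlet boundary condition g_i (where an inlet exists)
    outlet_bc: np.ndarray  # outlet boundary condition g_o (where an outlet exists)

    def decompress(self, solver):
        """Recover the full flow field by solving the Navier-Stokes problem.

        `solver` is a user-supplied routine (hypothetical) that takes the
        stored parameters and returns the full modelled flow field.
        """
        return solver(self.boundary, self.viscosity, self.inlet_bc, self.outlet_bc)
```

The storage saving is substantial: a boundary sampled at a few hundred points, a scalar viscosity, and low-dimensional boundary conditions replace a full 3D, three-component velocity field.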
Note that the decompression protocols in
Another way to further decrease the sampling density is briefly discussed now. In the examples above, k-space is only sparsely sampled. In order to provide a good test of the theory, fully sampled k-space signals are used to provide a baseline for comparison. The fully sampled signals are then passed through a sparse sampling filter to restrict the received information and simulate the effect of an incomplete signal being received.
There is a choice in the design of the sparse sampling filter, and the results of applying each candidate design can be compared with one another. These results may then be used to design sparse sampling patterns which provide the best results for a given sampling density. For example, the inventors have tested two probabilistic sampling patterns in k-space: a 1D and a 2D Gaussian distribution.
Using this approach it has been found that the 2D Gaussian distribution sampled at 15% coverage of the k-space region produces slightly better results than the 1D Gaussian distribution generated at 25% coverage. Note that the 2D case adequately covers the centre of k-space, while the 1D case leaves gaps. This can be corrected by fully-sampling the central region of k-space and then sparsely-sampling regions that are not as important. To identify the important regions, the algorithm set out herein can be used to backpropagate the errors from the velocity images to the k-space. In general, this approach can be used to identify important k-space regions and tailor the raw data acquisition to capture information in those regions, leaving the less important regions with only sparse k-space information.
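The two probabilistic sampling patterns can be sketched as follows; the Gaussian width `sigma_frac` and the normalisation to a target coverage are illustrative assumptions. The 1D scheme samples whole phase-encode lines, while the 2D scheme samples individual k-space points with a radial Gaussian density.

```python
import numpy as np

def gaussian_mask(shape, coverage, ndim, rng=None, sigma_frac=0.25):
    """Probabilistic k-space sampling mask (1D line-based or 2D point-based).

    Sampling probability is highest at the centre of k-space and falls off
    as a Gaussian; `coverage` is the target fraction of samples retained.
    The width `sigma_frac` is an illustrative assumption.
    """
    rng = np.random.default_rng(rng)
    ny, nx = shape
    if ndim == 1:
        # sample whole phase-encode lines with a Gaussian density over ky
        ky = np.arange(ny) - ny / 2
        p = np.exp(-ky**2 / (2 * (sigma_frac * ny) ** 2))
        p *= coverage * ny / p.sum()          # normalise to the target coverage
        lines = rng.random(ny) < np.clip(p, 0, 1)
        return np.repeat(lines[:, None], nx, axis=1)
    # 2D: sample individual points with a radial Gaussian density
    ky, kx = np.meshgrid(np.arange(ny) - ny / 2,
                         np.arange(nx) - nx / 2, indexing="ij")
    p = np.exp(-(ky**2 + kx**2) / (2 * (sigma_frac * min(shape)) ** 2))
    p *= coverage * p.size / p.sum()
    return rng.random(shape) < np.clip(p, 0, 1)
```

Because the 2D pattern concentrates points near the centre of k-space in both directions, it covers the low-frequency region more densely than the 1D line-based pattern at the same coverage, consistent with the comparison described above.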
Any of the methods set out herein may be implemented on a computer. As shown in
The processor 402 is provided for performing the method steps set out in detail above. The processor 402 may for example have software, firmware, application specific integrated circuits, etc. which contain instructions which, when run on the processor 402, cause the method steps to be executed. In some cases, the software may be provided separately on a physical or other tangible medium, for running on a general purpose computer, to provide the method set out above, and derive the advantage of fast processing of MRV data to achieve high quality, low noise MRV data.
Dashed lines indicate optional features. In this case, the output module 408 may feed into either the receiving module 406 or the processor 402, in order to provide the iterative improvements to the model parameters discussed above.
Also optional is the presence of a magnetic resonance imaging (MRI) or magnetic resonance velocimetry (MRV) system 410. This MRI/MRV imaging system 410 may feed data directly into the processor 402, or indirectly into the processor via input module 404 or receiving module 406. In any event, the MRV data is supplied to the processor from the MRI/MRV imaging system 410 either in a raw (possibly incomplete) k-space format (see Example 4), or in the form of a raw (unprocessed) velocity field (see Example 3).
Note that although shown as separate modules, more than one of the modules may form part of a single entity capable of fulfilling multiple roles simultaneously. As a particular example of this, the MRI/MRV imaging system may include the processor 402, input module 404, receiving module 406, and/or output module 408. That is to say the invention extends to an MRI/MRV imaging system which includes the processing capabilities discussed above to reconstruct the raw image data and thereby provide a single system capable of outputting high quality, low-noise MRV image data in accordance with the present disclosure.
Number | Date | Country | Kind
---|---|---|---
2111141.4 | Aug 2021 | GB | national
2207201.1 | May 2022 | GB | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/GB2022/052027 | 8/2/2022 | WO |