METHODS AND SYSTEMS FOR IMPROVED RECONSTRUCTION OF MAGNETIC RESONANCE VELOCIMETRY DATA AND A METHOD OF COMPRESSING FLOW DATA OF A FLUID

Information

  • Patent Application
  • 20250004084
  • Publication Number
    20250004084
  • Date Filed
    August 02, 2022
  • Date Published
    January 02, 2025
Abstract
The application relates to methods and systems for reconstructing flow data from Magnetic Resonance Velocimetry data. Reconstruction occurs based on physical considerations of the system, and in particular leverages the Navier-Stokes equations to provide physically realistic outputs. The application also discloses systems and methods for compressing and decompressing measured flow data of a fluid. The method begins by receiving raw data encoding information in an imaging region, I, relating to a fluid velocity field, u*, within a subset of the imaging region I forming a region, Ω, enclosed by a boundary, ∂Ω. The boundary includes at least a physical boundary portion, Γ. From this data, a modelled flow field over the region Ω is extracted and a shape of the boundary ∂Ω is derived along with a kinematic viscosity, v, of the fluid. The modelled flow field is compressed by representing the flow field with the shape of the boundary, the kinematic viscosity, and inlet and/or outlet boundary conditions where an inlet and/or outlet exists.
Description
FIELD OF THE INVENTION

The present invention relates to processing of Magnetic Resonance Velocimetry (MRV) data, including for example to reconstruction of noisy or incomplete data. In addition, techniques described herein relate to compression and decompression of flow fields extracted from such data.


BACKGROUND

Experimental measurements of fluid flows inside or around an object often produce velocity images which contain noise. For example, magnetic resonance velocimetry (MRV) (Fukushima 1999; Mantle & Sederman 2003; Elkins & Alley 2007) can measure all three components of a time varying velocity field as set out in “Nuclear magnetic resonance as a tool to study flow” Annual Review of Fluid Mechanics 31, 95-123 (1999), “Dynamic MRI in chemical process and reaction engineering”. Progress in Nuclear Magnetic Resonance Spectroscopy 43 (1), 3-60 (2003), and “Magnetic resonance velocimetry: applications of magnetic resonance imaging in the measurement of fluid motion” Experiments in Fluids 43 (6), 823-858 (2007). However, the measurements become increasingly noisy as the spatial resolution is increased.


To achieve an image of acceptable signal to noise ratio (SNR), repeated scans are often required, leading to long signal acquisition times. To address that problem, fast acquisition protocols (pulse sequences) can be used, but these may be difficult to implement and can lead to artefacts depending on the magnetic relaxation properties and the magnetic field homogeneity of the system studied. Another way to accelerate signal acquisition is by using sparse sampling techniques in conjunction with a reconstruction algorithm. The latter approach is an active field of research, commonly referred to as compressed sensing as discussed in “Compressed sensing” IEEE Transactions on Information Theory 52 (4), 1289-1306 (2006), “Sparse MRI: The application of compressed sensing for rapid MR imaging” Magnetic Resonance in Medicine 58 (6), 1182-1195 (2007), “Phase reconstruction from velocity-encoded MRI measurements—A survey of sparsity-promoting variational approaches” Journal of Magnetic Resonance 238, 26-43 (2014), “Joint phase reconstruction and magnitude segmentation from velocity-encoded MRI data” pp. 1-22, arXiv: 1908.05285 (2019).


Compressed sensing algorithms exploit a priori knowledge about the structure of the data, which is encoded in a regularization norm (e.g. total variation, wavelet bases), but without considering the physics of the problem.


For images depicting fluid flow, a priori knowledge can come in the form of a Navier-Stokes problem. The problem of reconstructing and segmenting a flow image then can be expressed as a generalized inverse Navier-Stokes problem whose flow domain, boundary conditions, and model parameters have to be inferred in order for the modelled velocity to approximate the measured velocity in an appropriate metric space. This approach not only produces a reconstruction that, by construction, is an accurate fluid flow inside or around the object (a solution to a Navier-Stokes problem), but also provides additional physical knowledge (e.g. pressure), which is otherwise difficult to measure.


Inverse Navier-Stokes problems have been intensively studied during the last decade, mainly enabled by the increase of available computing power. Recent applications in fluid mechanics range from the forcing inference problem (discussed in “Determining white noise forcing from Eulerian observations in the Navier-Stokes equation” Stochastics and Partial Differential Equations: Analysis and Computations 2 (2), 233-261 (2014)), to the reconstruction of scalar image velocimetry (SIV) (see “A space-time integral minimisation method for the reconstruction of velocity fields from measured scalar fields” Journal of Fluid Mechanics 854, 348-366 (2018); “Analytic reconstruction of a two-dimensional velocity field from an observed diffusive scalar” Journal of Fluid Mechanics 871, 755-774, arXiv: 1904.04919 (2019)) and particle image velocimetry (PIV) (“Data assimilation method to de-noise and de-filter particle image velocimetry data” Journal of Fluid Mechanics 877, 196-213 (2019)) signals, and the identification of optimal sensor arrangements (“Optimal sensor placement for variational data assimilation of unsteady flows past a rotationally oscillating cylinder” Journal of Fluid Mechanics 823, 230-277 (2017); “Optimal sensor placement for artificial swimmers” Journal of Fluid Mechanics 884, arXiv: 1906.07585 (2019)). Regularization methods that can be used for model parameters are reviewed in “Inverse problems: A Bayesian perspective” Acta Numerica 19, 451-559 (2010) from a Bayesian perspective and in “Modern regularization methods for inverse problems” Acta Numerica 27, 1-111, arXiv: 1801.09922 (2018) from a variational perspective. The well-posedness of Bayesian inverse Navier-Stokes problems is addressed in “Bayesian inverse problems for functions and applications to fluid mechanics” Inverse Problems 25 (11) (2009).


Recently, “Boundary control in computational haemodynamics” Journal of Fluid Mechanics 847, 329-364 (2018) treats the reduced inverse Navier-Stokes problem of finding only the Dirichlet boundary condition for the inlet velocity that matches the modelled velocity field to MRV data for a steady 3D flow in a glass replica of the human aorta. The authors measure the model-data discrepancy using the L2-norm and introduce additional variational regularization terms for the Dirichlet boundary condition. The same formulation is extended to periodic flows in “Fourier Spectral Dynamic Data Assimilation: Interlacing CFD with 4D Flow MRI.” Tech. Rep. Research Report No. 2019-56, ETH Zurich. (2019) and in “Harmonic balance techniques in cardiovascular fluid mechanics” In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019, pp. 486-494. Cham: Springer International Publishing (2019), which uses the harmonic balance method for the temporal discretization of the Navier-Stokes problem. “Variational data assimilation for transient blood flow simulations: Cerebral aneurysms as an illustrative example” International Journal for Numerical Methods in Biomedical Engineering 35 (1), 1-27 (2019) addresses the problem of inferring both the inlet velocity (Dirichlet) boundary condition and the initial condition, for unsteady blood flows and 4D MRV data, with applications to cerebral aneurysms.


The above studies consider rigid boundaries and require a priori an accurate, and time-averaged, geometric representation of the blood vessel or other boundary containing the fluid. To find the shape of the flow domain, e.g. the blood vessel boundaries, computed tomography (CT) or magnetic resonance angiography (MRA) is often used. The acquired image is then reconstructed, segmented, and smoothed. This process not only requires substantial effort and the design of an additional experiment (e.g. CT, MRA), but it also introduces geometric uncertainties (“Computational fluid dynamics modelling in cardiovascular medicine” Heart 102 (1), 18-28 (2016); “Uncertainty quantification in coronary blood flow simulations: Impact of geometry, boundary conditions and blood viscosity” Journal of Biomechanics 49 (12), 2540-2547 (2016)), which, in turn, affect the predictive confidence of arterial wall shear stress distributions and their mappings (“Wall Shear Stress: Theoretical Considerations and Methods of Measurement” Progress in Cardiovascular Diseases 49 (5), 307-329 (2007)).


For example, “Variational data assimilation for transient blood flow simulations: Cerebral aneurysms as an illustrative example” International Journal for Numerical Methods in Biomedical Engineering 35 (1), 1-27 (2019) reports discrepancies between the modelled and the measured velocity fields near the flow boundaries, which the authors suspect are caused by geometric errors that were introduced during the segmentation process. In general, the assumption of rigid boundaries either implies that a time-averaged geometry has to be used, or that an additional experiment (e.g. CT, MRA) has to be conducted to register the moving boundaries to the flow measurements.


This application addresses the problem of simultaneous reconstruction and segmentation of velocity fields, to attempt to overcome some or all of the drawbacks of the disclosures discussed above.


SUMMARY OF THE INVENTION

The present application provides a more consistent approach to the problem by treating the blood vessel geometry as an unknown when solving a generalized inverse Navier-Stokes problem. In this way, the inverse Navier-Stokes problem simultaneously reconstructs and segments the velocity fields and can better adapt to the MRV experiment by correcting the geometric errors and improving the reconstruction.


In this application the problem of simultaneous reconstruction and segmentation of velocity fields is addressed by formulating a generalized inverse Navier-Stokes problem, whose flow domain, boundary conditions, and model parameters are all considered unknown. To regularize the problem, a Bayesian framework and Gaussian measures in Hilbert spaces are used. This further allows the posterior Gaussian distributions of the unknowns to be estimated using a quasi-Newton method, and thereby the uncertainties of the unknowns to be estimated by approximating their posterior covariance with that method, which has not yet been addressed for this type of problem. An algorithm (or method) for the solution of this generalized inverse Navier-Stokes problem is provided. Detailed demonstrations on synthetic images of 2D steady flows and real MRV images of a steady axisymmetric flow are provided to highlight the quality of the reconstruction.


The methods described herein formulate and solve a generalized inverse Navier-Stokes problem for the joint reconstruction and segmentation of noisy flow velocity images. This amounts to a reduction of the total scanning time. In some examples shown below, there is a reduction in scanning time by a factor of 27, for example. At the same time, the method provides additional knowledge about the physics of the flow (e.g. pressure), and addresses the shortcomings of MRV (low spatial resolution and partial volume effect) that hinder the accurate estimation of wall shear stresses using existing methods. The images derived by the methods disclosed herein may be further post-processed in order to either reveal obscured flow patterns or to extract a quantity of interest (e.g. pressure, wall shear stress, etc.).


Disclosed herein is a computer implemented method for reconstruction of magnetic resonance velocimetry, MRV, data, the method comprising the steps of: (a) receiving MRV data encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; and wherein the region Ω is a subset of an imaging region, I, over which the MRV data encodes information; (b) receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v, and optionally an inlet boundary condition for the fluid, gi where an inlet portion exists; and/or an outlet boundary condition for the fluid, go where an outlet portion exists; (c) calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary; (d) constructing an error functional, 𝒥, of the form:






\mathcal{J} = \mathcal{J}_{u^{*}} + \mathcal{J}_{x} + \mathcal{J}_{NS}





in which 𝒥u* is a penalty term which penalises deviations in uo from the received MRV data; 𝒥x is a penalty term which penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}, and 𝒥NS is a penalty term which penalises deviations in uo from the Navier-Stokes problem; (e) determining a generalised gradient of 𝒥 with respect to each of the unknown parameters in the set {x}; (f) for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, to determine a direction in which perturbing each parameter reduces the magnitude of 𝒥; and thereby determining improved values for each of the unknown parameters in the set {x}; and (g) reconstructing the MRV data by outputting the improved values from step (f) and calculating the reconstructed velocity, u, across at least the region Ω.


The MRV data may be supplied “live”, i.e. from a measurement taken just prior to the method being enacted on that data, or the method may be performed on MRV data taken at an earlier time, at a different location, etc. Receiving initial values can take many forms. For example, they may be supplied by a user (e.g. in the form of an estimate or best guess), or they may be derived from the data by calculation. For example, estimates for the boundary location and inlet boundary condition can be obtained from the MRV data itself. The method (sometimes referred to as the algorithm) may include features which automatically estimate the boundary location and/or inlet boundary condition from the MRV data and propose a suitable initial value, or a user may use the data to extract the initial values manually.


“Values” is used here in the most general sense, and includes e.g. functions for boundary conditions and topographical information relating to the location and/or shape of the boundary. In general, “values” refers to a set of values defined over a domain (e.g. along a boundary, or portion of boundary, across the entire imaging area, etc.). Improved values means values which are improved relative to the initial guess. “Improved” in this sense means values for the parameters {x} which result in a lower magnitude of 𝒥, relative to the magnitude of 𝒥 calculated with the initial values for the parameters.


In some cases (e.g. 2D imaging of a tube viewed with its axis into/out of the imaging plane, modelling of a complete 3D system), there may not be an inlet or an outlet, and it is in this sense that the inclusion of inlet and/or outlet boundary conditions is optional.


The above procedure uses physics considerations (via the Navier-Stokes equations) and statistical likelihood to reduce the space of suitable reconstructions. In general the direction of change of the unknown parameters which causes a reduction in 𝒥 is calculated independently for each parameter, and the collective result of changing all the parameters in their respective directions is assessed by considering whether the collective result also reduces the magnitude of 𝒥. Where a full global inverse Hessian reconstruction is possible to calculate, the directions of change of each parameter could be chosen to guarantee that the collective effect of all changes in parameters is to reduce the magnitude of 𝒥.
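By way of illustration only, the following is a minimal Python sketch of the iterative structure just described (steps (c) to (f)), in which the Navier-Stokes solve, the adjoint-based generalised gradients and the penalty terms are replaced by toy stand-ins; the function names (solve_navier_stokes, functional, gradients) and all numerical values are purely illustrative assumptions and do not form part of the disclosed algorithm.

import numpy as np

# Hypothetical stand-ins: in a real implementation these would wrap the
# Navier-Stokes solver, the three penalty terms and the adjoint-based gradients.
def solve_navier_stokes(x):
    # toy "model": returns a modelled velocity field u_o for parameters x
    return np.tanh(x)

def functional(x, u_star):
    # J = J_u* + J_x + J_NS, with toy quadratic stand-ins for each term
    u_o = solve_navier_stokes(x)
    j_data = 0.5 * np.sum((u_star - u_o) ** 2)   # misfit to the measured data
    j_prior = 0.5 * np.sum((x - x_bar) ** 2)     # penalises statistically unlikely parameters
    j_ns = 0.0                                   # zero here: u_o is produced by the model itself
    return j_data + j_prior + j_ns

def gradients(x, u_star, eps=1.0e-6):
    # stand-in for the adjoint-based generalised gradients (finite differences here)
    g = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (functional(x + dx, u_star) - functional(x - dx, u_star)) / (2.0 * eps)
    return g

x_bar = np.zeros(3)                   # prior values for the unknown parameters {x}
x = x_bar.copy()                      # initial values, step (b)
u_star = np.array([0.2, -0.4, 0.7])   # toy "measured" velocity data

step = 0.5
for k in range(50):
    g = gradients(x, u_star)          # step (e): generalised gradients of J
    x_new = x - step * g              # step (f): perturb each parameter downhill
    if functional(x_new, u_star) < functional(x, u_star):
        x = x_new                     # accept: the collective update reduces J
        step *= 1.5
    else:
        step *= 0.5                   # reject the trial update and shrink the step
print("improved parameters:", x)

In a full implementation the gradients would be obtained from the adjoint Navier-Stokes problem described below rather than by finite differences, and the acceptance test corresponds to checking that the collective update reduces 𝒥.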


It is an interesting and unexpected effect that starting with less information (that is by also treating the boundary location as unknown and allowing this to be optimised along with the other unknown parameters) allows for improved image reconstruction. This is important because MRV data is time consuming to obtain and obtaining low noise data is yet more time consuming as many scans must be made and averaged. By providing an efficient and reliable reconstruction algorithm, the overall time to acquire low-noise MRV data can be reduced substantially.


Optionally, the MRV data is raw data in the form of a series of incomplete frequency space data sets, sj; wherein: the method includes the step of calculating the measured velocity field, u*, by performing an inverse Fourier transform on each sj, ℱ−1sj, and deriving u* from the arguments of ℱ−1sj; the construction of the error functional modifies the 𝒥u* penalty term to a raw-data penalty term, 𝒥s, of the form:






\mathcal{J}_{s} = \frac{1}{2 n \sigma^{2}} \sum_{j=1}^{4d} \left\| s_{j} - \mathcal{S}\mathcal{F}\!\left( \rho_{j}\, e^{i \varphi_{j}} \right) \right\|^{2}_{\ell^{2}(\mathbb{C}^{n})}







in which j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur, the weighted norm

\frac{1}{2 n \sigma^{2}}\, \left\| \,\cdot\, \right\|^{2}_{\ell^{2}(\mathbb{C}^{n})}

represents a Euclidean norm in the space of complex numbers (ℂⁿ) representing a multivariate normal distribution, and in which σ² is the variance of that distribution, 𝒮 is a sparse sampling operator mapping reconstructed data back to the sparse raw data space, ℱ is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√(−1) and φj is the reconstructed phase of the jth data set; the error functional, 𝒥, has the form:






\mathcal{J} = \mathcal{J}_{s} + \mathcal{J}_{x} + \mathcal{J}_{NS} + \mathcal{J}_{seg} + \mathcal{J}_{u^{*}}





in which 𝒥seg is a segmentation term for partitioning the raw data and in which the additional term 𝒥u* has the form:






\mathcal{J}_{u^{*}} = \frac{1}{2} \sum_{k=1}^{d} \left\| u_{k}^{*} - S\, u_{k} \right\|^{2}_{C_{u_{k}}}







where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cuk is an appropriate covariance operator for generating a norm in a covariance-weighted L2 space (denoted ‖·‖C for a covariance operator C). The effect of the covariance operator is to introduce Gaussian noise into this penalty term.


This allows a further reduction in time to acquire low noise reliable MRV images, since the MRV data to be processed by the method need not even be fully sampled in k-space because the method can be extended to reconstruct the missing k-space information as well as to derive physically and statistically realistic parameters for the unknown parameters.
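The roles of the sparse sampling operator 𝒮 and the Fourier transform operator ℱ in the 𝒥s term defined above can be illustrated with a short numerical sketch in Python (a single data set j, one spatial dimension); the grid size, sampling fraction and noise variance below are arbitrary illustration values, not parameters of the disclosure.

import numpy as np

rng = np.random.default_rng(0)
n_full = 64                                   # full k-space grid (one dimension, for brevity)
mask = rng.random(n_full) < 0.25              # sparse sampling pattern: S keeps ~25% of k-space
n = int(mask.sum())                           # number of retained samples
sigma2 = 0.01                                 # noise variance of the raw data

# Reconstructed magnitude rho_j and phase phi_j for one data set j
rho = np.ones(n_full)
phi = np.linspace(0.0, 0.3, n_full)

s_model = np.fft.fft(rho * np.exp(1j * phi))[mask]      # S F (rho_j e^{i phi_j})
noise = np.sqrt(sigma2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
s_j = s_model + noise                                    # sparsely sampled raw data set s_j

# Contribution of this data set to the J_s penalty term
j_s = np.sum(np.abs(s_j - s_model) ** 2) / (2.0 * n * sigma2)
print(j_s)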


Raw data is taken in four scans at different magnetic field gradients per spatial dimension, which is required to extract the phase information (which in turn allows extraction of velocimetry information). Raw data in this example therefore comprises four (incomplete) scans in frequency space per spatial dimension being imaged, i.e. 4, 8, or 12 scans for 1D, 2D, or 3D imaging.


In some cases, this k-space reconstruction may be thought of as a computer implemented method for reconstruction of magnetic resonance velocimetry, MRV, data, the method comprising the steps of: (a) receiving raw MRV data in the form of a series of incomplete frequency space data sets, sj, encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; and wherein the region Ω is a subset of an imaging region, I, over which the MRV data encodes information; (b) receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v, and optionally an inlet boundary condition for the fluid, gi where an inlet portion exists; and/or an outlet boundary condition for the fluid, go where an outlet portion exists; (c) calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary, and calculating the measured velocity field, u*, by performing an inverse Fourier transform on each sj, ℱ−1sj, and deriving u* from the arguments of ℱ−1sj; (d) constructing an error functional, 𝒥, of the form:






\mathcal{J} = \mathcal{J}_{s} + \mathcal{J}_{x} + \mathcal{J}_{NS} + \mathcal{J}_{seg} + \mathcal{J}_{u^{*}}





in which 𝒥s is a penalty term which penalises deviations of the reconstructed data from the received raw MRV data; 𝒥x is a penalty term which penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; 𝒥NS is a penalty term which penalises deviations in uo from the Navier-Stokes problem; 𝒥seg is a segmentation term for partitioning the raw data; and 𝒥u* is a penalty term which penalises deviations in uo from the calculated measured velocity field u*; (e) determining a generalised gradient of 𝒥 with respect to each of the unknown parameters in the set {x}; (f) for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, to determine a direction in which perturbing each parameter reduces the magnitude of 𝒥; and thereby determining improved values for each of the unknown parameters in the set {x}; and (g) reconstructing the MRV data by outputting the improved values from step (f) and calculating the reconstructed velocity, u, across at least the region Ω and reconstructing a phase and magnitude of the received raw data from the reconstructed velocity field, u.


That is, in this example u* is not an input, but the process begins at a step prior to u* being derived from the raw data (this being derived by inverse Fourier transform).


Optionally, 𝒥u* has the form:






\mathcal{J}_{u^{*}} = \frac{1}{2} \sum_{k=1}^{d} \left\| u_{k}^{*} - S\, u_{k} \right\|^{2}_{C_{u_{k}}}







where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cuk is an appropriate covariance operator for generating a norm in a covariance-weighted L2 space (denoted ‖·‖C for a covariance operator C). The effect of the covariance operator is to introduce Gaussian noise into this penalty term.


Optionally, the 𝒥s penalty term has the form:






\mathcal{J}_{s} = \frac{1}{2 n \sigma^{2}} \sum_{j=1}^{4d} \left\| s_{j} - \mathcal{S}\mathcal{F}\!\left( \rho_{j}\, e^{i \varphi_{j}} \right) \right\|^{2}_{\ell^{2}(\mathbb{C}^{n})}







where n is the number of samples (and is less than the dimension of the frequency space in which samples are taken, which defines the “sparse sampling” condition), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur, the weighted norm

\frac{1}{2 n \sigma^{2}}\, \left\| \,\cdot\, \right\|^{2}_{\ell^{2}(\mathbb{C}^{n})}

represents a Euclidean norm in the space of complex numbers (ℂⁿ) representing a multivariate normal distribution, and in which σ² is the variance of that distribution, 𝒮 is a sparse sampling operator mapping reconstructed data back to the sparse raw data space, ℱ is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√(−1) and φj is the reconstructed phase of the jth data set.


Either of the above methods may incorporate any of the following additional refinements.


Optionally, the Navier-Stokes problem constrains u such that u satisfies:














\begin{aligned}
u \cdot \nabla u - \nu \Delta u + \nabla p &= 0 && \text{in } \Omega, && \text{(i)}\\
\nabla \cdot u &= 0 && \text{in } \Omega, && \text{(ii)}\\
u &= 0 && \text{on } \Gamma, && \text{(iii)}\\
u &= g_i && \text{on } \Gamma_i \text{ where an inlet exists}, && \text{(iv)}\\
-\nu\,\partial_n u + p\,n &= g_o && \text{on } \Gamma_o \text{ where an outlet exists}, && \text{(v)}
\end{aligned}







wherein ν is the kinematic viscosity of the fluid, p is the reduced pressure, p0/ρ, ρ is a density of the fluid, p0 is the pressure, n is a unit normal vector on ∂Ω, and ∂n≡n·∇ is the normal derivative on ∂Ω.


This format ensures that the physics of fluid flows is captured and the possible results space is thereby constrained to physically realistic solutions.


Optionally the 𝒥NS penalty term includes Nitsche penalty terms for weakly imposing a Dirichlet boundary condition on at least a portion of the boundary, ∂Ω=Γ∪Γi∪Γo.


This allows the fluid flow to be modelled accurately by implementing appropriate boundary conditions at specific parts of the boundary.


Optionally, the error functional, 𝒥, is minimised using Euler-Lagrange equations, constrained to spaces of admissible perturbations in u and p, denoted u′ and p′ respectively, where p is the reduced pressure, p0/ρ, where ρ is a density of the fluid and p0 is the pressure.


Euler-Lagrange equations are a powerful method, suitable for identifying the directions in which input parameters should be perturbed in order to reduce the error functional.


The Euler-Lagrange equations may be formed by considering the first variations of the 𝒥u* and 𝒥NS penalty terms with respect to {x}, u and p.


This allows the effect of perturbations in the parameters which affect the 𝒥u* and 𝒥NS penalty terms to be considered. Since the 𝒥x penalty term does not depend on u and p, there is no variation of 𝒥x with respect to these parameters.
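As an illustrative intermediate step (a sketch only, using the Gaussian form of 𝒥u* given further below and the convention that ‖a‖²C denotes ⟨C−1a, a⟩), the first variation of the data term with respect to the velocity in an admissible direction u′ is

\delta \mathcal{J}_{u^{*}}[u; u'] = \left\langle S u - u^{*},\; S u' \right\rangle_{C_{u^{*}}} = \left\langle S^{*} C_{u^{*}}^{-1}\left( S u - u^{*} \right),\; u' \right\rangle ,

so that the generalised gradient Du𝒥u* = S*Cu*−1(Su − u*) is the quantity that enters, through the source term −Du𝒥, on the right-hand side of the adjoint momentum equation (vi) below.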


First variations of the 𝒥u* and 𝒥NS penalty terms may be used to provide an adjoint Navier-Stokes problem:









\begin{aligned}
-u \cdot \left(\nabla v + (\nabla v)^{*}\right) - \nu \Delta v + \nabla q &= -D_u \mathcal{J} && \text{in } \Omega, && \text{(vi)}\\
\nabla \cdot v &= 0 && \text{in } \Omega, && \text{(vii)}\\
v &= 0 && \text{on } \Gamma \text{ and } \Gamma_i \text{ where an inlet exists}, && \text{(viii)}\\
(u \cdot n)\,v + (u \cdot v)\,n + \nu\,\partial_n v - q\,n &= 0 && \text{on } \Gamma_o \text{ where an outlet exists}, && \text{(ix)}
\end{aligned}







in which v is the adjoint velocity and q the adjoint pressure, (∇v)* represents the adjoint of (∇v), and Du𝒥 is a generalised gradient of 𝒥 with respect to u.


The adjoint problem is a useful step in elucidating the effect of changes in the input parameters on the error functional. By solving the adjoint equation the generalised gradients can be efficiently calculated.


Optionally, the calculation of the 𝒥u* penalty term is based on the norm of the difference between the MRV velocity data, u*, and the reconstructed velocity field, u, mapped onto the space of the MRV velocity data, the norm being based on a covariance operator. This is a simple yet effective means by which the reconstructed velocity data can be constrained to be consistent with the measured MRV data.


The MRV data may be assumed to have a Gaussian noise variance and the 𝒥u* penalty term is calculated as:








\mathcal{J}_{u^{*}}(u) = \frac{1}{2} \left\| u^{*} - S\, u \right\|^{2}_{C_{u^{*}}}






in which ‖·‖C represents a norm in a covariance-weighted L2 space having covariance operator C, S projects from the model space to the data space, and the covariance operator Cu* introduces the Gaussian noise variance into the 𝒥u* penalty term. This format provides a clear and well-behaved model for noise inherent in the MRV data.
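As a brief numerical sketch of this penalty term, the snippet below assumes, purely for illustration, that S is a simple restriction of the model grid onto a coarser data grid and that Cu* is diagonal with variance sigma2 (i.i.d. Gaussian noise), so that the covariance-weighted norm ‖r‖²C = ⟨C−1r, r⟩ reduces to a noise-scaled sum of squares; the grid sizes and values are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n_model, n_data = 128, 32
sigma2 = 0.05                                    # assumed Gaussian noise variance of the MRV data

u = np.sin(np.linspace(0.0, np.pi, n_model))     # reconstructed velocity on the model grid
S = np.zeros((n_data, n_model))                  # projection from model space to data space
for i in range(n_data):
    S[i, i * (n_model // n_data)] = 1.0          # simple sub-sampling restriction

u_star = S @ u + np.sqrt(sigma2) * rng.standard_normal(n_data)   # noisy "measured" data

# J_u*(u) = 0.5 * || u* - S u ||^2_{C_u*} with C_u* = sigma2 * I, i.e. 0.5 * r^T C^{-1} r
r = u_star - S @ u
j_data = 0.5 * (r @ r) / sigma2
print(j_data)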


Optionally the calculation of the 𝒥x penalty term includes assuming that some or all of the unknown parameters follow a Gaussian distribution of likelihood, and improbable realisations are penalised by forming 𝒥x as a linear sum of error functional terms, one for each unknown parameter in the set {x} which is assumed to have a Gaussian distribution, each term having the form:








\mathcal{J}_{x}(x) = \frac{1}{2} \left\| x - \bar{x} \right\|^{2}_{C_{x}}






in which x represents any of the unknown parameters which is assumed to have a Gaussian distribution, x̄ represents a prior value for that unknown parameter, and Cx is an appropriate covariance operator for that unknown parameter. In other words, the prior value of a parameter is the initial value of that parameter, which is then updated as the algorithm is executed.


Once more, the Gaussian format is one which results in clear and well-behaved functions for modelling and provides a firm statistical basis for constraining the space of possible reconstructions to statistically plausible ones.


Optionally the covariance operator for the boundary condition at the inlet, Cgi, in cases where an inlet exists, is:







C_{g_{i}} = \sigma_{g_{i}}^{2} \left( I - \ell^{2}\, \tilde{\Delta}_{i} \right)^{-1}






in which σgi is the variance of gi, ℓ is a characteristic length scale of an exponential covariance function, I is the identity matrix and Δ̃i is the Laplacian operator extended to incorporate the boundary condition that gi=0 on Γi, and/or the covariance operator for the boundary condition at the outlet, Cgo, in cases where an outlet exists, is:







C_{g_{o}} = \sigma_{g_{o}}^{2} \left( I - \ell^{2}\, \tilde{\Delta}_{o} \right)^{-1}






in which σgo is the variance of go, ℓ is a characteristic length scale of an exponential covariance function, I is the identity matrix and Δ̃o is the Laplacian operator extended to incorporate the boundary condition that ∂ngo=0 on Γo.


These forms for covariance operators are suitable for encoding the boundary conditions into the respective penalty terms, which in turn ensures that the space of possible reconstructions is constrained to appropriate outputs.
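A small Python sketch of how a covariance operator of this family might be assembled in practice for an inlet boundary condition discretised on m points, using a one-dimensional finite-difference Laplacian with homogeneous Dirichlet end conditions as a stand-in for the extended Laplacian Δ̃i; the values of σgi and ℓ are illustrative only.

import numpy as np

m = 50                                      # points discretising the inlet boundary condition g_i
h = 1.0 / (m + 1)
sigma_gi, ell = 0.3, 0.1                    # illustrative variance and correlation length scale

# 1D finite-difference Laplacian with g_i = 0 at the end points (stand-in for the extended Laplacian)
lap = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
       + np.diag(np.ones(m - 1), -1)) / h ** 2

# C_{g_i} = sigma_{g_i}^2 (I - l^2 Delta~_i)^{-1}
C_gi = sigma_gi ** 2 * np.linalg.inv(np.eye(m) - ell ** 2 * lap)

# Covariance-weighted norm || g - g_bar ||^2_{C_{g_i}} = (g - g_bar)^T C_{g_i}^{-1} (g - g_bar)
g_bar = np.zeros(m)                                       # prior inlet profile
g = 0.1 * np.sin(np.pi * np.linspace(h, 1.0 - h, m))      # candidate inlet profile
r = g - g_bar
print(0.5 * r @ np.linalg.solve(C_gi, r))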


Optionally the shape of the boundary is perturbed under a speed field 𝒱 = ζn, in which ζ represents the magnitude of the speed at which points on the boundary are perturbed and n is the unit vector normal to each point on the boundary. This provides a convenient formalism to describe the location of the boundary and to implement changes in the location of the boundary. As an example, the perturbations of the shape of the boundary may be implemented using the Reynolds transport theorem.


Optionally, any boundary which is an inlet or an outlet is not perturbed. This may be a useful feature to reduce the complexity of the perturbations in cases where, for example, the inlet and/or the outlet aligns with the edge of the imaging region, so that the location of this part of the boundary is defined as being located in the same place (at the edge of the imaging area).


Optionally the perturbation of the shape of the boundary is used to derive a shape derivatives problem defined over the permissible perturbations of u and p, denoted u′ and p′, respectively, the shape derivatives problem having the form:









\begin{aligned}
u' \cdot \nabla u + u \cdot \nabla u' - \nu \Delta u' + \nabla p' &= 0 && \text{in } \Omega, && \text{(xi)}\\
\nabla \cdot u' &= 0 && \text{in } \Omega, && \text{(xii)}\\
u' &= -\,\partial_n u \,(\mathcal{V} \cdot n) && \text{on } \Gamma, && \text{(xiii)}\\
u' &= 0 && \text{on } \Gamma_i \text{ where an inlet exists}, && \text{(xiv)}\\
-\nu\,\partial_n u' + p'\,n &= 0 && \text{on } \Gamma_o \text{ where an outlet exists}. && \text{(xv)}
\end{aligned}







The solutions to this problem provide a convenient framework for assessing the effect of the boundary shape perturbations on the other parameters.


Optionally the boundary is represented implicitly by signed distance functions, φ± defined across the imaging region, I, in which the boundary is represented by parts of φ± which are equal to zero, and the magnitude of φ± at a given point represents a shortest distance between that point and the boundary.


For example, the signed distance function may be negative inside the boundary (region Ω), and positive outside that region. The other way around (positive inside, negative outside) is of course possible if appropriate corresponding changes of sign are included in later expressions derived from the signed distance function. This implicit representation of the boundary location provides a simple yet powerful method of deforming (perturbing) the boundary and determining the effect of the perturbations on the other parameters.
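The implicit representation can be illustrated with a short Python sketch that builds the signed distance function of a circular boundary on a Cartesian grid (negative inside Ω, positive outside, zero on ∂Ω) and recovers the outward normal field from its gradient; the grid size and radius are arbitrary illustration values.

import numpy as np

npts = 101
coords = np.linspace(-1.0, 1.0, npts)
X, Y = np.meshgrid(coords, coords, indexing="ij")

r0 = 0.6                                    # radius of a circular physical boundary, Gamma
phi = np.sqrt(X ** 2 + Y ** 2) - r0         # signed distance: < 0 inside Omega, > 0 outside, = 0 on the boundary

inside = phi < 0                            # segmentation of the region Omega
print("area of Omega ~", inside.mean() * 4.0, "(exact:", np.pi * r0 ** 2, ")")

# Unit normal field n = grad(phi) / |grad(phi)|, perpendicular to contours of equal phi
dphi_dx, dphi_dy = np.gradient(phi, coords, coords)
mag = np.sqrt(dphi_dx ** 2 + dphi_dy ** 2) + 1.0e-12
n_x, n_y = dphi_dx / mag, dphi_dy / mag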


The normal vector, n, may be propagated across a region corresponding to the full set of MRV data to form a normal vector field, n, which runs perpendicular to contours of equal magnitude of φ±. Similarly, the shape gradient may be extended away from the boundary to form an extended shape gradient, ζ̇, covering the full set of MRV data, wherein the extension of the shape gradient comprises propagating the shape gradient along the normal vector field lines according to a convection-diffusion equation.


This propagation extends the domain of the shape gradient and the normal vector from their definitions only on the boundary to the entire imaging region. This in turn provides a general framework for understanding the effect of the boundary being located in any part of the imaging region.


Optionally the direction of perturbation of each parameter is in a direction of steepest change in 𝒥. By perturbing the parameters in their direction of steepest change, the largest effect will be seen on the magnitude of 𝒥. In other words, this may maximise the improvement (maximise the reduction in 𝒥) seen in making a change to a particular value of a parameter. Note that in some cases, the appropriate direction of change may not be exactly aligned with the direction of steepest change in 𝒥, since considering the effect of each parameter on 𝒥 individually ignores the combined effect of changing multiple parameters at once.


Various approaches can be used to determine the “direction of steepest change”, for example, the “steepest descent” method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, conjugate gradient methods, and so forth.


Optionally, the direction of steepest change of 𝒥 is derived by reconstructing an inverse Hessian matrix, H̃, for each unknown parameter in the set {x}, and wherein the direction of steepest change of 𝒥 is preconditioned by the reconstructed inverse Hessian matrix.


The direction of steepest change may be thought of as the steepest descent direction, which is given by the generalised gradients of 𝒥 with respect to the unknown parameters. The search direction for each of the unknown parameters in the set {x} may be derived by reconstructing an approximated Hessian matrix.


Optionally the perturbation direction, s, for each individual unknown parameter in the set {x} is given by −H̃x D̂x𝒥, where D̂x𝒥 is the steepest ascent direction of 𝒥 for perturbations in parameter x, and H̃x is the reconstructed inverse Hessian for each unknown parameter in the set {x}, for example.


The reconstructed inverse Hessian H̃x of each unknown parameter in the set {x} may be used to provide an approximated covariance matrix for the unknown parameters, thereby to estimate the uncertainties in the model parameters. The estimation of uncertainties is important for validating the output result. For example, the uncertainty can be used to assess an appropriate level of trust in the data. Thus, not only is a user provided with a reconstructed velocity field which can be used as the basis for making decisions (whether and how to operate on a patient where the data is medical, how best to optimise the design of a system for e.g. mechanical systems, etc.), but also with an indication of how likely their decision is to be based on correct information. Where the errors are too large, the user may be faced with too great an uncertainty in the output data and may elect instead to repeat the data acquisition and/or the processing (e.g. with different weightings and model parameters) to try to improve the situation.
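For illustration, the sketch below applies a BFGS-style inverse-Hessian reconstruction to a toy quadratic functional, for which an exact line search is available, so that the recovered H̃ can be checked against the known inverse Hessian (and hence against the covariance it is used to approximate); the matrix A and all values are illustrative stand-ins, not part of the disclosed method.

import numpy as np

A = np.diag([1.0, 4.0, 9.0])              # Hessian of a toy quadratic J(x) = 0.5 x^T A x
x = np.array([1.0, 1.0, 1.0])
H = np.eye(3)                             # initial guess for the reconstructed inverse Hessian H~
g = A @ x                                 # gradient of the toy functional at x

for _ in range(3):
    p = -H @ g                            # search direction, -H~ D^_x J
    t = -(g @ p) / (p @ A @ p)            # exact line search, possible here because J is quadratic
    d = t * p                             # parameter step x_{k+1} - x_k
    x = x + d
    g_new = A @ x
    y = g_new - g                         # change in the gradient
    rho = 1.0 / (y @ d)
    V = np.eye(3) - rho * np.outer(d, y)
    H = V @ H @ V.T + rho * np.outer(d, d)   # BFGS update of the inverse Hessian
    g = g_new

# For this quadratic, H~ recovers A^{-1}; its diagonal is the covariance (uncertainty) estimate
print(np.round(np.diag(H), 4))            # approximately [1.0, 0.25, 0.1111]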


Optionally the reconstructed velocity, u, is updated by solving an Oseen problem for the improved parameters:












\begin{aligned}
u_k \cdot \nabla u_{k+1} - \nu \Delta u_{k+1} + \nabla p_{k+1} &= 0,\\
\nabla \cdot u_{k+1} &= 0,
\end{aligned}

with boundary conditions:

\begin{aligned}
u &= 0 && \text{on } \Gamma,\\
u &= g_i && \text{on } \Gamma_i \text{ where an inlet exists},\\
-\nu\,\partial_n u + p\,n &= g_o && \text{on } \Gamma_o \text{ where an outlet exists},
\end{aligned}







in which k is an index to distinguish input values at k and improved values at k+1, ν is the kinematic viscosity, p is the reduced pressure, p0/ρ, ρ is a density of the fluid, p0 is the pressure, n is a unit normal vector on ∂Ω, and ∂n≡n·∇ is the normal derivative on ∂Ω.


The updated parameter set, {x}k+1, does not contain the updated velocity field uk+1. The updated model parameters {x}k+1 are used to compute the updated velocity field uk+1. Since the Navier-Stokes problem is nonlinear, the Oseen problem, which is a linearization of the Navier-Stokes problem around uk, is considered instead. To make an analogy: if the reconstructed velocity field u is a parametric curve, the set {x} contains its parameters. Computation of u({x}k+1) is necessary to find the new uk+1.


Optionally steps (c) to (f) are repeated one or more times whereby the improved values of the unknown parameters from a preceding iteration are used in place of the initial values in each subsequent execution of steps (c) to (f); and wherein step (g) uses the improved values from the most recent iteration to calculate the reconstructed u.


By iterating the procedure, a gradual improvement in the output is seen. Over time, the process is expected to converge to a particular output, at which point the procedure fails to continue to improve the outputs (i.e. fails to continue reducing the magnitude of 𝒥).


Optionally, each unknown parameter is adjusted a respective first step size in its respective direction during the first iteration of the method and a subsequent step size during each subsequent iteration of the method.


As noted above the direction in which each parameter is perturbed may be determined using various methods. To consider two of these: (i) simple gradient descent; and (ii) reconstructing an inverse Hessian matrix and using a preconditioned gradient descent (the BFGS algorithm). The second option allows the estimation of uncertainties, while the first does not.


If the user selects the simple gradient descent, the strategy for the step size may be chosen as follows: start with a step size t and if the line search is unsuccessful set tk+1=0.5*tk and repeat the line search. If the line search is successful set tk+1=1.5tk.


If the user selects BFGS, start every line-search with t=1 (i.e. each iteration's line search starts with t=1). If the line-search fails, set tk+1=0.5tk and repeat the line search.
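The two step-size strategies described above can be summarised in a small helper function; the starting values in the usage example are arbitrary.

def next_step_size(t, success, method="gradient_descent"):
    """Return the step size for the next line-search attempt.

    Simple gradient descent: grow the step by 1.5 on success, halve it on failure.
    BFGS: the next iteration's line search restarts at t = 1; halve on failure.
    """
    if method == "bfgs":
        return 1.0 if success else 0.5 * t
    return 1.5 * t if success else 0.5 * t

# Example: a failed trial followed by a successful one, using simple gradient descent
t = 0.8
t = next_step_size(t, success=False)   # -> 0.4
t = next_step_size(t, success=True)    # -> 0.6
print(t)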


Optionally, each subsequent step size is chosen to increase the rate at which 𝒥 is reduced.


For example, where the rate at which 𝒥 is reduced is too slow, the user may elect to (temporarily) increase the step size. In this way an adaptive step size can be selected to cause 𝒥 to reduce as quickly as possible.


In other cases, the step size may be constrained to always reduce in subsequent iterations from the step size of the previous iteration. For example, the step size may reduce each time by a constant fraction, e.g. always ½, ⅓, or ⅔, etc.


Optionally, steps (c) to (f) of the method are repeated until a stop condition is met. Optionally the stop condition is that 𝒥 does not reduce further or that a covariance-weighted norm for the perturbations in the unknown parameters is below a user-specified threshold.


Optionally the method includes outputting values of p, the pressure field, and the improved values for the unknown parameters in the set {x} as part of step (g).


Optionally, the reconstruction uses an immersed boundary finite element method. This is a suitable framework in which to apply the methods described herein because it is a natural environment in which to apply the methods, which is also well-studied and mathematically rigorous.


Optionally, a magnitude value of Magnetic Resonance Imaging (MRI) data is used as an input to provide an initial value for the shape of the boundary, ∂Ω. MRI is a useful technique for imaging of a wide range of materials such as those from which a boundary may be formed and fluids which may be flowing within the boundary. MRI is non-invasive and is therefore particularly suitable for biological applications, including in vivo measurements. By directly measuring the shape of the boundary, good input data can be provided to improve the certainty and speed of the convergence process. As noted elsewhere in this document MRV data requires at least four measurements per dimension of the region of interest. By contrast, MRI data on the equivalent region can be derived by a single raster scan of the pixels (or voxels) in the region of interest.


Optionally, the magnitude value of the magnetic resonance velocimetry data is extracted from the magnetic resonance velocimetry data itself. This allows the location of the boundary to be extracted from the exact same data set as that from which the flow data is extracted, thereby improving the correspondence between the initial value and the “true” location of the boundary. In cases where the same input MRI data are used for the velocimetry (e.g., phase contrast MRI) as for the initial boundary location (the magnitude of the MRI signal(s)), the convergence rate of the methods set out herein is improved since a very reliable prior for the boundary shape/location is provided. In some cases, since the MRV dataset includes multiple measurements to extract the flow information, the magnitude data may be used to extract the boundary location from more than one or even from all measurements forming part of the MRV dataset to improve confidence in the input shape of the boundary. Optionally suitable averaging or other ways of combining the information may be used to improve the accuracy of the boundary location when multiple data sets are used to derive the boundary location. This results in a higher signal-to-noise ratio for boundary locations extracted in this manner. In yet further examples the magnitude value of the magnetic resonance data is extracted from a separate magnetic resonance imaging dataset.
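One possible way of obtaining such an initial boundary estimate is sketched below: a synthetic magnitude image is thresholded to give a fluid mask, and the mask is converted into a signed distance function that can serve as the prior for ∂Ω. The image, threshold and grid are purely illustrative assumptions, and SciPy's Euclidean distance transform is used only for brevity.

import numpy as np
from scipy import ndimage

# Synthetic magnitude image: a bright fluid-filled channel on a dark background, plus noise
npts = 96
coords = np.linspace(-1.0, 1.0, npts)
X, Y = np.meshgrid(coords, coords, indexing="ij")
magnitude = np.exp(-((Y / 0.3) ** 4)) + 0.05 * np.random.default_rng(0).standard_normal((npts, npts))

fluid_mask = magnitude > 0.5                        # simple threshold: initial segmentation of Omega

# Signed distance function of the mask boundary: negative inside Omega, positive outside
h = coords[1] - coords[0]
phi0 = (ndimage.distance_transform_edt(~fluid_mask)
        - ndimage.distance_transform_edt(fluid_mask)) * h
print("cells near the initial boundary:", int((np.abs(phi0) < h).sum()))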


The methods set out above may further comprise a step of providing a compressed form of the modelled flow field, the compressed form being represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists. Optionally, this compression step may further comprise discarding the modelled flow field and retaining only the compressed form. Once the compressed form is provided, a further decompression step may optionally further include decompressing the compressed form by: using ∂Ω, v, gi and go as inputs to a Navier-Stokes problem; and calculating the modelled flow field by solving the Navier Stokes problem to derive a flow field consistent with the inputs and the Navier-Stokes equations. The decompression may be performed in a remote location and/or at a later time relative to the compression step, for example. The benefits of the compression and decompression aspects are set out in more detail below.


Also disclosed herein is a computer implemented method for compressing flow data of a fluid, the method comprising the steps of: receiving raw data encoding information in an imaging region I relating to a fluid velocity field, u*, within a subset of the imaging region I forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; extracting from the data a modelled flow field over the region Ω, a shape of the boundary ∂Ω, enclosing the region Ω, a boundary condition, gi, at an inlet where an inlet exists, a boundary condition, go, at an outlet where an outlet exists and a kinematic viscosity, v, of the fluid; and providing a compressed form of the modelled flow field, the compressed form being represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists.


As will be clear, any flow field can be compressed by extracting the set of parameters {x}≡{∂Ω; gi; go; v} from the flow data. This represents a large reduction in the space needed for storage of the flow field, in effect trading off reduced storage space (or transmission bandwidth) for increased processor usage in retrieving the full flow field because instead of simply loading the flow field data, the system must recreate it by solving a Navier-Stokes problem, using the compressed form as inputs. It can be seen in this way that the compression and decompression methods (and the corresponding compression and decompression hardware and software) operate as a plurality of interrelated products in the sense that they are intended to interoperate and to be used in conjunction with one another.


That the dataset of the compressed data is smaller than the full flow field can be seen from an analysis of the dimensionality of the system. A full flow field over the region Ω requires storing flow vectors (at the desired resolution/accuracy) for all points across the region. Note that where a full 3D analysis is performed, Ω is a 3D volume, while where the analysis is 2D, Ω is an area. By contrast the boundary ∂Ω of a 3D Ω is a 2D surface (albeit usually a curved one requiring 3 dimensions to represent) while the boundary ∂Ω of a 2D Ω is a 1D line (albeit usually a curved one requiring 2 dimensions to represent). It is clear that the boundary conditions also follow this pattern. In other words, modelling the flow field over a region having dimensionality d leads to a compressed form of the modelled flow field which has a dimensionality of d−1. Reducing the dimensionality vastly reduces the complexity of the storage requirements of a d-dimensional flow field from O(n^d) to O(n^(d−1)), where the kinematic viscosity is assumed to be constant over the region and so has an effective dimensionality of 0.
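The storage saving can be made concrete with a short sketch: a simple container for the compressed form {∂Ω; gi; go; v} and a rough count of stored values for a d = 3 flow field on an n³ voxel grid. The field names, the surface-mesh representation of ∂Ω and the grid size are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class CompressedFlowField:
    boundary_vertices: np.ndarray         # ~n^(d-1) surface points describing the shape of dOmega
    inlet_bc: Optional[np.ndarray]        # g_i on Gamma_i, where an inlet exists
    outlet_bc: Optional[np.ndarray]       # g_o on Gamma_o, where an outlet exists
    kinematic_viscosity: float            # v, assumed constant over Omega

n = 128                                              # voxels per side of the imaging region
full_field_values = 3 * n ** 3                       # three velocity components per voxel: O(n^d)

compressed = CompressedFlowField(
    boundary_vertices=np.zeros((n * n, 3)),          # a (d-1)-dimensional description: O(n^(d-1))
    inlet_bc=np.zeros((n, n)),
    outlet_bc=np.zeros((n, n)),
    kinematic_viscosity=1.0e-6,
)
compressed_values = (compressed.boundary_vertices.size
                     + compressed.inlet_bc.size + compressed.outlet_bc.size + 1)
print(f"{full_field_values} values -> {compressed_values} values "
      f"(ratio ~ {full_field_values / compressed_values:.0f}:1)")

Decompression then amounts to passing these stored quantities back to a Navier-Stokes solver, as described below.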


Note that usually, the kinematic viscosity, v, is not a function of space; it is either a constant or a tensor (matrix) that depends on the velocity and certain physical constants. In certain cases, the viscosity may be more complex than this; for example in turbulence modelling, viscosity may be effectively variable in space. However, this variation in space would be controlled by a turbulence model which once more contains just a few physical parameters. Therefore, even though turbulent flows are not the focus of the present work, for under-resolved turbulent flow the compression/decompression methods set out herein would apply in the same general manner but with an additional turbulence model. This slightly increases the complexity of the v term in the compressed flow, but nevertheless still results in a large reduction in the storage and/or bandwidth requirements for storage and/or transmission, respectively, of the compressed flow field.


In this way, the richness of the flow field can be captured in a compressed form requiring far less storage or transmission capacity. In general the compression and decompression aspects set out herein can use the data provided by the MRV methods set out above, although this is not necessary. However, when the MRV methods set out herein are used, a synergistic benefit is seen. Since the methods for extracting flow field data set out herein lead to a set of parameters {x} in which a high degree of confidence can be placed, the compression of this data is highly effective since the high-accuracy parameters derived in this way capture a large amount of information related to the flow field. Many of the same concepts can be used in decompressing the data later, as will be seen.


Once the data are compressed in this way, the modelled flow field may be discarded.


The method may be repeated on a series of received raw data sets, each raw data set representing the same system at different times. A full time evolution of a system represents a very large dataset and in some cases the data handling requirements of such a system may be prohibitive. By compressing each time step of the data much more can be captured of the time evolution of the system.


As noted above the raw data may be derived from a Magnetic Resonance Velocimetry measurement. In particular extracting the modelled flow field may include: receiving initial values for a set of unknown parameters, {x}, including at least: a shape of the boundary, ∂Ω, within which the fluid is located; a kinematic viscosity of the fluid, v, and optionally an inlet boundary condition for the fluid, gi where an inlet portion exists; and/or an outlet boundary condition for the fluid, go where an outlet portion exists. Receiving initial values of the unknown parameters {x} may further include performing a separate measurement of one or more of the parameters. An initial value of the shape of the boundary may be provided by a magnetic resonance imaging measurement. As noted above, this can improve the accuracy of the process. The method may further include calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary. In some cases, the method includes constructing an error functional, 𝒥, having the form:






\mathcal{J} = \mathcal{J}_{u^{*}} + \mathcal{J}_{x} + \mathcal{J}_{NS}





in which 𝒥u* is a penalty term which penalises deviations in uo from the received MRV data; 𝒥x is a penalty term which penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}, and 𝒥NS is a penalty term which penalises deviations in uo from the Navier-Stokes problem, and the method includes: determining a generalised gradient of 𝒥 with respect to each of the unknown parameters in the set {x}; for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, to determine a direction in which perturbing each parameter reduces the magnitude of 𝒥, and thereby determining improved values for each of the unknown parameters in the set {x}; and reconstructing the MRV data by outputting the improved values for the unknown parameters and calculating the reconstructed velocity, u, across at least the region Ω.


In some cases (e.g. 2D imaging of a tube viewed with its axis into/out of the imaging plane, modelling of a complete 3D system), there may not be an inlet or an outlet, and it is in this sense that the inclusion of inlet and/or outlet boundary conditions is optional.


The above procedure uses physics considerations (via the Navier-Stokes equations) and statistical likelihood to reduce the space of suitable reconstructions. In general the direction of change of the unknown parameters which causes a reduction in 𝒥 is calculated independently for each parameter, and the collective result of changing all the parameters in their respective directions is assessed by considering whether the collective result also reduces the magnitude of 𝒥. Where a full global inverse Hessian reconstruction is possible to calculate, the directions of change of each parameter could be chosen to guarantee that the collective effect of all changes in parameters is to reduce the magnitude of 𝒥.


It is an interesting and unexpected effect that starting with less information (that is by also treating the boundary location as unknown and allowing this to be optimised along with the other unknown parameters) allows for improved image reconstruction. This is important because MRV data is time consuming to obtain and obtaining low noise data is yet more time consuming as many scans must be made and averaged. By providing an efficient and reliable reconstruction algorithm, the overall time to acquire low-noise MRV data can be reduced substantially. This allows high quality values for the parameters {x} to be derived, which in turn means that the compressed form of the modelled flow field retains a high level of accuracy.


The method steps may be repeated one or more times whereby the improved values of the unknown parameters from a preceding iteration are used in place of the initial values in each subsequent execution; and wherein the reconstructing step uses the improved values from the most recent iteration to calculate the reconstructed u. By iterating the procedure, a gradual improvement in the output is seen. Over time, the process is expected to converge to a particular output, at which point the procedure fails to continue to improve the outputs (i.e. fails to continue reducing the magnitude of 𝒥).


Each unknown parameter may be adjusted a respective first step size in its respective direction during the first iteration of the method and a subsequent step size during each subsequent iteration of the method. As noted above the direction in which each parameter is perturbed may be determined using various methods. To consider two of these: (i) simple gradient descent; and (ii) reconstructing an inverse Hessian matrix and using a preconditioned gradient descent (the BFGS algorithm). The second option allows the estimation of uncertainties, while the first does not. If the user selects the simple gradient descent, the strategy for the step size may be chosen as follows: start with a step size t and if the line search is unsuccessful set tk+1=0.5tk and repeat the line search. If the line search is successful set tk+1=1.5tk. If the user selects BFGS, start every line-search with t=1 (i.e. each iteration's line search starts with t=1). If the line-search fails, set tk+1=0.5tk and repeat the line search.


Each subsequent step size may be chosen to increase the rate at which 𝒥 is reduced. For example, where the rate at which 𝒥 is reduced is too slow, the user may elect to (temporarily) increase the step size. In this way an adaptive step size can be selected to cause 𝒥 to reduce as quickly as possible.


In other cases, the step size may be constrained to always reduce in subsequent iterations from the step size of the previous iteration. For example, the step size may reduce each time by a constant fraction, e.g. always ½, ⅓, or ⅔, etc.


Optionally, the method steps are repeated until a stop condition is met, optionally wherein the stop condition is that 𝒥 does not reduce further or that a covariance-weighted norm for the perturbations in the unknown parameters is below a user-specified threshold.


Optionally, the method further comprises storing or transmitting to a remote location the compressed form of the modelled flow field.


A corresponding computer implemented method of decompressing a compressed modelled flow field of measured flow data of a fluid is also disclosed. The compressed flow data includes a shape of a boundary, ∂Ω, of a region, Ω, in which the flow field is defined, a kinematic viscosity, v, of the fluid, and an inlet boundary condition, gi, and/or an outlet boundary condition, go, where an inlet and/or outlet exists, and the decompression method comprises the steps of: using ∂Ω, v, gi and go as inputs to a Navier-Stokes problem; and calculating the modelled flow field by solving the Navier-Stokes problem to derive a flow field consistent with the inputs and the Navier-Stokes equations. Note that this allows the storage of a full flow field in its compressed format, by storing the set of parameters. In particular, the compressed modelled flow field may be derived from the compression methods set out above. This is particularly advantageous as there are few, if any, other known methods of providing high quality information on the unknown parameters {x}. Consequently, the disclosure of the present document as a whole represents a complete solution which is simply not possible using methods for analysing flow data other than those set out herein. Moreover, since no other methods exist which allow the data to be compressed in the manner set out above, the decompression step inherently and independently makes an inventive contribution to the art because decompression of compressed data derived from real world measurements simply isn't possible with a reasonable degree of accuracy unless high quality values for the parameters {x} are available.


As will be apparent, the compression method set out above may be followed by at least one of a transmission or a storage step of the compressed form of the flow field, followed by the decompression method set out above, applied to the compressed form of the flow field at a later time and/or a different location.


Also disclosed herein is an apparatus for reconstruction of magnetic resonance velocimetry, MRV, data, the apparatus comprising: a processor configured to perform any of the method steps set out above.


The apparatus may further comprise a magnetic resonance imaging system, arranged to provide the MRV data.


Also disclosed herein is a non-transient computer readable medium comprising instructions which cause a computer to enact any of the method steps set out above.


The systems and methods described above may be applied to any MRV data, for example to medical MRV data, but also to non-medical MRV data (e.g. petrochemical, monitoring of flows in pipelines and other fluid delivery infrastructure). That is to say that without loss of generality, the invention may be segregated into medical uses and non-medical uses.





BRIEF DESCRIPTION OF THE FIGURES

The invention will now be described with reference to the Figures, by way of illustration only, in which:



FIG. 1 shows a schematic of the imaging space and introduces the various different spaces (data space, model space, extracted noise);



FIGS. 2a to 2f illustrate the procedure for extending parameters normally defined only on a boundary across the entire imaging region, in order to formalise the discussion of perturbations of the boundary location;



FIGS. 3a to 3j show results of the reconstruction process on synthetic MRV data of flow through a converging nozzle;



FIGS. 4a to 4j show results of the reconstruction process on synthetic MRV data of flow through a simulated blood vessel;



FIGS. 5a and 5b compare extracted wall shear rate for each of the synthetic datasets presented in FIGS. 3a to 3j and 4a to 4j;



FIG. 6a shows a schematic of the experimental apparatus for deriving MRV data;



FIG. 6b shows a detailed view of the converging nozzle;



FIG. 6c is a schematic representing the pulse sequences used;



FIGS. 7a and 7b show high signal-to-noise ratio (SNR) MRV data derived from the apparatus and pulse in FIGS. 6a to 6c;



FIGS. 8a to 8j show low SNR images of the same flow system as FIG. 7, including a reconstruction of those images in line with the reconstruction process set out herein;



FIG. 9 shows extracted wall shear rate for each of the datasets presented in FIGS. 7a, 7b and 8a to 8j;



FIG. 10 is a flow chart describing the general operational steps of the method;



FIG. 11 is an illustration of the differences between zero-filling and reconstructing in the context of k-space data;



FIGS. 12a to 12d compares the zero-filling and reconstruction methods using real MRV data, similar to that presented in FIGS. 8a to 8j;



FIG. 13 is a flow chart describing the general operational steps of the method when the data are sparse in k-space



FIG. 14 represents a conceptualisation of a compression algorithm in which a d-dimensional flow field is encoded in a (d−1)-dimensional form



FIG. 15 is a flow chart illustrating a compression algorithm



FIG. 16 is a flow chart illustrating a decompression algorithm; and



FIG. 17 is a schematic of a system on which the methods described above may be executed.





DETAILED DESCRIPTION

A generalized inverse Navier-Stokes problem for the joint reconstruction and segmentation of noisy velocity images of incompressible flow is described below. To regularize the inverse problem, a Bayesian framework is adopted by assuming Gaussian prior distributions for the unknown model parameters. Although the inverse problem is formulated using variational methods, every iteration of the nonlinear problem is in fact equivalent to a Gaussian process in Hilbert spaces. The boundaries of the flow domain are implicitly defined in terms of signed distance functions, and Nitsche's method is used to weakly enforce the Dirichlet boundary condition on the moving front. The motion of the boundaries is expressed by a convection-diffusion equation for the signed distance function, which allows the regularity of the boundary to be controlled by tuning an artificial diffusion coefficient. The steepest ascent directions of the model parameters are used in conjunction with a quasi-Newton method (BFGS), and it is shown how the posterior Gaussian distribution of a model parameter can be estimated from the reconstructed inverse Hessian.


An algorithm for running on a computer is devised that solves this inverse Navier-Stokes problem and is tested for noisy (SNR=3) 2D synthetic images of Navier-Stokes flows. The algorithm successfully reconstructs the velocity images, infers the most likely boundaries of the flow and estimates their posterior uncertainty. A magnetic resonance velocimetry (MRV) experiment is then designed to obtain images of a 3D axisymmetric Navier-Stokes flow in a converging nozzle.


MRV images are acquired, having poor quality (SNR≅6), intended for reconstruction/segmentation, and images of higher quality (SNR>30) that serve as the ground truth. It is shown that the algorithm performs very well in reconstructing and segmenting the poor MRV images, which were obtained in just 10.2 minutes, and that the reconstruction compares well to the high SNR images, which required a total acquisition time of ≅4.6 hours.


The reconstructed images and the segmented (smoothed) domain are used to estimate the posterior distribution of the wall shear rate and compare it with the ground truth. Since the wall shear rate depends on both the shape and the velocity field, it is seen that the algorithm provides a consistent treatment to this problem by jointly reconstructing and segmenting the flow images, avoiding the design of an additional experiment (e.g. CT, MRA) for the measurement of the geometry, or the use of external (non-physics-informed) segmentation software.


Lastly, an extension of the method is presented which addresses cases where the received signals are sparse (i.e. incomplete in k-space).


The present method has several advantages over general image reconstruction and segmentation algorithms, which do not respect the underlying physics and the boundary conditions; it also provides additional knowledge of the flow physics (e.g. the pressure field), which is otherwise difficult to measure. The method naturally extends to 3D, periodic Navier-Stokes problems in complicated geometries, to substantially decrease signal acquisition times and provide additional knowledge of the physical system being imaged.


The generalized inverse Navier-Stokes problem is formulated in the following manner, including the derivation of an algorithm for its solution. In what follows, L2(Ω) denotes the space of square-integrable functions in Ω, with inner product ⟨⋅,⋅⟩ and norm ∥⋅∥L2(Ω). Hk(Ω) is the space of square-integrable functions with k square-integrable derivatives in Ω. For a given covariance operator, C, the covariance-weighted L2 spaces are also defined, endowed with the inner product ⟨⋅,⋅⟩C := ⟨⋅, C−1⋅⟩, which generates the norm ∥⋅∥C. The Euclidean norm in the space of real numbers ℝn is denoted by |⋅|.


In general, an n-dimensional velocimetry experiment usually provides noisy flow velocity images on a domain I ⊂ ℝn (sometimes called the imaging region), depicting the measured flow velocity u* inside an object Ω ⊂ I with boundary ∂Ω = Γ ∪ Γi ∪ Γo—that is, a boundary, Γ, delimiting regions of I which do and do not contain the fluid, and inlet and outlet boundaries, Γi and Γo respectively, located at the edges of I. These features can be seen, for example, in FIG. 1.


An appropriate model for this procedure is the Navier-Stokes problem:

$$\begin{aligned}
u\cdot\nabla u - \nu\,\Delta u + \nabla p &= 0 &&\text{in }\Omega\\
\nabla\cdot u &= 0 &&\text{in }\Omega\\
u &= 0 &&\text{on }\Gamma\\
u &= g_i &&\text{on }\Gamma_i\ \text{where an inlet exists}\\
-\nu\,\partial_n u + p\,n &= g_o &&\text{on }\Gamma_o\ \text{where an outlet exists}
\end{aligned}\qquad(2.1)$$




where u is the velocity, p is the reduced pressure calculated as the raw pressure p0 divided by the fluid density, ρ, ν is the kinematic viscosity, n is a unit normal vector on ∂Ω, and ∂n ≡ n·∇ is the normal derivative on ∂Ω. Boundary conditions are needed to describe behaviour on the parts of the boundary which lie at the edge of the imaging space (and so do not necessarily restrict the fluid flow, but allow flow in and out, for example): gi on the inlet Γi and go on the outlet Γo. The inlet condition gi is a Dirichlet boundary condition, while go is a natural (traction) boundary condition at the outlet.


The data space is denoted by 𝒟 and the model space by ℳ, and it is assumed that both spaces are subspaces of L2. In the 2D case, u*=(ux*, uy*), and the covariance operator is introduced as:

$$C_{u^*} = \mathrm{diag}\!\left(\sigma_{u_x^*}^{2}\,I,\;\sigma_{u_y^*}^{2}\,I\right)\qquad(2.2)$$




where σu*x2 and σu*y2 are the Gaussian noise variances of u*x and u*y respectively, and I is the identity operator. The discrepancy between the measured velocity field u*∈D and the modelled velocity field u∈M is measured on the data space D using the reconstruction error functional:

$$\mathcal{E}(u) = \tfrac{1}{2}\,\big\|u^{*}-S u\big\|_{C_{u^*}}^{2}
= \tfrac{1}{2}\int_{I}\big(u^{*}-S u\big)\cdot C_{u^*}^{-1}\big(u^{*}-S u\big)\qquad(2.3)$$




where S : ℳ → 𝒟 is the L2-projection from the model space ℳ to the data space 𝒟.
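By way of illustration only, the covariance-weighted discrepancy (2.3) can be evaluated on a pixel grid as in the following Python sketch, in which the projection S is taken to be the identity (data and model sampled on the same grid); this simplification is an assumption made purely for illustration.

    import numpy as np

    def reconstruction_error(u_star, u_model, sigma_x, sigma_y):
        # Covariance-weighted discrepancy of equation (2.3) for a 2D velocity image.
        # u_star, u_model : arrays of shape (2, ny, nx) holding (u_x, u_y)
        # sigma_x, sigma_y: Gaussian noise standard deviations of u*_x and u*_y
        rx = (u_star[0] - u_model[0]) / sigma_x
        ry = (u_star[1] - u_model[1]) / sigma_y
        return 0.5 * (np.sum(rx ** 2) + np.sum(ry ** 2))

    # toy usage
    rng = np.random.default_rng(0)
    u_true = np.zeros((2, 32, 32))
    scales = np.array([0.1, 0.05]).reshape(2, 1, 1)
    u_noisy = u_true + rng.normal(size=(2, 32, 32)) * scales
    print(reconstruction_error(u_noisy, u_true, 0.1, 0.05))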


The goal in this invention is to infer the unknown parameters of the Navier-Stokes problem (2.1) such that the model velocity u approximates the noisy measured velocity u* in the covariance-weighted L2-metric defined by Cu*. In the general case, the unknown model parameters of (2.1) are the shape of Ω (i.e. exactly where the boundary constraining the fluid is located), the kinematic viscosity ν, and the boundary conditions gi, go.


This inverse Navier-Stokes problem leads to the nonlinearly constrained optimization problem:

$$\text{find } u^{\circ} \leftarrow \underset{\Omega,\,x}{\arg\min}\;\mathcal{E}\big(u(\Omega;x)\big),
\quad\text{such that } u \text{ satisfies equation (2.1)}\qquad(2.4)$$





where u° is the reconstructed velocity field, and x=(gi; go; ν). Like most inverse problems, (2.4) is ill-posed and hard to solve. To alleviate the ill-posedness of the problem, the search for values of the unknowns (Ω; x) needs to be restricted to function spaces of sufficient regularity.


Consider FIG. 1, which illustrates the problem addressed herein. Given the images of a measured velocity field u*, an inverse Navier-Stokes problem is solved to infer the boundary Ω, the kinematic viscosity v, and the inlet velocity profile on Γi. The solution to this inverse problem is a reconstructed velocity field uo, from which noise and artefacts (u*−Suo) have been filtered out. The upper left image shows the input image data, including artefacts, the central upper image shows the output, having had the noise and artefacts removed in accordance with the methods described herein, and the upper right image shows the residual noise and artefacts which needed to be subtracted from the input velocity to derive the reconstructed velocity field. The lower part of FIG. 1 shows the definitions of the various parts of the data. The entire imaging region is denoted I, the fluid is located in a subset of I, Ω, with boundary ∂Ω=Γ∪Γi∪Γo—that is, a boundary, Γ, delimiting regions of I which do and do not contain the fluid, and inlet and outlet boundaries, Γi, and Γo respectively.


The regularization proceeds as follows. If x ∈ L2 is an unknown parameter, one way to regularize the inverse problem (2.4) is to search for minimizers of the augmented functional 𝒥 = ℰ + ℛ, where:

$$\mathcal{R}(x) = \big\|\alpha\,(x-\bar{x})\big\|^{2} + \big\|\beta\,(\nabla x-\nabla\bar{x})\big\|^{2}\qquad(2.5)$$




is a regularization norm for a given prior assumption x̄, and for given weights α, β > 0. This simple idea can be quite effective because, by minimizing 𝒥, x is forced to lie in a subspace of L2 having higher regularity, namely H1, and as close to the prior value x̄ as α, β allow. However, a drawback of this method is that, in this setting, the choice of α, β, and even the form of ℛ, is arbitrary.


There is a more intuitive approach that recovers the form of the regularization norm ℛ from a probabilistic viewpoint. In the setting of the Hilbert space L2, a Gaussian measure γ ~ 𝒩(m, C) has the property that its finite-dimensional projections are multivariate Gaussian distributions, and it is uniquely defined by its mean m ∈ L2 and its covariance operator C : L2 → L2. This can be seen by noting that the mean of a Gaussian measure in L2 is given by:

$$m \equiv \mathbb{E}\,h := \int_{L^{2}} h\;\gamma(\mathrm{d}h)$$




and the covariance operator C : L2 → L2 and the covariance C : L2 × L2 → ℝ are defined by:

$$C x := \int_{L^{2}} h\,\langle x,h\rangle\;\gamma(\mathrm{d}h),
\qquad
C(x,x') := \int_{L^{2}} \langle x,h\rangle\,\langle x',h\rangle\;\gamma(\mathrm{d}h)$$





noting that ⟨Cx, x′⟩ = C(x, x′). The above (Bochner) integrals define integration over the function space L2, under the measure γ, and are well defined due to Fernique's theorem. These integrals can be computed directly by sampling the Gaussian measure with a Karhunen-Loève expansion (as set out below in the discussion of estimation of uncertainty).


It can be shown that there is a natural Hilbert space Hγ that corresponds to γ, and that

$$H_{\gamma} = \sqrt{C}\,\big(L^{2}\big)$$

In other words, if x is a random function distributed according to γ, any realization of x lies in Hγ, which is the image of L2 under √C. Furthermore, the corresponding inner product:

$$\langle x, x'\rangle_{C} = \big\langle C^{-\frac{1}{2}}x,\;C^{-\frac{1}{2}}x'\big\rangle\qquad(2.6)$$




is the covariance between x and x′, and the norm ∥x∥C2 = ⟨x, x⟩C is the variance of x. Therefore, if x is an unknown parameter for which a priori statistical information is available, and if the Gaussian assumption can be justified, the following form can be chosen:

$$\mathcal{R}(x) = \tfrac{1}{2}\,\big\|x-\bar{x}\big\|_{C}^{2}\qquad(2.7)$$




In this way, 𝒥 = ℰ + ℛ increases as the variance of x increases. Consequently, minimizing 𝒥 penalizes improbable realizations of the reconstruction.


As mentioned previously, the unknown model parameters of the Navier-Stokes problem (2.1) are the kinematic viscosity v, the boundary conditions gi, go, and the shape of Ω. Since the kinematic viscosity v is considered to be constant, the regularizing norm is simply:

$$\tfrac{1}{2}\,\big|\nu-\bar{\nu}\big|_{\sigma_{\nu}}^{2}
= \frac{1}{2\sigma_{\nu}^{2}}\,\big|\nu-\bar{\nu}\big|^{2}\qquad(2.8)$$




where ν̄ is a prior guess for ν, and σν2 is the variance of ν. For the Dirichlet boundary condition, gi ∈ L2(Γi), it is appropriate to choose the exponential covariance function:

$$C(x,x') = \frac{\sigma_{g_i}^{2}}{2}\,\exp\!\left(-\frac{|x-x'|}{\ell}\right)\qquad(2.9)$$




with variance σgi2 and characteristic length ℓ. For zero-Dirichlet (no-slip) or zero-Neumann boundary conditions on ∂Γi, equation (2.9) leads to the norm:

$$\big\|g_i\big\|_{C_{g_i}}^{2}
\equiv \frac{1}{\sigma_{g_i}^{2}}\int_{\Gamma_i}\Big(g_i^{2} + \ell^{2}\,\big(\partial g_i\big)^{2}\Big)\qquad(2.10)$$




Using integration by parts the covariance operator is found to be:

$$C_{g_i} = \sigma_{g_i}^{2}\,\big(I - \ell^{2}\tilde{\Delta}\big)^{-1}\qquad(2.11)$$




where Δ̃ is the L2-extension of the Laplacian Δ that incorporates the boundary condition gi=0 on ∂Γi.
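By way of illustration only, the exponential covariance (2.9) and the associated prior-weighted norm may be assembled on a discretized inlet boundary as in the following Python sketch; the grid, the prefactor convention and the parameter values are illustrative assumptions.

    import numpy as np

    def exponential_covariance(s, sigma, ell):
        # Dense covariance matrix C(s_i, s_j) = (sigma^2 / 2) exp(-|s_i - s_j| / ell)
        # on a one-dimensional discretization of the inlet boundary (sketch of (2.9)).
        d = np.abs(s[:, None] - s[None, :])
        return 0.5 * sigma ** 2 * np.exp(-d / ell)

    # prior-weighted norm 0.5 * (g - g_bar)^T C^{-1} (g - g_bar), cf. (2.7)
    s = np.linspace(0.0, 1.0, 64)             # inlet coordinate
    C = exponential_covariance(s, sigma=2.0, ell=0.1)
    g_bar = 1.5 * 4.0 * s * (1.0 - s)         # parabolic prior guess for g_i
    g = g_bar + 0.05 * np.sin(8 * np.pi * s)  # a perturbed candidate profile
    r = g - g_bar
    print(0.5 * r @ np.linalg.solve(C, r))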


For the natural boundary condition, go ∈ L2(Γo), the same covariance operator can be used, but with Δ̃ equipped with zero-Neumann boundary conditions, i.e. ∂ngo=0 on ∂Γo. Lastly, for the shape of Ω, which is implicitly represented with a signed distance function ϕ± (defined in more detail below), the following norm is an appropriate choice:

$$\tfrac{1}{2}\,\big\|\bar{\phi}_{\pm}-\phi_{\pm}\big\|_{C_{\phi_{\pm}}}^{2}
= \frac{1}{2\sigma_{\phi_{\pm}}^{2}}\,\big\|\bar{\phi}_{\pm}-\phi_{\pm}\big\|_{L^{2}(I)}^{2}\qquad(2.12)$$




where σϕ± ∈ ℝ and ϕ̄± ∈ L2(I) represents an initial guess for the signed distance function. Additional regularization for the boundary ∂Ω (i.e. the zero level-set of ϕ±) is needed and is described in more detail below. Based on the above results, the regularization norm for the unknown model parameters is

$$\mathcal{R}(x,\phi_{\pm}) =
\tfrac{1}{2}\big|\nu-\bar{\nu}\big|_{\sigma_{\nu}}^{2}
+ \tfrac{1}{2}\big\|g_i-\bar{g}_i\big\|_{C_{g_i}}^{2}
+ \tfrac{1}{2}\big\|g_o-\bar{g}_o\big\|_{C_{g_o}}^{2}
+ \tfrac{1}{2}\big\|\bar{\phi}_{\pm}-\phi_{\pm}\big\|_{C_{\phi_{\pm}}}^{2}\qquad(2.13)$$




An additional constraint to be applied to the system is that of matching physical reality. This is achieved by forming an inverse Navier-Stokes problem. This sub-process begins by testing the Navier-Stokes problem (2.1) with functions (v,q)∈H1(Ω)×L2(Ω), and after integrating by parts, the weak form is obtained:

$$\mathcal{M}(\Omega)\,(u,p,v,q;x) \equiv
\int_{\Omega}\Big(v\cdot(u\cdot\nabla u) + \nu\,\nabla v:\nabla u - (\nabla\cdot v)\,p - q\,(\nabla\cdot u)\Big)
+ \int_{\Gamma_o} v\cdot g_o
+ \int_{\Gamma\cup\Gamma_i} v\cdot\big(-\nu\,\partial_n u + p\,n\big)
+ \mathcal{N}(v,q,u;g_i) + \mathcal{N}(v,q,u;0) = 0\qquad(2.14)$$




where 𝒩 is the Nitsche penalty term:

$$\mathcal{N}(v,q,u;z) \equiv \int_{T}\big(-\nu\,\partial_n v + q\,n + \eta\,v\big)\cdot\big(u-z\big)\qquad(2.15)$$




which weakly imposes the Dirichlet boundary condition z∈L2(T) on a boundary T, given a penalization constant η. The augmented reconstruction error functional is then defined as:

$$\mathcal{J}(\Omega)\,(u,p,v,q;x) = \mathcal{E}(u) + \mathcal{R}(x,\phi_{\pm}) + \mathcal{M}(\Omega)\,(u,p,v,q;x)\qquad(2.16)$$




which contains the regularization terms ℛ and the model constraint ℳ, such that u weakly satisfies (2.1). To reconstruct the measured velocity field u* and find the unknowns (Ω, x), 𝒥 should be minimized. This can be achieved by solving its associated Euler-Lagrange system as set out in the following.


In order to derive the Euler-Lagrange equations for 𝒥, consider the initial definition:

$$\mathcal{U}' = \Big\{\,u'\in H^{1}(\Omega)\;:\;u'\big|_{\Gamma\cup\Gamma_i}\equiv 0\,\Big\}\qquad(2.17)$$




to be the space of admissible velocity perturbations u′, and 𝒫′ ⊂ L2(Ω) to be the space of admissible pressure perturbations p′, such that −ν∂nu′+p′n|Γo=0. Further defining the first variation (denoted δy𝒥 for the first variation of a functional 𝒥 with respect to a variable y) of terms in the error functional ℰ, it is seen that:

$$\delta_{u}\mathcal{E} \equiv \frac{\mathrm{d}}{\mathrm{d}\tau}\,\mathcal{E}(u+\tau u')\Big|_{\tau=0}
= \int_{\Omega}-C_{u^*}^{-1}\big(u^{*}-S u\big)\cdot S u'
= \int_{\Omega}-S^{\dagger}C_{u^*}^{-1}\big(u^{*}-S u\big)\cdot u'
\equiv \big\langle D_{u}\mathcal{E},\,u'\big\rangle_{\Omega}\qquad(2.18)$$




where † denotes the Hermitian adjoint of an operator, and Du represents a generalised derivative operator (with respect to u). Adding together the first variations of ℳ with respect to (u, p), the following is derived:

$$\delta_{u}\mathcal{M} \equiv \frac{\mathrm{d}}{\mathrm{d}\tau}\,\mathcal{M}(u+\tau u',\,\cdot)\Big|_{\tau=0},
\qquad
\delta_{p}\mathcal{M} \equiv \frac{\mathrm{d}}{\mathrm{d}\tau}\,\mathcal{M}(\cdot,\,p+\tau p',\,\cdot)\Big|_{\tau=0}$$



and after integrating by parts:

$$\delta_{u}\mathcal{M} + \delta_{p}\mathcal{M} =
\int_{\Omega}\Big(-u\cdot\big(\nabla v + (\nabla v)^{\dagger}\big) - \nu\,\Delta v + \nabla q\Big)\cdot u'
+ \int_{\Omega}(\nabla\cdot v)\,p'
+ \int_{\partial\Omega}\Big((u\cdot n)\,v + (u\cdot v)\,n + \nu\,\partial_n v - q\,n\Big)\cdot u'
+ \int_{\Gamma\cup\Gamma_i} v\cdot\big(-\nu\,\partial_n u' + p'\,n\big)
+ \mathcal{N}(v,q,u';0)\qquad(2.19)$$




The integration by parts formulae for the nonlinear term in the above equation are:

$$\begin{aligned}
\int_{\Omega}(u'\cdot\nabla u)\cdot v
&= \int_{\Omega} u'_j\,\partial_j u_i\,v_i
= -\int_{\Omega}\big(u'_j\,u_i\,\partial_j v_i + \partial_j u'_j\,u_i\,v_i\big)
+ \int_{\partial\Omega} u'_j\,n_j\,u_i\,v_i\\
&= -\int_{\Omega}\big(u\cdot(\nabla v)^{\dagger}\big)\cdot u'
+ \int_{\partial\Omega}(u\cdot v)\,(n\cdot u')\\[4pt]
\int_{\Omega}(u\cdot\nabla u')\cdot v
&= \int_{\Omega} u_j\,\partial_j u'_i\,v_i
= -\int_{\Omega}\big(\partial_j u_j\,u'_i\,v_i + u_j\,u'_i\,\partial_j v_i\big)
+ \int_{\partial\Omega} u_j\,u'_i\,n_j\,v_i\\
&= -\int_{\Omega}(u\cdot\nabla v)\cdot u'
+ \int_{\partial\Omega}(u\cdot n)\,(v\cdot u')
\end{aligned}$$









Since ℛ does not depend on (u, p), (2.18) and (2.19) can be used to assemble the optimality conditions of 𝒥 for (u, p):

$$\big\langle D_{u}\mathcal{J},\,u'\big\rangle_{\Omega} = 0,
\qquad
\big\langle D_{p}\mathcal{J},\,p'\big\rangle_{\Omega} = 0\qquad(2.20)$$




For (2.20) to hold true for all perturbations (u′, p′) ∈ 𝒰′ × 𝒫′, it can be deduced that (v, q) must satisfy the following adjoint Navier-Stokes problem:

$$\begin{aligned}
-u\cdot\big(\nabla v + (\nabla v)^{\dagger}\big) - \nu\,\Delta v + \nabla q &= -D_{u}\mathcal{E} &&\text{in }\Omega\\
\nabla\cdot v &= 0 &&\text{in }\Omega\\
v &= 0 &&\text{on }\Gamma,\ \text{and on }\Gamma_i\ \text{where an inlet exists}\\
(u\cdot n)\,v + (u\cdot v)\,n + \nu\,\partial_n v - q\,n &= 0 &&\text{on }\Gamma_o\ \text{where an outlet exists}
\end{aligned}\qquad(2.21)$$




In this context, v is the adjoint velocity and q is the adjoint pressure, which both vanish when u≡u*. Note also that boundary conditions for the adjoint problem (2.21) are chosen to make the boundary terms of (2.19) vanish, and that these boundary conditions are subject to the choice of U′, which, in turn, depends on the boundary conditions of the (primal) Navier-Stokes problem.


To find the shape derivative of an integral defined in Ω when the boundary ∂Ω deforms with speed 𝒱, Reynolds' transport theorem may be used. For the bulk integral of a function f : Ω → ℝ, it is calculated that:

$$\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\int_{\Omega(\tau)} f\right)\Bigg|_{\tau=0}
= \int_{\Omega} f' + \int_{\partial\Omega} f\,\big(\mathcal{V}\cdot n\big)\qquad(2.22)$$




and for the boundary integral of ƒ:

$$\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\int_{\partial\Omega(\tau)} f\right)\Bigg|_{\tau=0}
= \int_{\partial\Omega} f' + \big(\partial_n + \kappa\big)\,f\,\big(\mathcal{V}\cdot n\big)\qquad(2.23)$$




where ƒ′ is the shape derivative of ƒ (due to 𝒱), κ is the summed curvature of ∂Ω, and 𝒱 ≡ ζn, with ζ ∈ L2(∂Ω), is the Hadamard parameterization of the speed field. In other words, the boundary deforms (i.e. is perturbed) in a direction perpendicular to the boundary at all points, with a speed ζ. Any boundary that is a subset of ∂I, i.e. the edge of the image (or imaging region) I, is non-deforming (is not perturbed) and therefore the second term of the above integrals vanishes. The only boundary that deforms is Γ ⊂ ∂Ω. For brevity, a prime (′) is used to denote the shape perturbation of an integral.


Using (2.22) on ℰ:

$$\mathcal{E}' = \big\langle D_{u}\mathcal{E},\,u'\big\rangle_{\Omega}\qquad(2.24)$$




where Duℰ is given by (2.18). Using (2.22) and (2.23) on ℳ, the shape derivatives problem for (u′, p′) is obtained:

$$\begin{aligned}
u'\cdot\nabla u + u\cdot\nabla u' - \nu\,\Delta u' + \nabla p' &= 0 &&\text{in }\Omega\\
\nabla\cdot u' &= 0 &&\text{in }\Omega\\
u' &= -\partial_n u\,\big(\mathcal{V}\cdot n\big) &&\text{on }\Gamma\\
u' &= 0 &&\text{on }\Gamma_i\ \text{where an inlet exists}\\
-\nu\,\partial_n u' + p'\,n &= 0 &&\text{on }\Gamma_o\ \text{where an outlet exists}
\end{aligned}\qquad(2.25)$$




which can be used directly to compute the velocity and pressure perturbations for a given speed field 𝒱. It is observed that (u′, p′) ≡ 0 when 𝒱·n = ζ ≡ 0. Testing the shape derivatives problem (2.25) with (v, q), and adding the appropriate Nitsche terms for the weakly enforced Dirichlet boundary conditions, the following expression is obtained:

$$\mathcal{M}' =
\int_{\Omega}\Big(v\cdot\big(u'\cdot\nabla u + u\cdot\nabla u'\big) + \nu\,\nabla v:\nabla u' - (\nabla\cdot v)\,p' - q\,(\nabla\cdot u')\Big)
+ \int_{\Gamma\cup\Gamma_i} v\cdot\big(-\nu\,\partial_n u' + p'\,n\big)
+ \mathcal{N}(v,q,u';0)
+ \mathcal{N}\big(v,q,u';-\zeta\,\partial_n u\big) = 0\qquad(2.26)$$




If Ii represents the four integrals in (2.19), integrating by parts (2.26) yields:

$$\mathcal{M}' = \sum_{i=1}^{4} I_i
+ \mathcal{N}(v,q,u';0)
+ \mathcal{N}\big(v,q,u';-\zeta\,\partial_n u\big) = 0\qquad(2.27)$$




and, due to the adjoint problem (2.21):

$$\mathcal{M}' = \mathcal{E}'
+ \int_{\Gamma}\big(-\nu\,\partial_n v + q\,n\big)\cdot\zeta\,\partial_n u = 0\qquad(2.28)$$




since u′|Γi=0, and u|Γ≡0. Therefore, the shape perturbation of 𝒥 is:

$$\mathcal{J}' \equiv \big\langle D\mathcal{J},\,\mathcal{V}\cdot n\big\rangle_{\Gamma}
\equiv \big\langle D_{\zeta}\mathcal{J},\,\zeta\big\rangle_{\Gamma}
= \mathcal{E}' + \mathcal{R}' + \mathcal{M}' = 0\qquad(2.29)$$




which, due to (2.28) and ℛ′ ≡ 0, takes the form:

$$\big\langle D_{\zeta}\mathcal{J},\,\zeta\big\rangle_{\Gamma}
= \big\langle \partial_n u\cdot\big(-\nu\,\partial_n v + q\,n\big),\,\zeta\big\rangle_{\Gamma}\qquad(2.30)$$




where Dζ𝒥 is the shape gradient. Note that the shape gradient depends on the normal gradient of the (primal) velocity field and on the traction, (−ν∇v+qI), that the adjoint flow exerts on Γ.


The unknown model parameters {x} (that is, the set of parameters {x}) have an explicit effect on ℛ and ℳ and can therefore be obtained by taking their first variations. For the Dirichlet-type boundary condition at the inlet this gives:

$$\begin{aligned}
\big\langle D_{g_i}\mathcal{J},\,g_i'\big\rangle_{\Gamma_i}
&= \big\langle \nu\,\partial_n v - q\,n - \eta\,v + C_{g_i}^{-1}\big(g_i-\bar{g}_i\big),\;g_i'\big\rangle_{\Gamma_i}\\
&= \big\langle C_{g_i}\big(\nu\,\partial_n v - q\,n - \eta\,v\big) + g_i - \bar{g}_i,\;g_i'\big\rangle_{C_{g_i}}
\equiv \big\langle \hat{D}_{g_i}\mathcal{J},\,g_i'\big\rangle_{C_{g_i}}
\end{aligned}\qquad(2.31)$$




where D̂gi𝒥 is the steepest descent direction that corresponds to the covariance-weighted norm. For the natural boundary condition at the outlet:

$$\begin{aligned}
\big\langle D_{g_o}\mathcal{J},\,g_o'\big\rangle_{\Gamma_o}
&= \big\langle v + C_{g_o}^{-1}\big(g_o-\bar{g}_o\big),\;g_o'\big\rangle_{\Gamma_o}\\
&= \big\langle C_{g_o}\,v + g_o - \bar{g}_o,\;g_o'\big\rangle_{C_{g_o}}
\equiv \big\langle \hat{D}_{g_o}\mathcal{J},\,g_o'\big\rangle_{C_{g_o}}
\end{aligned}\qquad(2.32)$$




Lastly, since the kinematic viscosity is considered to be constant within Ω, its generalized gradient is:

$$\begin{aligned}
\big\langle D_{\nu}\mathcal{J},\,\nu'\big\rangle
&= \Big\langle \int_{\Omega}\nabla v:\nabla u + \sigma_{\nu}^{-2}\big(\nu-\bar{\nu}\big),\;\nu'\Big\rangle\\
&= \Big\langle \sigma_{\nu}^{2}\int_{\Omega}\nabla v:\nabla u + \big(\nu-\bar{\nu}\big),\;\nu'\Big\rangle_{\sigma_{\nu}}
\equiv \big\langle \hat{D}_{\nu}\mathcal{J},\,\nu'\big\rangle_{\sigma_{\nu}}
\end{aligned}\qquad(2.33)$$




For a given step size τ > 0, the steepest descent directions (2.31) to (2.33) can be used either to update an unknown parameter x through:

$$x_{k+1} = x_{k} + \tau\,s_{k}\qquad(2.34)$$




with sk = D̂x𝒥. It is also possible to update the parameters by reconstructing an approximation H̃ of the inverse Hessian matrix, in the context of a quasi-Newton method, and thereby to compute sk = −H̃ D̂x𝒥. This latter approach is set out in more detail below.


To deform the boundary ∂Ω using the simple update formula (2.34) it is convenient to use a parametric surface representation. Several methods are possible, but here a choice is made to implicitly represent ∂Ω using signed distance functions ϕ±. The object Ω and its boundary ∂Ω are then identified with a particular function ϕ± so that:

$$\Omega = \big\{\,x\in I\;:\;\phi_{\pm}(x)<0\,\big\},
\qquad
\partial\Omega = \big\{\,x\in I\;:\;\phi_{\pm}(x)=0\,\big\}$$


A signed distance function ϕ± for Ω can be obtained by solving the Eikonal equation:

$$\big|\nabla\phi_{\pm}(x)\big| = 1
\quad\text{subject to}\quad
\phi_{\pm}\big|_{\partial\Omega} = 0\qquad(2.35)$$




One way to solve this problem is with level-set methods (see for example: “Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations” Journal of Computational Physics 79 (1), 12-49; “A fast marching level set method for monotonically advancing fronts” Proceedings of the National Academy of Sciences of the United States of America 93 (4), 1591-1595; “A level set method for inverse problems” Inverse Problems 17 (5), 1327-1355; “A framework for the construction of level set methods for shape optimization and reconstruction” Interfaces and Free Boundaries 5 (3), 301-329; “A Survey in Mathematics for Industry: A survey on level set methods for inverse problems and optimal design” European Journal of Applied Mathematics 16 (2), 263-301; “Combined state and parameter estimation in level-set methods” Journal of Computational Physics 399, 108950, arXiv: 1903.00321).


There is, however, a different approach, which relies on the heat equation (developed in "On the behavior of the fundamental solution of the heat equation with variable coefficients" Communications on Pure and Applied Mathematics 20 (2), 431-455; "Diffusion processes in a small time interval" Communications on Pure and Applied Mathematics 20 (4), 659-685; and "The heat method for distance computation" Communications of the ACM 60 (11), 90-99). The main result to be drawn from these, in order to justify the use of the heat equation for the approximation of ϕ±, states that:

$$d(x,\partial\Omega) = \lim_{\tau_1\to 0}\Big(-\sqrt{\tau_1}\,\log u(x,\tau_1)\Big),
\qquad x\in I\qquad(2.36)$$




where d(x, ∂Ω) is the Euclidean distance between any point x ∈ I and ∂Ω, and u is the solution of heat propagation away from ∂Ω:

$$\begin{aligned}
\big(I - \tau_1\,\Delta\big)\,u &= 0 &&\text{in } I\\
u &= 1 &&\text{on }\partial\Omega
\end{aligned}\qquad(2.37)$$




This result has been used to implement a smoothed distance function (see e.g. "The heat method for distance computation" Communications of the ACM 60 (11), 90-99), where it was referred to as the "heat method". Here, this method is slightly adapted to compute signed distance functions ϕ± in truncated domains. To compute ϕ±, therefore, equation (2.37) is first solved for τ1 ≪ 1, and ϕ± is then obtained by solving:

$$\begin{aligned}
\nabla\cdot\nabla\phi_{\pm} &= \nabla\cdot X &&\text{in } I\\
\partial_n\phi_{\pm} &= X\cdot n &&\text{on }\partial I\\
\phi_{\pm} &= 0 &&\text{on }\partial\Omega
\end{aligned}
\qquad\text{where}\quad
X = -\,\mathrm{sgn}(\psi)\,\frac{\nabla u}{|\nabla u|}\qquad(2.38)$$




with X being the normalized heat flux and ψ being a signed function such that ψ(x) is negative for points x in Ω and positive for points x outside Ω—i.e. to ensure that the extended normal vector points away from the boundary. This intermediate step (the solution of the two Poisson problems (2.37)-(2.38) instead of one) is taken to ensure that |∇ϕ±(x)| = 1, as required by equation (2.35). To deform the boundary ∂Ω, ϕ± is transported under the speed field 𝒱 ≡ ζn. The convection-diffusion problem for ϕ±(x, t) then reads:

$$\begin{aligned}
\partial_t\phi_{\pm} + \dot{\mathcal{V}}\cdot\nabla\phi_{\pm} - \epsilon_{\phi_{\pm}}\,\Delta\phi_{\pm} &= 0 &&\text{in } I\times(0,\tau]\\
\phi_{\pm} &= \phi_{\pm}^{0} &&\text{in } I\times\{t=0\}
\end{aligned}
\qquad\text{in which}\quad
\epsilon_{\phi_{\pm}} = \frac{|\dot{\mathcal{V}}|\,\ell}{\mathrm{Re}_{\phi_{\pm}}}\qquad(2.39)$$




where ϕ±0 denotes the signed distance function of the current domain, Ω, ϵϕ± is the diffusion coefficient, ℓ is a length scale, Reϕ± is a Reynolds number, and 𝒱̇ : I → ℝ × ℝ is an extension of 𝒱 : ∂Ω → ℝ × ℝ. That is, 𝒱̇ extends 𝒱 from its usual domain on the boundary only to the entire imaging region.


When (2.39) is solved for ϕ±(x, τ), the implicit representation of the perturbed domain at time t = τ (the step size) is obtained, but to do so it is necessary to extend 𝒱 to the whole of the imaging region I.
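By way of illustration only, the heat-method idea of equations (2.36)-(2.37) can be demonstrated in one dimension on a uniform grid, solving the screened-Poisson step with u = 1 on the boundary node and recovering the distance from -sqrt(tau) log u; this is a toy finite-difference sketch in Python, not the cut-cell implementation described later.

    import numpy as np

    def distance_heat_method_1d(n=200, length=1.0, tau=1e-4):
        # Solve (I - tau * d^2/dx^2) u = 0 with u = 1 at the boundary node x = 0 and a
        # zero-Neumann condition at the far end, then take d = -sqrt(tau) * log(u).
        h = length / (n - 1)
        x = np.linspace(0.0, length, n)
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = 1.0
        b[0] = 1.0                                  # u = 1 on the boundary
        for i in range(1, n - 1):
            A[i, i - 1] = -tau / h ** 2
            A[i, i] = 1.0 + 2.0 * tau / h ** 2
            A[i, i + 1] = -tau / h ** 2
        A[-1, -1] = 1.0                             # one-sided zero-Neumann closure
        A[-1, -2] = -1.0
        u = np.linalg.solve(A, b)
        return x, -np.sqrt(tau) * np.log(np.clip(u, 1e-300, None))

    x, d = distance_heat_method_1d()
    # d approximately recovers the distance to the boundary near x = 0
    print(np.max(np.abs(d[:100] - x[:100])))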


To extend 𝒱 to I, the normal vector n and the scalar function ζ, which are both initially defined on ∂Ω, are extended across I. The normal vector extension is easily obtained by:

$$\dot{n}(x) = \frac{\nabla\phi_{\pm}}{|\nabla\phi_{\pm}|} = \nabla\phi_{\pm},
\qquad x\in I\qquad(2.40)$$



since |∇ϕ±|=1, and an outward-facing extension is given by:

$$\dot{n}_{o}(x) = -\,\mathrm{sgn}(\phi_{\pm})\,\dot{n}\qquad(2.41)$$





This extended normal vector ṅo is then used to extend ζ ∈ L2(∂Ω) to ζ̇ ∈ L2(I), using the convection-diffusion problem:

$$\begin{aligned}
\partial_t\dot{\zeta} + \dot{n}_{o}\cdot\nabla\dot{\zeta} - \epsilon_{\zeta}\,\Delta\dot{\zeta} &= 0 &&\text{in } I\times(0,\tau_{\zeta}]\\
\dot{\zeta} &= \zeta &&\text{on }\partial\Omega\times(0,\tau_{\zeta}]\\
\dot{\zeta} &= 0 &&\text{on } I\times\{t=0\}
\end{aligned}
\qquad\text{in which}\quad
\epsilon_{\zeta} = \frac{|\dot{n}_{o}|\,\ell}{\mathrm{Re}_{\zeta}}\qquad(2.42)$$



In other words, ζ is convected along the predefined ṅo-streamlines and isotropic diffusion is added for regularization. Note that the streamlines may have complicated behaviour and, without diffusion, there may be no unique solution to the problem. The choice of ϵϕ± in (2.39) and ϵζ in (2.42) has been made so that the shape regularization depends only on the length scale ℓ and the Reynolds numbers Reζ and Reϕ±.


More precisely, the shape regularization depends only on Reζ and Reϕ± because the length scale ℓ is fixed to equal the smallest possible length scale of the modelled flow, which is the numerical grid spacing h for a uniform cartesian grid, such as is used when performing finite element analyses. For illustration, if ζ is considered to be the concentration of a dye on ∂Ω, then, using a simplified scaling argument similar to the growth of a boundary layer on a flat plate, it is observed that the diffusing dye at distance d from ∂Ω will extend over a width δ such that:

$$\delta \sim \sqrt{\frac{\epsilon_{\zeta}\,d}{|\dot{n}_{o}|}}
= \sqrt{\frac{\ell\,d}{\mathrm{Re}_{\zeta}}},
\qquad\text{or}\qquad
\frac{\delta}{\ell} \sim \sqrt{\frac{\alpha}{\mathrm{Re}_{\zeta}}}
\quad\text{when } d=\alpha\,\ell\qquad(2.43)$$




The above scaling approximation describes the dissipation rate of small-scale features, such as roughness, away from ∂Ω. This is therefore how Reζ and Reϕ± control the regularity of the boundary ∂Ωτ at time t = τ, which is given by (2.39). The parameter τζ can be taken to be large enough to find a steady state of (2.42). The linear initial value problems (2.39) and (2.42) are recast into their corresponding boundary value problems using backward-Euler temporal discretization, because the time-dependent solution is not of interest in the present context.
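By way of illustration only, a single backward-Euler step of a convection-diffusion extension problem of the form (2.42) reduces to one sparse linear solve, as in the following one-dimensional Python sketch; the constant convection speed, the upwind discretization and the boundary handling are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def extend_scalar_1d(zeta_boundary, n_dot, eps, tau, n=200, length=1.0):
        # One backward-Euler step of d_t z + n_dot * d_x z - eps * d_xx z = 0 on [0, length],
        # with z fixed to zeta_boundary at the left node (standing in for z = zeta on the
        # boundary) and a zero initial condition.
        h = length / (n - 1)
        main = np.full(n, 1.0 + 2.0 * tau * eps / h ** 2)
        lower = np.full(n - 1, -tau * (eps / h ** 2 + max(n_dot, 0.0) / h))
        upper = np.full(n - 1, -tau * (eps / h ** 2 - min(n_dot, 0.0) / h))
        main[1:-1] += tau * abs(n_dot) / h            # upwind treatment of the convection term
        A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr").tolil()
        A[0, :] = 0.0                                 # Dirichlet condition at the boundary node
        A[0, 0] = 1.0
        b = np.zeros(n)
        b[0] = zeta_boundary
        return spla.spsolve(A.tocsr(), b)

    z = extend_scalar_1d(zeta_boundary=1.0, n_dot=1.0, eps=0.01, tau=0.5)
    print(z[:5])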


The extended shape gradient (2.30), after taking into account the regularizing term for ϕ±, is therefore given by:

$$\begin{aligned}
\big\langle D_{\dot{\zeta}}\mathcal{J},\,\dot{\zeta}'\big\rangle_{I}
&= \big\langle \dot{\zeta} + C_{\phi_{\pm}}^{-1}\big(\bar{\phi}_{\pm}-\phi_{\pm}\big),\;\dot{\zeta}'\big\rangle_{I}\\
&= \big\langle C_{\phi_{\pm}}\,\dot{\zeta} + \bar{\phi}_{\pm}-\phi_{\pm},\;\dot{\zeta}'\big\rangle_{C_{\phi_{\pm}}}
\equiv \big\langle \hat{D}_{\dot{\zeta}}\mathcal{J},\,\dot{\zeta}'\big\rangle_{C_{\phi_{\pm}}}
\end{aligned}\qquad(2.44)$$




where ζ̇ is the extension of the shape gradient ζ = ∂nu·(−ν∂nv + qn), for x on Γ.


These principles are illustrated in FIG. 2. Here, the geometric flow of ∂Ω (whose outline, i.e. the shape of ∂Ω, is shown in FIG. 2a) relies on the computation of its signed distance field ϕ± (FIG. 2b, showing the level sets of ϕ±, with the zero (boundary) level set in black, negative values inside the boundary, positive values outside, and non-zero magnitudes shown in greyscale) and its normal vector extension ṅ superimposed on the magnitude of ϕ±, with portions inside ∂Ω being negative and portions outside being positive (FIG. 2c).


The shape gradient ζ, which is initially defined on ∂Ω (FIG. 2d), is extended to the whole image I (the imaging region). The extension of ζ to the whole image, ζ̇, is shown in FIGS. 2e and 2f. Shape regularization is achieved by increasing the diffusion coefficient ϵζ in order to mitigate small-scale perturbations when assimilating noisy velocity fields u*. FIG. 2f shows results at a lower value of Reζ than FIG. 2e—equal to 1 in FIG. 2e and equal to 0.01 in FIG. 2f.


The inverse Navier-Stokes problem for the reconstruction and the segmentation of noisy flow velocity images u* can now be written as:

$$\text{find } u^{\circ} \leftarrow \underset{\Omega,\,x}{\arg\min}\;\mathcal{J}(\Omega)\,(u,p,v,q;x)\qquad(2.45)$$




where 𝒥 is given by (2.16). The above optimization problem leads to an Euler-Lagrange system whose optimality conditions have been formulated above.


To precondition the steepest descent directions (2.31)-(2.33) and (2.44), the approximated inverse Hessian H̃ of each unknown is reconstructed using the BFGS quasi-Newton method with damping (see Fletcher, R., Practical Methods of Optimization, John Wiley & Sons). Due to the large scale of the problem, it is most convenient to work with the matrix-vector product representation of H̃. Consequently, the search directions are given by:

$$s = -
\begin{pmatrix}
\tilde{H}_{\dot{\zeta}} & & & \\
& \tilde{H}_{g_i} & & \\
& & \tilde{H}_{g_o} & \\
& & & \tilde{H}_{\nu}
\end{pmatrix}
\begin{pmatrix}
\hat{D}_{\dot{\zeta}}\mathcal{J}\\
\hat{D}_{g_i}\mathcal{J}\\
\hat{D}_{g_o}\mathcal{J}\\
\hat{D}_{\nu}\mathcal{J}
\end{pmatrix}\qquad(2.46)$$




and the unknown variables x are updated according to (2.34). The signed distance function ϕ± is perturbed according to (2.39), with 𝒱̇ ≡ −(H̃ζ̇ D̂ζ̇𝒥) ṅ.
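By way of illustration only, one damped BFGS update of a dense approximate inverse Hessian may be written in Python as follows; the Powell-style damping rule shown is one common safeguard and is given here as an assumption rather than as the exact rule used for the full problem, which is handled in matrix-free form.

    import numpy as np

    def bfgs_inverse_update(H, s, y, damping=0.2):
        # One damped BFGS update of the approximate inverse Hessian H.
        # s : parameter step x_{k+1} - x_k,  y : gradient change g_{k+1} - g_k.
        sy = float(s @ y)
        sHs = float(s @ np.linalg.solve(H, s))        # s^T B s with B = H^{-1}
        if sy < damping * sHs:                        # curvature too small: damp y
            theta = (1.0 - damping) * sHs / (sHs - sy)
            y = theta * y + (1.0 - theta) * np.linalg.solve(H, s)
            sy = float(s @ y)
        rho = 1.0 / sy
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

    # toy usage: two gradient evaluations of f(x) = 0.5 x^T A x
    A = np.diag([1.0, 10.0])
    x0 = np.array([1.0, 1.0]); x1 = np.array([0.9, 0.5])
    H = bfgs_inverse_update(np.eye(2), x1 - x0, A @ x1 - A @ x0)
    print(np.round(H, 3))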


Each line search is started with a global step size τ = 1, and the step size is halved until 𝒥((ϕ±, x)k+1) < 𝒥((ϕ±, x)k), where k is an indexing parameter which indicates how many iterations of the procedure have been completed. To update the flow field uk to uk+1, the Oseen problem for the updated parameters is solved:

$$\begin{aligned}
u_{k}\cdot\nabla u_{k+1} - \nu\,\Delta u_{k+1} + \nabla p_{k+1} &= 0\\
\nabla\cdot u_{k+1} &= 0
\end{aligned}\qquad(2.47)$$



with the boundary conditions given by (2.1). The process terminates when certain predetermined conditions are met. For example, the process may terminate if the covariance-weighted norm for the perturbations of the model parameters is below a user-specified tolerance. In other cases, the procedure may terminate if the line search fails to reduce 𝒥 on a subsequent iteration.


In algorithmic format, the procedure set out above for reconstruction and segmentation of noisy flow velocity images may be written:


Input: u*, initial guesses for the unknowns (Ω0; x0), regularization parameters controlling the form and weighting of the penalty terms.

    • begin
      k ← 0
      (ϕ±)k ← signed distance field (equations 2.37 and 2.38)
      (u, p)k ← Navier-Stokes problem for (ϕ±, x)k (equation 2.1)
    • while convergence criterion is not met do:
      (v, q) ← adjoint Navier-Stokes problem with uk (equation 2.21)
      D̂(·)𝒥 ← steepest ascent directions (equations 2.31-2.33 and 2.44)
      s, τ ← search directions and step size (equation 2.46)
      (ϕ±, x)k+1 ← perturb ϕ± and model parameters x (equations 2.39 and 2.34)
      (u, p)k+1 ← linearised Navier-Stokes problem for (ϕ±, x)k+1 (equation 2.47)
      k ← k + 1
    • end while
      (u°, p°) ← (u, p)k
      (Ω°, x°) ← (ϕ±, x)k
Output: reconstruction (u, p) and inferred model parameters (Ω, x).


From the above formulation, the reconstructed inverse Hessian H̃ can be used to provide estimates for the uncertainties of the model parameters. To simplify the description, let x denote an unknown parameter distributed according to 𝒩(xk, Ck). The linear approximation to the data u* is given by:

$$u^{*} = \mathcal{Z}\,x + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}\big(0,\,C_{u^*}\big)\qquad(2.48)$$




where u = 𝒵x, and where 𝒵 is the operator that encodes the linearized Navier-Stokes problem around the solution uk. To solve (2.45), x is updated as follows:

$$x_{k+1} = x_{k} + C\,\mathcal{Z}^{\dagger}C_{u^*}^{-1}\big(u^{*}-\mathcal{Z}\,x_{k}\big),
\qquad\text{with}\quad
C = \big(\mathcal{Z}^{\dagger}C_{u^*}^{-1}\mathcal{Z} + C_{x}^{-1}\big)^{-1}\qquad(2.49)$$




where 𝒵† is the operator that encodes the adjoint Navier-Stokes problem, and C is the posterior covariance operator. It can be shown that:

$$C = \big(\mathcal{Z}^{\dagger}C_{u^*}^{-1}\mathcal{Z} + C_{x}^{-1}\big)^{-1}
= \big(C_{x}\,\mathcal{Z}^{\dagger}C_{u^*}^{-1}\mathcal{Z} + I\big)^{-1}C_{x}
\equiv \tilde{H}_{x}\,C_{x}\qquad(2.50)$$




where H̃x is the reconstructed inverse Hessian for x. Note that H̃ by itself approximates (Cx𝒵†Cu*−1𝒵 + I)−1, and not C, because prior-preconditioned gradients are used (i.e. the steepest ascent directions D̂(⋅)𝒥) instead of the gradients D(⋅)𝒥 in the BFGS formula. Therefore, if C̃ ≡ H̃xCx is the approximated covariance matrix, then samples xk+1s from the posterior distribution can be drawn using the Karhunen-Loève expansion:

$$x_{k+1}^{s} = x_{k} + \sum_{k}\eta_{k}\,\sqrt{\lambda_{k}}\;\varphi_{k},
\qquad\text{with}\quad \eta_{k}\sim\mathcal{N}(0,1)\qquad(2.51)$$



where (λ, φ)k is the k-th eigenvalue/eigenvector pair of C̃. The variance of xk+1 can then be directly computed from the samples.
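By way of illustration only, the sampling step (2.51) may be carried out as follows when the approximated posterior covariance is available as a dense symmetric matrix; the toy covariance values are assumptions used only to exercise the code.

    import numpy as np

    def sample_posterior(x_k, C_tilde, n_samples=1000, seed=0):
        # Draw samples x_{k+1}^s = x_k + sum_k eta_k sqrt(lambda_k) phi_k, cf. (2.51),
        # using the Karhunen-Loeve (eigen) expansion of C_tilde.
        lam, phi = np.linalg.eigh(C_tilde)            # eigenpairs (lambda_k, phi_k)
        lam = np.clip(lam, 0.0, None)                 # guard against round-off negatives
        rng = np.random.default_rng(seed)
        eta = rng.standard_normal((n_samples, len(lam)))
        return x_k + (eta * np.sqrt(lam)) @ phi.T

    # toy usage: the sample variances recover the diagonal of C_tilde
    C = np.array([[0.04, 0.01], [0.01, 0.09]])
    samples = sample_posterior(np.zeros(2), C, n_samples=20000)
    print(np.round(samples.var(axis=0), 3))           # approximately [0.04, 0.09]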


The above boundary value problems can be solved numerically. In this case an immersed-boundary finite element method is used. In particular, the fictitious domain cut-cell finite element method (FEM) (introduced in "La pénalisation fantôme" Comptes Rendus Mathematique 348 (21-22), 1217-1220 and "Fictitious domain finite element methods using cut elements: II. A stabilized Nitsche method" Applied Numerical Mathematics 62 (4), 328-341) is implemented for the Poisson problem, and extended (in "A Stabilized Nitsche Fictitious Domain Method for the Stokes Problem" Journal of Scientific Computing 61 (3), 604-628, arXiv: 1206.1933; and "A stabilized Nitsche cut finite element method for the Oseen problem" Computer Methods in Applied Mechanics and Engineering 328, 262-300, arXiv: 1611.02895) to the Stokes and the Oseen problems.



A tessellation of I is defined, produced by square cells (pixels) K having sides of length h. Also defined are the set of cut-cells, consisting of the cells that are cut by the boundary ∂Ω, and the set of cells that are found inside Ω and which remain intact (not cut)—see the lower image in FIG. 1. It is assumed that the boundary ∂Ω is well-resolved, i.e. that the smallest length scale of ∂Ω is much larger than h (i.e. the grid size is such that ∂Ω can be accurately represented with negligible loss of detail).


The discretized space is generated by assigning a bilinear quadrilateral finite element Qf to every cell K. To compute the integrals, a standard Gaussian quadrature procedure is used for the intact cells inside Ω, while for cut-cells, where integration must be considered only over the intersection K∩Ω, the approach in "Fast and Accurate Computation of Polyhedral Mass Properties" Journal of Graphics Tools 1 (2), 31-50 may be used, which relies on the divergence theorem and simply replaces the integral over K∩Ω with an integral over ∂(K∩Ω).


The boundary integral on ∂(K∩Ω) is then easily computed using one-dimensional Gaussian quadrature. Since an inf-sup unstable finite element pair (Qf-Qf) is used, a pressure-stabilizing Petrov-Galerkin formulation is also used, including grad-div stabilization for preconditioning.


To solve the Navier-Stokes problem, fixed-point iteration (Oseen linearization) is used and the coupled system is solved using the Schur complement, with an iterative solver (LGMRES) for the outer loops and a direct sparse solver (UMFPACK) for the inner loops. The immersed FEM solver and all the necessary numerical operations of the algorithm may be implemented in Python, using its standard libraries for scientific computing, namely SciPy and NumPy. Computationally intensive functions are accelerated using Numba and CuPy.
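By way of illustration only, the outer/inner solver arrangement mentioned above can be demonstrated on a small saddle-point system, using LGMRES for an outer Krylov iteration on the pressure Schur complement and a direct sparse factorization (SuperLU here, standing in for UMFPACK) for the inner velocity solves; the random matrices are stand-ins for the actual finite element operators, not the operators of the disclosed method.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    nu_, np_ = 60, 20                                  # velocity / pressure unknowns

    A = sp.eye(nu_) * 4.0 + sp.random(nu_, nu_, density=0.05, random_state=0)
    A = (A + A.T).tocsc()                              # symmetric, diagonally dominant stand-in
    B = sp.random(np_, nu_, density=0.2, random_state=1).tocsc()
    f = rng.standard_normal(nu_)
    g = rng.standard_normal(np_)

    A_lu = spla.splu(A)                                # direct inner solver (cf. UMFPACK)

    def schur_matvec(p):
        # Apply the Schur complement S = B A^{-1} B^T using the stored LU factors.
        return B @ A_lu.solve(B.T @ p)

    S = spla.LinearOperator((np_, np_), matvec=schur_matvec)
    rhs = B @ A_lu.solve(f) - g
    p, info = spla.lgmres(S, rhs, atol=1e-10)          # outer Krylov loop
    u = A_lu.solve(f - B.T @ p)
    print(info, np.linalg.norm(A @ u + B.T @ p - f), np.linalg.norm(B @ u - g))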


Examples

In this section results are presented of reconstruction and segmentation of noisy flow images by solving the inverse Navier-Stokes problem (2.45) using the algorithms described above. The reconstructed velocity field is then used to estimate the wall shear rate on the reconstructed boundary.


The signal-to-noise ratio (SNR) of the ux* image is defined as:

$$\mathrm{SNR}_{x} = \frac{\mu_{x}}{\sigma_{u_x^*}},
\qquad
\mu_{x} = \frac{1}{|\Omega|}\int_{\Omega}\big|u_{x}\big|\qquad(3.1)$$




where σu*x is the noise standard deviation, Ω is the ground truth domain, |Ω| is the volume of this domain, and |ux| is the magnitude of the ground truth x-velocity component in Ω. Also defined are the component-wise averaged, noise-relative reconstruction error ℰx and the total relative reconstruction error ℰ, given by:

$$\mathcal{E}_{x} \equiv \log\!\left(\frac{1}{|\Omega|}\int_{\Omega}\frac{\big|u_{x}-S u_{x}^{\circ}\big|}{\sigma_{u_x^*}}\right)
\qquad\text{and}\qquad
\mathcal{E} \equiv \frac{\big\|u-S u^{\circ}\big\|_{L^{1}(I)}}{\big\|u\big\|_{L^{1}(I)}}\qquad(3.2)$$



Similar measures also apply for the uy* image (and also for the z-dimension when 3D data is considered).
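By way of illustration only, the image metrics (3.1)-(3.2) may be computed as follows, assuming the ground truth velocity, the reconstruction and a boolean mask of the ground-truth domain are available on the same grid.

    import numpy as np

    def snr_component(u_true, sigma, mask):
        # SNR of one velocity component, equation (3.1): mean |u| over the ground-truth
        # domain divided by the noise standard deviation.
        return np.mean(np.abs(u_true[mask])) / sigma

    def relative_errors(u_true, u_rec, sigma, mask):
        # Component-wise noise-relative error and total relative L1 error, cf. (3.2).
        comp = np.log(np.mean(np.abs(u_true[mask] - u_rec[mask]) / sigma))
        total = np.sum(np.abs(u_true - u_rec)) / np.sum(np.abs(u_true))
        return comp, total

    # toy usage on one component
    u = np.ones((64, 64)); mask = np.ones_like(u, dtype=bool)
    print(snr_component(u, 1.0 / 3.0, mask))
    print(relative_errors(u, 0.98 * u, 1.0 / 3.0, mask))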


The volumetric flow rate Q, the cross-section area at the inlet A, and the diameter at the inlet D are all defined in setting up the problem. The Reynolds number is based on the reference velocity U=Q/A, and the reference length D.


Example 1: Synthetic Data for 2D Flow in a Converging Channel

Testing the algorithm is initially performed on a flow through a symmetric converging channel having a taper ratio of 0.67. To generate synthetic 2D Navier-Stokes data the Navier-Stokes problem (2.1) is solved for a parabolic inlet velocity profile (gi), zero-traction boundary conditions at the outlet (go=0), and Re≅534, in order to obtain the ground truth velocity u. The synthetic data u* is then generated by corrupting the components of u with white Gaussian noise such that SNRx=SNRy=3.
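By way of illustration only, corrupting a ground-truth component to a prescribed SNR in the sense of (3.1) may be done as follows; the parabolic toy profile is an assumption used only to exercise the code.

    import numpy as np

    def corrupt_to_snr(u_true, mask, snr_target, seed=0):
        # Add white Gaussian noise to one velocity component so that the SNR, defined as
        # in (3.1), equals snr_target inside the domain mask.
        mu = np.mean(np.abs(u_true[mask]))
        sigma = mu / snr_target
        rng = np.random.default_rng(seed)
        return u_true + rng.normal(scale=sigma, size=u_true.shape), sigma

    # toy usage: a parabolic channel profile corrupted to SNR = 3
    y = np.linspace(-1.0, 1.0, 128)
    u = np.tile(1.0 - y ** 2, (128, 1))
    mask = np.ones_like(u, dtype=bool)
    u_star, sigma = corrupt_to_snr(u, mask, snr_target=3.0)
    print(round(np.mean(np.abs(u[mask])) / sigma, 2))  # = 3.0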


This test case only tries to infer Ω and gi. Note that, in the present method, the initial guess x0 of an unknown x equals the mean of its prior distribution, i.e. x0 = x̄. The algorithm is initiated using bad initial guesses (high uncertainty in the priors) for both of the unknown parameters (see Table 1). The initial guess for Ω, labelled Ω0, is a rectangular domain with height equal to 0.7D, centred in the image domain. For gi0, a parabolic velocity profile is used with a peak velocity of approximately 2U that fits the inlet of Ω0. For comparison, gi has a peak velocity of 1.5U, and it is defined on a different domain, namely Ω. The algorithm manages to reconstruct and segment the noisy flow images in thirty-nine iterations, with a total reconstruction error ℰ ≅ 1.44%.


The results are presented in FIG. 3, which illustrates reconstruction and segmentation of synthetic images depicting the flow (from left to right) in a converging channel. FIGS. 3a-3b and 3d-3e show the horizontal, ux, and vertical, uy, velocities respectively. FIGS. 3a and 3b show positive flow, while the upper stripe in FIGS. 3d and 3e has the opposite sign (negative) to the lower stripe (which is positive). FIGS. 3c and 3f show the discrepancy between the noisy velocity data and the reconstruction (a measure of the noise and artefacts which have been filtered out by the procedure). FIG. 3g depicts the reconstructed boundary ∂Ω and the ground truth boundary. In this view the reconstructed boundary and the ground truth boundary lie almost entirely on top of one another, with small deviations towards the left edge of the Figure. FIG. 3g also shows a 2σ confidence region computed from the approximated posterior covariance C̃ ≡ H̃ζ̇Cϕ±, which appears as a slight broadening of the reconstructed boundary in this view. FIG. 3h shows the error as a function of iteration number. Velocity slices are drawn for ten equidistant cross-sections (labelled with the letters A to J) for both the reconstructed image (FIG. 3i) and the ground truth (FIG. 3j).


It can be seen that the inverse Navier-Stokes problem performs very well in filtering the noise (u*−Su) (see in particular FIGS. 3c and 3f), providing effectively noiseless images for each component of the velocity as shown in FIGS. 3b and 3e. As expected, the discrepancies σu*−1 (u*−Su) shown in FIGS. 3c and 3f consist mainly of Gaussian white noise, except at the corners of the outlet (FIG. 3f), where there is a weak correlation. For a more detailed presentation of the denoising effect, FIGS. 3i and 3j plot slices of the reconstructed velocity and the ground truth velocity respectively. The light grey background lines represent the noisy input data, while the darker grey foreground lines represent the reconstruction.


Having obtained the reconstructed velocity u, the wall shear rate γw is computable on the reconstructed boundary ∂Ω, and is compared with the ground truth γw in FIG. 5a. Using the upper ∂Ω+ and lower ∂Ω− limits of the 2σ confidence region for ∂Ω (as in FIG. 3g), a confidence region for γw can be estimated, although this has to be interpreted carefully. Note that, for example, ∂Ω+ and ∂Ω− can be smoother than the mean ∂Ω, and, therefore, γw for ∂Ω may be found outside this confidence region. A better estimate of the confidence region could be obtained by sampling the posterior distribution of ∂Ω in order to solve a Navier-Stokes problem for each sample ∂Ωk and find the distribution of γw. Since the latter approach would be computationally intensive, only the estimate is provided here, which requires the solution of only two Navier-Stokes problems.


Example 2: Synthetic Data for 2D Flow in a Simulated Blood Vessel

Next, the algorithm is tested in a channel that resembles the cross-section of a small abdominal aortic aneurysm, with Dmax/D ≅ 1.5, where Dmax is the maximum diameter at the midsection. Synthetic images are generated for u* as in Example 1, again for SNRx=SNRy=3, but now for Re=153. The ground truth domain Ω has horizontal symmetry, but the inlet velocity profile deliberately breaks this symmetry. The inverse problem is the same as that in Example 1 but with different input parameters (see Table 1). The initial guess Ω0 is a rectangular domain with height equal to 0.85D, centred in the image domain. For gi0, a skewed parabolic velocity profile is used with a peak velocity of approximately 2U that fits the inlet of Ω0. The algorithm manages to reconstruct and segment the noisy flow images in thirty-nine iterations, with a total reconstruction error ℰ ≅ 2.87%.













TABLE 1—Input parameters for the inverse 2D Navier-Stokes problem with synthetic data.

                                 Image dimension   Model dimension   σu*x/U        σu*y/U
    Converging channel (2D)      192 × 192         200 × 200         3.97 × 10−1   8.97 × 10−3
    Simulated blood vessel (2D)  192 × 192         200 × 200         2.80 × 10−1   5.26 × 10−3

    Regularisation               σϕ±/D   σgi/U   σv/UD   Reϕ±    Reζ     ℓ/h
    Converging channel (2D)      1.0     2.0     —       0.025   0.025   3
    Simulated blood vessel (2D)  1.0     2.0     —       0.1     0.1     3


The results are presented in FIG. 4, which illustrates reconstruction and segmentation of synthetic images depicting the flow (from left to right) in a simulated blood vessel as in Example 2. FIGS. 4a, 4b and 4d, 4e show the horizontal, ux, and vertical, uy, velocities respectively. Once more the horizontal velocity field is positive, while the contiguous central portion of the vertical field is the opposite sign (negative) to the upper left and lower right portions (which are positive).


The discrepancy can be seen to consist mainly of Gaussian white noise (FIGS. 4c and 4f). Again, some correlations are visible in the discrepancy of the y-velocity component at the upper inlet corner and the upper boundary of the simulated blood vessel. FIG. 4g depicts the reconstructed boundary ∂Ω and the ground truth boundary. In this view the reconstructed boundary and the ground truth boundary lie almost entirely on top of one another. FIG. 4g also shows a 2σ confidence region computed from the approximated posterior covariance C̃ ≡ H̃ζ̇Cϕ±, which provides a reasonably wide envelope around ∂Ω towards the centre of the lower part of the boundary. The latter correlations (FIG. 4f) can be explained by the associated uncertainty in the predicted shape ∂Ω (FIG. 4g), which is well estimated for the upper boundary but slightly underestimated for the upper inlet corner. It is interesting to note that the upward-skewed velocity profile at the inlet creates a region of low velocity magnitude on the lower boundary. The velocity profiles in this region produce low wall shear stresses, as seen in FIGS. 4i, 4j and 5b. These conditions are particularly challenging when one tries to infer the true boundary ∂Ω, because the local SNR is low (SNR≪1), meaning that there is considerable information loss there. Despite the above difficulties, the algorithm manages to approximate the posterior distribution of ∂Ω well, and successfully predicts extra uncertainty in this region (FIG. 4g).


As before, FIG. 4h shows the error as a function of iteration number. Velocity slices are drawn for ten equidistant cross-sections (labelled with the letters A to J) for both the reconstructed image (FIG. 4i) and the ground truth (FIG. 4j). FIGS. 4i and 4j plot slices of the reconstructed velocity and the ground truth velocity respectively. The light grey background lines represent the noisy input data, while the darker grey foreground lines represent the reconstruction.


Using the reconstructions u and ∂Ω, the wall shear rate is computed and compared with the ground truth in FIG. 5b. It can be observed that the reconstructed solution approximates the ground truth well, even for very low signal-to-noise ratios (SNR=3). Note that the waviness of the ground truth γw is due to the relatively poor resolution of the level set function that was intentionally used to implicitly define this domain.



FIG. 5 compares the derived wall shear rate γw = τ·∂nu, where τ is the unit tangent vector of ∂Ω, for the two test cases with synthetic data: (a) the converging channel (Example 1); and (b) the simulated blood vessel (Example 2). The wall shear stress is found by multiplying the wall shear rate by the viscosity. The reconstructed wall shear rate is calculated on the reconstructed boundary ∂Ω and for the reconstructed velocity u, while the ground truth is calculated on the ground truth boundary and velocity. The light grey region is bounded by the two wall shear rate distributions calculated on the upper (∂Ω+) and lower (∂Ω−) limits of the 2σ confidence region of ∂Ω. Note that the reconstructed solution can sometimes be found outside the light grey region because the reconstructed shape ∂Ω may be less regular than ∂Ω+ or ∂Ω−.


Magnetic Resonance Velocimetry Experimental Set Up

The flow through a converging nozzle was measured using magnetic resonance velocimetry (MRV). The nozzle converges from an inner diameter of 25 mm to an inner diameter of 13 mm over a length of 40 mm (FIG. 6b). On either side of the converging section, the entrance-to-exit length equals 10 times the local diameter (FIG. 6b) in order to ensure the absence of entrance/exit effects. Velocity images were acquired for a Reynolds number of 162 (defined at the nozzle outlet). A 40 wt % glycerol-in-water solution was used as the working fluid in order to increase the viscosity and minimize the effect of thermal convection in the resulting velocity field due to the temperature difference between the magnet bore and the working fluid. The nozzle is made of polyoxymethylene to minimize magnetic susceptibility differences between the nozzle wall and the working fluid.



FIG. 6a depicts the schematic of the flow loop of the MRV experiment. A 20 L holding tank 1 stores a reservoir of the water/glycerol solution. Clamp valves 4 are disposed about the flow circuit to control the flow. To pump the water/glycerol solution a peristaltic pump 2 (Watson Marlow 505S (Watson Marlow, Falmouth UK)) was used with a 2 L dampening vessel 3 at its outlet to dampen flow oscillations introduced by the peristaltic pump 2. To make the flow uniform, porous polyethylene distributor plates 5 were installed (SPC technologies, Fakenham UK) at the entrance and the exit of the nozzle.


The MRV system comprises radiofrequency probe 6 and a 4.7 T superconducting magnet 8. Within the bore of the magnet 8 a converging nozzle 7 (FIG. 6b provides a detailed sketch of the nozzle 7) is located and supplied with the water/glycerol solution, which flows through the nozzle. The MRV system images the flow through the nozzle 7 as described elsewhere in this document, and the results of the imaging are processed in accordance with the methods described herein. A volumetric cylinder 9 is provided for flow measurements.


The velocity images were acquired on a Bruker Spectrospin DMX200 with a 4.7 T superconducting magnet, which is equipped with a gradient set providing magnetic field gradients of a maximum strength of 13.1 Gcm−1 in three orthogonal directions, and a birdcage radiofrequency coil tuned to a 1H frequency of 199.7 MHz with a diameter and a length of 6.3 cm. To acquire 2D velocity images, slice-selective spin-echo imaging was used combined with pulsed gradient spin-echo (PGSE) for motion encoding (see FIG. 6c). Each of the three orthogonal velocity components was measured in a 1 mm thick transverse slice through the converging section of the nozzle, which is centered along the nozzle centerline.


The flow images acquired have a field of view of 84.2×28.6 mm at 512×128 pixels, giving an in-plane resolution of 165×223 μm. For velocity measurements in the net flow direction, a gradient pulse duration, δ, of 0.3 to 0.5 ms and flow observation times, Δ, of 9 to 12 ms were used. For velocity measurements perpendicular to the net flow direction, an increased gradient pulse duration, δ, of 1.0 ms and an increased observation time, Δ, of 25 to 30 ms were used, due to the lower velocity magnitudes in this direction. The amplitude, g, of the flow encoding gradient pulses was set to ±3 Gcm−1 for the direction parallel to the net flow and to ±1.5 Gcm−1 for the direction perpendicular to the net flow, in order to maximize phase contrast whilst avoiding velocity aliasing by phase wrapping. To obtain an image for each velocity component, the phase difference between two images acquired with flow encoding gradients having equal magnitude g but opposite signs was taken. To remove any phase shift contributions that are not caused by the flow, the measured phase shift of each voxel was corrected by subtracting the phase shift measured under zero-flow conditions. The gradient stabilization time used is 1 ms and the signal was acquired with a sweep width of 100 kHz. Hard 90° excitation pulses with a duration of 85 μs and a 512 μs Gaussian-shaped soft 180° pulse were used for slice selection and spin-echo refocusing. FIG. 6c shows the spin-echo sequence in detail, including slice selective refocusing and flow encoding.


It was found that the T1 relaxation time of the glycerol solution was 702 ms, as measured by an inversion recovery pulse sequence. To allow for magnetization recovery between the acquisitions, a repetition time of 1.0 s was used. To eliminate unwanted coherences and common signal artefacts, such as DC offset, a four step phase cycle was used.


To be consistent with the standard definition used in MRI/MRV, the SNR of each MRV image is defined using (3.1), but with μx replaced by the mean signal intensity (images of the 1H spin density) over the nozzle domain (μI), and σu*x replaced by the standard deviation of the Rayleigh distributed noise in a region with no signal (σI). The standard deviation for the phase is therefore σφ=1/SNR. The MRV images are acquired by taking the sum/difference of four phase images, and then multiplying by the constant factor 1/(2γgδΔ), where γ is the gyromagnetic ratio of 1H (there is a linear relation between the image phase and the velocity).


The error in the MRV measured velocity is therefore σu = σφ/(γgδΔ). To acquire high SNR images (FIG. 7), 32 scans were averaged, resulting in a total acquisition time of 137 minutes per velocity image (˜4.6 hours for both velocity components). To evaluate the denoising capability of the algorithm, poor SNR images were also acquired by averaging only 4 scans (the minimum requirement for a full phase cycle) and decreasing the repetition time to 300 ms, resulting in a total acquisition time of 5.1 minutes per velocity image (10.2 minutes for both velocity components).
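As a worked illustration of the noise relations quoted above (σφ = 1/SNR and σu = σφ/(γgδΔ)), the short sketch below evaluates the velocity noise for representative pulse parameters taken from the ranges given in the text. The numerical values are illustrative assumptions of this example, not the exact settings of any particular acquisition.

```python
import numpy as np

# Illustrative pulse parameters, within the ranges quoted in the text
gamma = 2 * np.pi * 42.58e6   # 1H gyromagnetic ratio [rad s^-1 T^-1]
g     = 3e-4 / 1e-2           # 3 G/cm expressed in T/m
delta = 0.4e-3                # gradient pulse duration [s]
Delta = 10e-3                 # flow observation time [s]
snr   = 32.0                  # example signal-to-noise ratio of the magnitude image

sigma_phi = 1.0 / snr                                 # phase noise [rad]
sigma_u   = sigma_phi / (gamma * g * delta * Delta)   # velocity noise [m/s]
print(f"sigma_phi = {sigma_phi:.4f} rad, sigma_u = {sigma_u * 100:.3f} cm/s")
```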


To verify the quantitative nature of the MRV experiment the volumetric flow rates calculated from the MRV images (using 2D slice-selective velocity imaging in planes normal to the direction of net flow) were compared with the volumetric flow rates measured from the pump outlet. The results agree with an average error of ±1.8%.


Example 3: Magnetic Resonance Velocimetry Data in a Converging Nozzle

The algorithm may also be used to reconstruct and segment the low SNR images (u*) that were acquired during the MRV experiment described above, and these are then compared with the high SNR images of the same flow (u in FIG. 7). The flow is axisymmetric with zero swirl.


The subscript ‘x’ is replaced by ‘z’, which denotes the axial component of velocity, and the subscript ‘y’ is replaced by ‘r’, which denotes the radial component of velocity. The low SNR images (SNRz=6.7, SNRr=5.8) required a total scanning time of 5.1 minutes per velocity image (axial and radial components), and the high SNR images (SNRz=44.2, SNRr=34.4) required a total scanning time of 137 minutes per velocity image. FIG. 7 shows the high SNR images in the r and z directions (average of 32 scans, with SNRz≅44 and SNRr≅34) for the flow through the converging nozzle (units in cm/s).


Since the signal intensity of an MRV experiment corresponds to the 1H spin density, the spin density image is segmented using a thresholding algorithm in order to obtain a mask ψ, such that ψ=1 inside Ω (the nozzle) and ψ=0 outside Ω. ψ is considered to be the prior information for the geometry of the nozzle, which also serves as an initial guess for Ω (Ω0). For gi0 a parabolic velocity profile is taken, having a peak velocity of 0.6 U, where U≅5 cm/s is the characteristic velocity for this problem.
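Purely as an illustration of how such a mask ψ might be produced, the sketch below thresholds a spin density image; the choice of threshold (here a crude two-class estimate) is an assumption of this example, since the text does not prescribe a particular thresholding algorithm.

```python
import numpy as np

def spin_density_mask(rho, threshold=None):
    """Binary mask psi from a 1H spin density image: psi = 1 inside Omega, 0 outside.

    rho       : 2D array of signal magnitudes
    threshold : intensity cut-off; if None, the midpoint between rough background
                and foreground levels is used
    """
    if threshold is None:
        lo, hi = np.percentile(rho, [10, 90])   # rough background / foreground levels
        threshold = 0.5 * (lo + hi)
    return (rho >= threshold).astype(np.uint8)

# psi then serves both as prior information for the geometry of the nozzle and as
# the initial guess Omega_0 for the segmentation.
```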


In this case the kinematic viscosity is treated as an unknown, with a prior distribution N(v, (0.1v)2), where v=4×10−6 m2/s. Note that the axis of the nozzle is not precisely known beforehand, and, since it is only necessary to solve an axisymmetric Navier-Stokes problem on the z-r half-plane, an unknown variable is also introduced for the vertical position of the axis.


The axisymmetric Navier-Stokes problem is:









\[
u \cdot \nabla u \;-\; \nu\,\Delta u \;+\; \nabla p \;+\; f \;=\; 0, \qquad \nabla \cdot u = 0
\]

where:

\[
u = u_z\,\hat{z} + u_r\,\hat{r}, \qquad
\nabla u = (\partial_z u,\; \partial_r u), \qquad
\Delta u = \partial_z^2 u + \partial_r^2 u + \tfrac{1}{r}\,\partial_r u,
\]

\[
\nabla \cdot u = \partial_z u_z + \partial_r u_r + \tfrac{u_r}{r}, \qquad
f = \Big(0,\; \tfrac{\nu\,u_r}{r^2}\Big)
\]






for unit vectors ẑ and r̂ in the axial and radial directions respectively. The nonlinear term u·∇u retains the same form as in the Cartesian frame.
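As a minimal finite-difference sketch of the two axisymmetric operators written out above (the divergence with its u_r/r term, and the extra radial forcing f), consider the following; the grid layout, spacings and the exclusion of the r = 0 axis are assumptions of this illustration rather than details taken from the reconstruction algorithm itself.

```python
import numpy as np

def axisymmetric_divergence(u_z, u_r, dz, dr, r):
    """div u = d(u_z)/dz + d(u_r)/dr + u_r / r on a regular (z, r) grid.

    u_z, u_r : 2D arrays indexed as [r_index, z_index]
    dz, dr   : grid spacings
    r        : 1D array of radial coordinates (taken strictly positive here,
               so that the axis singularity is avoided)
    """
    duz_dz = np.gradient(u_z, dz, axis=1)
    dur_dr = np.gradient(u_r, dr, axis=0)
    return duz_dz + dur_dr + u_r / r[:, None]

def axisymmetric_forcing(u_r, nu, r):
    """Extra radial term f_r = nu * u_r / r**2 appearing in the momentum equation."""
    return nu * u_r / r[:, None] ** 2
```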


In order to compare the axisymmetric modelled velocity field with the MRV images, two new operators are introduced: i) the reflection operator 𝒮: ℝ+×ℝ→ℝ×ℝ; and ii) a rigid transformation 𝒯: ℝ2→ℝ2. The reconstruction error is then expressed by:









\[
\mathcal{E}(u) \;:=\; \tfrac{1}{2}\,\big\lVert u^* - \mathcal{S}\mathcal{T}\mathcal{R}\,u \big\rVert_{C_{u^*}}^{2}
\;:=\; \tfrac{1}{2} \int_I \big(u^* - \mathcal{S}\mathcal{T}\mathcal{R}\,u\big)\; C_{u^*}^{-1}\, \big(u^* - \mathcal{S}\mathcal{T}\mathcal{R}\,u\big)\; dx\,dy
\]







An unknown variable is introduced for the vertical position of the axisymmetry axis by letting 𝒯u = u(x, y+y0), for y0 a constant. Then, the generalized gradient for y0 is:











\[
\big(D_{y_0}\mathcal{E},\; \partial y_0\big) \;=\; \Big( -\int_I C_{u^*}^{-1}\big(u^* - \mathcal{S}\mathcal{T}\mathcal{R}\,u\big)\,\big(\mathcal{S}\mathcal{T}\mathcal{R}\,\partial_y u\big),\;\; \partial y_0 \Big)
\]











and y0 is treated in the same way as the inverse Navier-Stokes problem unknowns {x}.
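The rigid transformation 𝒯u = u(x, y+y0) can be applied in practice by interpolating the image along y, and the gradient with respect to y0 can be checked against a finite-difference derivative of the misfit. The sketch below assumes a diagonal covariance and uses linear interpolation from scipy; it illustrates the treatment of y0 only, not the full generalized-gradient machinery.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def apply_axis_shift(u, y0, dy):
    """T u = u(x, y + y0): resample the image by y0 (in physical units) along axis 0."""
    return nd_shift(u, shift=(-y0 / dy, 0.0), order=1, mode="nearest")

def misfit(u_star, u_model, y0, dy, sigma):
    """0.5 * || u* - T u ||^2 with a diagonal covariance sigma^2 (illustration only)."""
    r = u_star - apply_axis_shift(u_model, y0, dy)
    return 0.5 * np.sum((r / sigma) ** 2)

def dmisfit_dy0(u_star, u_model, y0, dy, sigma, eps=1e-4):
    """Central finite-difference check of the generalized gradient with respect to y0."""
    return (misfit(u_star, u_model, y0 + eps, dy, sigma)
            - misfit(u_star, u_model, y0 - eps, dy, sigma)) / (2 * eps)
```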


Using the input parameters of Table 2, the algorithm manages to reconstruct the noisy velocity image and reduce segmentation errors in just six iterations, with a total reconstruction error E≅5.94%.


The results for the low SNR MRV images are presented in FIG. 8, showing the axial component uz* in FIG. 8a and the radial component ur* in FIG. 8d, the axial and radial reconstructed velocity fields in FIGS. 8b and 8e respectively, and the axial and radial discrepancy terms σu*z−1(uz*−Suz) and σu*r−1(ur*−Sur) in FIGS. 8c and 8f.


As with the other similar figures, FIG. 8g shows the velocity magnitude |u| and shape ∂Ω (with a 2σ confidence interval). FIG. 8h shows the reconstruction error history, and FIGS. 8i and 8j show respectively the low and high SNR data (light grey background) and the reconstruction (dark grey foreground). Similar comments apply as for FIG. 3, but noting that FIG. 8 shows the magnetic resonance velocimetry images of the axisymmetric flow (from left to right) in a converging nozzle. The reconstructed flow u is axisymmetric by construction, therefore uz is depicted with an even reflection and ur with an odd reflection, so that they can be compared with the MRV images.


It can be seen that the algorithm manages to filter out the noise, the outliers, and the acquisition artefacts of the low SNR MRV images depicting the axial uz* (FIG. 8a) and the radial ur* (FIG. 8d) component of velocity. A notable difference between these real MRV images and the synthetic MRV images in Examples 1 and 2, is that the real MRV images display artefacts and contain outliers. The MRV images have not been pre-processed for example by removing outliers. The estimated posterior uncertainty of ∂Ω is depicted in FIG. 8g, in which it can be observed that regions with gaps in the data coincide with regions of higher uncertainty.


Although the kinematic viscosity v is treated as an unknown parameter, the posterior distribution of v remains effectively unchanged. More precisely, a kinematic viscosity of v=3.995×10−6 m2/s is inferred, with a posterior variance of (0.1005v)2. This is because a Bayesian approach is used in approaching this inverse problem, and the prior information for v is already rich enough. Technically, the reconstruction functional is insensitive to small changes of v (or 1/Re), and, as a result, the prior term in the gradient of v (equation (2.33)) dominates; i.e. the model term is not informative. Physically, it is not possible to infer v (with reasonable certainty) for this particular flow without additional information on pressure.



FIG. 9 shows the reconstructed wall shear rate γw, derived from MRV images of axisymmetric flow in a converging nozzle, computed for the reconstructed velocity field u on the segmented shape ∂Ω, and compares it with the ground truth wall shear rate γw, computed for the high SNR velocity field u (see FIG. 7) on the high SNR shape ∂Ω (1H spin density). It is seen that the ground truth wall shear rate is particularly noisy, as MRV suffers from low resolution and partial volume effects near the boundaries ∂Ω. It is clear that the lighter grey background (high SNR) performs very poorly in extracting this parameter, despite being averaged over many datasets. The darker grey foreground line (with pale grey confidence interval) performs much better, because of the physical realism requirements imposed by the procedure. The wall shear rate may be of particular use in designing fluid transport components (in fields ranging from medical devices, e.g. stents, to petrochemical or other chemical engineering contexts), or in determining whether to proceed with a repair operation (e.g. whether surgery is necessary or replacement of a faulty component should occur), since the shear rate is an important factor in determining whether a boundary will rupture under given flow conditions. The confidence interval is again useful here, in allowing the user to make a determination based on the data, secure in the knowledge that they have the appropriate amount of trust in their determination.


Certainly, it is possible to smooth the boundary ∂Ω using conventional image processing algorithms. However, the velocity field u will not be consistent with the new smoothed boundary (the no-slip boundary condition will not be satisfied). The method proposed here for the reconstruction and segmentation of MRV images tackles exactly this problem: it infers the most likely shape of the boundary (∂Ω) from the velocity field itself, without requiring an additional experiment (e.g. CT, MRA) or manual segmentation using other software.


Furthermore, in this Bayesian setting it is possible to use the 1H spin density to introduce a priori knowledge of ∂Ω in the form of a prior, which would prove useful in areas of low velocity magnitudes where the velocity field itself does not provide enough information in order to segment the boundaries. As a result, the algorithm performs very well in estimating the posterior distribution of wall shear rate, a quantity which depends both on the velocity field and the boundary shape, and which is hard to measure otherwise.













TABLE 2

              Image dimension   Model dimension   σu*x/U          σu*y/U
Nozzle (3D)   255 × 128         300 × 130         1.4168 × 10−1   3.0679 × 10−2

              Regularisation
              σϕ±/D    σgi/U    σv/UD         Reϕ±     Reζ      h
Nozzle (3D)   0.25     0.5      6.2 × 10−4    0.025    0.025    3

Input Parameters for the Inverse Problem (Nozzle, 3D)

Regularization is crucial in order to successfully reconstruct the velocity field and segment the geometry of the nozzle in the presence of noise, artefacts, and outliers. Regularization comes from the Navier-Stokes problems (primal and adjoint), i.e. the M term, and from the regularization of the model parameters, i.e. the R term. For example, overfitting of the shape ∂Ω is avoided by choosing the Reynolds numbers for the geometric flow to be Reϕ±=Reζ=0.025. Increasing these Reynolds numbers to around 1.0, it is seen that the assimilated boundary becomes more susceptible to noise in the image. From numerical experiments it has been observed that typical successful values for the Reynolds numbers Reϕ±, Reζ lie in the interval (0.01, 0.1) for low SNR images (SNR<10), in (0.1, 1.0) for higher SNR images (SNR>10), and above 1.0 for geometries with regions of high curvature.
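The guidance in the preceding paragraph can be summarised in a small helper, shown here only as an illustration of the reported ranges; the concrete values returned are mid-range assumptions of this sketch, not values prescribed by the method.

```python
def geometric_flow_reynolds(snr, high_curvature=False):
    """Heuristic choice of the geometric-flow Reynolds numbers Re_phi, Re_zeta,
    following the ranges reported for successful reconstructions."""
    if high_curvature:
        return 1.5     # above 1.0 for geometries with regions of high curvature
    if snr < 10:
        return 0.05    # low SNR images: interval (0.01, 0.1)
    return 0.3         # higher SNR images: interval (0.1, 1.0)
```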


In summary, the procedure described above may be thought of as a series of process steps, as captured in the flow chart 100 depicted in FIG. 10. The process starts at step 101, in which MRV data encoding information relating to a fluid velocity field, u*, are received. A subset of the MRV data forms a region, Ω, enclosed by a boundary, ∂Ω. The boundary includes at least a physical boundary portion, Γ. As an example, the boundary optionally further includes an inlet portion, Γi, and/or an outlet portion, Γo. The region Ω is a subset of an imaging region, I, over which the MRV data encodes information.


Next, at step 102, initial values for a set of unknown parameters, {x}, are received. The set of unknown parameters includes at least a shape of the boundary, ∂Ω, within which the fluid is located; and a kinematic viscosity of the fluid, v. Optionally, the initial values may also include an inlet boundary condition for the fluid, gi, where an inlet portion exists; and/or an outlet boundary condition for the fluid, go, where an outlet portion exists.


At step 103 a model fluid velocity field, uo, is calculated, using the initial values as inputs to a Navier-Stokes problem. This problem is then solved for the velocity field within and at the boundary.


In order to optimise the input parameters, an error functional, J, is constructed in step 104. This error functional has the form:






J = E + R + M


in which E is a penalty term which penalises deviations in uo from the received MRV data; R is a penalty term which penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; and M is a penalty term which penalises deviations in uo from the Navier-Stokes problem.


At step 105, a generalised gradient of J with respect to each of the unknown parameters in the set {x} is determined. These generalised gradients are then used in step 106 to determine a direction in which to perturb each parameter, by evaluating each generalised gradient at the initial value of its respective parameter. The generalised gradients allow the determination of a direction of perturbation of that parameter which will lead to a reduction in the magnitude of J. The initial value for each parameter is then updated to an improved parameter by taking a step in the direction determined by the evaluation of the generalised gradient.


From this point, an improved velocity field can be extracted. At step 107, the MRV data is reconstructed by outputting the improved values from step 106 and calculating the reconstructed velocity, u, across at least the region Ω, based on the improved values. As noted above, the procedure may be iterative, in the sense that some of the steps may be repeated multiple times to converge on the set of parameter values which minimises J as closely as possible.
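The iterative structure of steps 102 to 107 is sketched below in outline form. The helper callables solve_navier_stokes, error_functional and generalized_gradients are hypothetical placeholders for the solvers described in the text; the sketch shows only the gradient-descent organisation of the loop, not the numerical schemes themselves.

```python
def reconstruct(u_star, x0, solve_navier_stokes, error_functional,
                generalized_gradients, step=0.1, tol=1e-3, max_iter=50):
    """Skeleton of the iterative reconstruction (steps 102-107).

    u_star : measured velocity field
    x0     : dict of initial unknown parameters, e.g. {"boundary": ..., "nu": ..., "g_i": ...}
    """
    x = dict(x0)
    history = []
    for it in range(max_iter):
        u_model = solve_navier_stokes(x)                       # step 103: primal N-S solve
        history.append(error_functional(u_model, u_star, x))   # step 104: J = E + R + M
        grads = generalized_gradients(u_model, u_star, x)      # step 105: adjoint-based gradients
        for name, g in grads.items():                          # step 106: perturb each unknown
            x[name] = x[name] - step * g                       # descend along the gradient
        if it > 0 and abs(history[-2] - history[-1]) < tol * abs(history[-2]):
            break                                              # converged
    u_rec = solve_navier_stokes(x)                             # step 107: reconstructed velocity
    return u_rec, x, history
```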


Reconstruction of Sparse k-Space Magnetic Resonance Velocimetry Data


Turning now to an extension of the methods set out above, the MRV data may instead be supplied as sparse MRV data. It has been realised by the inventors that the Navier-Stokes equations contain much more information than purely mathematical variational methods can supply. This permits reconstruction and/or segmentation of even sparser signals.


MRV data may be supplied in a sparse format for many reasons, but the primary motivation for developing an algorithm to handle sparse MRV data is to allow a reduction in MRV signal acquisition time and thereby speed up the overall time to acquire high quality, low noise images.


The discussion above assumes that the MRV samples have already been processed to extract the flow (i.e. velocity field) information. This is typically performed by acquiring signals, s, in frequency space (sometimes referred to as k-space), and taking the inverse Fourier transform of s, F−1 s. The magnitude (|F−1 s|) and phase (arg(F−1 s)) of the result are then used to extract information on the density and velocity of fluids in the imaging region.


To extract velocity, at least 4 signals (4 individual scans) are required, each signal acquired with a specific magnetic gradient pulse. The velocity image along a direction is then recovered by combining 4 phase images, i.e. arg(F−1 si) for i=1, 2, 3, 4.


Consequently, to measure a 2D velocity field at least 8 scans are needed, and for a 3D velocity field at least 12 scans are required. When k-space is fully sampled there is a unique correspondence between the signal s and the measured velocity field u*, and the methodology presented above is then generally preferable because it is simpler. However, when k-space is sparsely sampled there is not a unique correspondence between s and u*, because s is not measured at every point of the frequency space (there are gaps in k-space). One option is to fill these gaps with zeros, and then take the inverse Fourier transform, but this introduces artefacts and produces images of low quality.
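The zero-filling baseline just described can be written in a few lines; the sketch below assumes a centred 2D k-space array and a boolean sampling mask, which are assumptions of this illustration rather than requirements of the method.

```python
import numpy as np

def zero_filled_reconstruction(s_sparse, mask):
    """Naive reconstruction of a sparsely sampled k-space signal.

    s_sparse : 2D complex array holding the measured k-space samples
    mask     : boolean array, True where k-space was actually sampled
    Missing samples are set to zero before the inverse FFT, which is exactly the
    zero-filling approach whose artefacts are discussed in the text.
    """
    s_filled = np.where(mask, s_sparse, 0.0)
    img = np.fft.ifft2(np.fft.ifftshift(s_filled))
    return np.abs(img), np.angle(img)   # magnitude and phase images
```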


A comparison between zero-filling and reconstruction is shown schematically in FIG. 11. As can be seen, rather than simply accepting the absence of information inherent in the missing k-space information, a reconstruction process similar to that described above is used to infer the missing k-space data. As will be seen below, where the reconstruction makes use of physical reality (via the Navier-Stokes equations) and statistical likelihood (via Bayesian inference), the reconstruction of missing k-space data is possible with a good degree of accuracy.


In order to obtain reconstructed images of similar quality to the method set out above, the functional to be minimised is changed to the following form:






J = E + R + M + N + C





in which the R and M terms have the same form as set out above, and the E term is a velocity regularisation term, which penalises non-compliance between the measured data uk* and the modelled data uk, and has the form:






\[
E = \tfrac{1}{2} \sum_{k=1}^{d} \big\lVert u_k^* - S u_k \big\rVert_{C_{u_k}}^{2}
\]







where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cuk is an appropriate covariance operator for generating a norm in a covariance-weighted L2 space (denoted ‖·‖C for a covariance operator C). The effect of the covariance operator is to weight this penalty term according to the Gaussian noise assumed in the data.
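For a diagonal covariance and with S taken as the identity (model and data sharing the same grid), the E term reduces to the weighted sum of squares sketched below; the general case in the text allows an arbitrary covariance operator and projection, so this is an illustration only.

```python
import numpy as np

def velocity_misfit(u_star, u_model, sigma):
    """E = 0.5 * sum_k || u_k* - S u_k ||^2_{C_uk} for C = diag(sigma^2) and S = identity.

    u_star, u_model : arrays of shape (d, ny, nx) holding the d velocity components
    sigma           : per-component noise standard deviations, shape (d,) or scalar
    """
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float).reshape(-1, 1, 1), u_star.shape)
    r = (u_star - u_model) / sigma
    return 0.5 * np.sum(r ** 2)
```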


Similarly, the N error term helps to ensure consistency between the k-space sampling and the modelled data, and has the form:






\[
N = \frac{1}{2 n \sigma^{2}} \sum_{j=1}^{4d} \big\lVert s_j - \mathcal{S}\,\mathcal{F}\big(\rho_j\, e^{i \varphi_j}\big) \big\rVert_{\mathbb{C}^{n}}^{2}
\]







where n is the number of samples (and is less than the dimension, m, of the frequency space in which samples are taken, which defines the “sparse sampling” condition, in other words n<<m), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur,







\(\lVert\,\cdot\,\rVert_{\mathbb{C}^{n}}^{2}\)






represents a Euclidean norm in the space of complex numbers, ℂ, corresponding to a multivariate normal distribution, and in which σ is the variance of that distribution, 𝒮 is a sparse sampling operator mapping reconstructed data back to the sparse raw data space, ℱ is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√(−1), and φj is the reconstructed phase of the jth data set.
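The N term can be evaluated directly once the sampling pattern is known. The sketch below assumes that the sampling operator 𝒮 is a binary mask and that the images are 2D, which are assumptions of this illustration; the definition in the text admits any sparse sampling operator.

```python
import numpy as np

def kspace_consistency(s_list, rho_list, phi_list, mask, sigma):
    """N = 1/(2 n sigma^2) * sum_j || s_j - S F(rho_j exp(i phi_j)) ||^2.

    s_list   : list of sparsely sampled k-space arrays (unsampled entries ignored)
    rho_list : list of reconstructed magnitude images
    phi_list : list of reconstructed phase images
    mask     : boolean array, True at sampled k-space locations (the operator S)
    sigma    : k-space noise standard deviation
    """
    n = int(mask.sum())
    total = 0.0
    for s, rho, phi in zip(s_list, rho_list, phi_list):
        pred = np.fft.fftshift(np.fft.fft2(rho * np.exp(1j * phi)))  # F(rho e^{i phi})
        resid = (s - pred)[mask]                                     # apply the sampling mask
        total += np.sum(np.abs(resid) ** 2)
    return total / (2.0 * n * sigma ** 2)
```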


Finally, the C regularisation term is a function of the two-level segmentation constants αj, βj of the Chan-Vese algorithm, corresponding to a given magnitude image, ρj; in other words






C = Cj(αj, βj, ρj).


In the above, uk* is the measured velocity in direction k, given from the phase information from each of 12 scans (in general, for 3D information), as follows:







\[
u_1^* = c_1\,(\varphi_1 - \varphi_2 - \varphi_3 + \varphi_4)
\]

\[
u_2^* = c_2\,(\varphi_5 - \varphi_6 - \varphi_7 + \varphi_8)
\]

\[
u_3^* = c_3\,(\varphi_9 - \varphi_{10} - \varphi_{11} + \varphi_{12})
\]





for known constants cj. Clearly, where only 1 or 2 dimensions are required, only the first line, or the first two lines, need be considered, and only 4 or 8 scans are required.
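The combination of four phase images into one measured velocity component, as written out above, is illustrated by the short sketch below (the calibration constant c is assumed known, as in the text).

```python
import numpy as np

def velocity_from_phases(phases, c):
    """u* = c * (phi_1 - phi_2 - phi_3 + phi_4) for one velocity direction.

    phases : sequence of four phase images (2D arrays, in radians)
    c      : known calibration constant for this direction
    """
    p1, p2, p3, p4 = phases
    return c * (p1 - p2 - p3 + p4)

# A full 3D velocity field combines the twelve phase images in groups of four,
# one group per direction; for 2D only the first eight phase images are needed.
```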


The two additional terms, N and C, introduce 4×4 additional unknown parameters to solve for each physical dimension being imaged: αj, βj, ρj, σj for each of the four scans (j) per dimension.


In further examples, prior information from the nuclear spin density (magnitude signal) is incorporated into the procedure to derive an improved boundary shape/location. To exploit information about ∂Ω from the nuclear spin density images, an energy-based segmentation functional, C, is used, which assumes that the image consists of two regions with approximately uniform magnitudes. The functional is given by:








\[
C(\rho,\, \phi^{\pm},\, \alpha,\, \beta) \;\doteq\; \frac{1}{8d} \sum_{j=1}^{4d} \frac{1}{\sigma_{\rho_j}^{2}}
\Big( \big\lVert (\rho_j - \alpha)\, \mathcal{S}\mathcal{H}(\phi^{\pm}) \big\rVert_{L^{2}(I_\omega)}^{2}
\;+\; \big\lVert (\rho_j - \beta)\,\big(\mathcal{S}\mathcal{H}(\phi^{\pm}) - 1\big) \big\rVert_{L^{2}(I_\omega)}^{2} \Big)
\]








where the mean magnitude inside Ω is α∈ℝ and the mean magnitude outside Ω is β∈ℝ. In addition, σρj2 is the standard deviation of noise in the magnitude image, and ℋ denotes the (modified) Heaviside function such that ℋ(t)=1 if t<0 and is equal to 0 otherwise. Taking into account the prior contribution, the combined functional is given by:








\[
C_\rho(\rho,\, \phi^{\pm},\, \alpha,\, \beta) \;\doteq\; C(\rho,\, \phi^{\pm},\, \alpha,\, \beta) \;+\; \tfrac{1}{2} \sum_{j=1}^{4d} \big\lVert \,\cdot\, \big\rVert_{C_{\rho_j}}^{2},
\qquad \text{in which } C_{\rho_j} = \sigma_{\rho_j}^{2}\,(C * \cdot)\,.
\]





Therefore, to reduce signal acquisition time further, more accurate spin density (magnitude) images can be obtained separately (without encoding velocity) and used as priors for ∂Ω(ϕ±). Similarly, (d−1)-dimensional scans can be obtained for the inlet velocity boundary condition and used as priors for gi. Conventional MRI or PC-MRI cannot measure go and v, but v is usually known with high certainty compared to the other parameters.
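The two-region assumption behind the segmentation functional can be illustrated with a sharp Heaviside, as in the sketch below; a practical implementation would use a smoothed Heaviside and the full level-set machinery, so this is a minimal example of the energy only.

```python
import numpy as np

def two_region_energy(rho, phi, alpha, beta, sigma_rho):
    """Two-region (Chan-Vese style) energy for one magnitude image rho, with a
    level set phi (phi < 0 inside Omega) and region means alpha (inside) and
    beta (outside)."""
    inside = (phi < 0).astype(float)        # modified Heaviside H(phi)
    outside = 1.0 - inside
    e_in = np.sum(((rho - alpha) ** 2) * inside)
    e_out = np.sum(((rho - beta) ** 2) * outside)
    return (e_in + e_out) / sigma_rho ** 2

def region_means(rho, phi):
    """Closed-form minimisers of the energy with respect to alpha and beta."""
    inside = phi < 0
    alpha = rho[inside].mean() if inside.any() else 0.0
    beta = rho[~inside].mean() if (~inside).any() else 0.0
    return alpha, beta
```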


An algorithm to capture the general methods set out herein can be written as follows:

    • Input: Sparse MRV signals, s, initial guesses for the unknowns ((ρ0)j, (φ0)j, Ω0, x0), regularization parameters controlling the form and weighting of the penalty terms.
    • begin
      • k←0
      • uk*←measured velocity from φ0
      • (ϕ±)k←signed distance field from ρ0
      • (u,p)k←Navier-Stokes problem for (ϕ±,x)k
      • while convergence criterion is not met do:
        (loop body depicted in the embedded image: the unknowns are updated using the generalised gradients of the functional, the Navier-Stokes problem is re-solved, and k is incremented)
      • (u°,p°)←(u,p)k
      • (Ω°,x°)←(ϕ±,x)k
    • Output: reconstruction (u°, p°) and inferred model parameters (Ω°, x°).





Example 4: Reconstruction of MRV Data which is Sparsely Sampled in k-Space


FIG. 12 shows an example reconstruction for a sparsely-sampled MRV signal for the flow through a converging nozzle (as above) using the above method. In FIG. 12, the axisymmetric flow is from left to right. Here the sparse sampling results in the number of samples, n, being equal to one tenth of the dimension of k-space, m, i.e. n=m/10. In other words, 90% of the information is missing from the k-space input.


In line with the above images (see e.g. FIG. 8 for a comparison of the same experiment derived from fully sampled k-space data), the axisymmetric flow need only be shown in 2D, i.e. axial, z, and radial, r, directions. FIGS. 12a and 12c show respectively the axial and radial reconstructions where zero-filling of the k-space data has been implemented. Here very clear artefacts and velocity aliasing may be seen due to phase wrapping. These manifest in each Figure as large areas of uniform apparent velocity, typically at the extrema of the velocity scale.


By contrast, the application of the full reconstruction method set out above results in FIGS. 12b and 12d, showing respectively the axial and radial reconstructions under the extension of the method to sparse k-space signals. A clear improvement is seen: not only are the artefacts and aliasing removed, but it is apparent that the sparse k-space information has not caused the determination of the boundary location to change greatly, nor the velocity fields to differ greatly from their fully sampled counterparts. The ground truth and inferred (plus confidence interval) boundary locations lie largely on top of one another towards the left end of the images, and deviate slightly towards the right hand end.


This illustrates the power in using a robust physical and statistical model to infer the parameters from noisy and incomplete input data, since non-physical features of the velocity images are removed, providing noiseless velocity fields that satisfy the fluid flow (Navier-Stokes) equations and the no-slip condition on the boundary (walls).


In summary, the procedure described above may be thought of as a series of process steps, as captured in the flow chart 200 depicted in FIG. 13. The process of reconstruction of magnetic resonance velocimetry starts at step 201, in which raw MRV data are received in the form of a series of incomplete frequency space data sets, sj, encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω. As above, the boundary includes at least a physical boundary portion, Γ, and optionally further includes an inlet portion, Γi, and/or an outlet portion, Γo. The region Ω is a subset of an imaging region, I, over which the MRV data encodes information.


At step 202, initial values for a set of unknown parameters, {x}, are received, which include at least: a shape of the boundary, ∂Ω, within which the fluid is located, and a kinematic viscosity of the fluid, v. Optionally, an inlet boundary condition for the fluid, gi, is also supplied where an inlet portion exists, and/or an outlet boundary condition for the fluid, go, where an outlet portion exists. Initial values for the density and phase of the MRV data, and for the two parameters of the two-level Chan-Vese segmentation algorithm, may also be supplied or extracted from the MRV data for use in the procedure at this stage.


At step 203, a model fluid velocity field, uo, is calculated using the initial values as inputs to a Navier-Stokes problem, which is then solved for the velocity field within and at the boundary. The measured velocity field, u*, is also calculated at this step, by performing an inverse Fourier transform on each sj (F−1 sj) and deriving u* from the arguments of F−1 sj.


At step 204, an error functional, J, is constructed, of the form:






J = E + R + M + N + C





in which N is a penalty term which penalises deviations in uo from the received raw MRV data; R is a penalty term which penalises outcomes for uo which arise from statistically unlikely values for the parameters in the set of unknown parameters, {x}; M is a penalty term which penalises deviations in uo from the Navier-Stokes problem; C is a segmentation term for partitioning the raw data; and E is a penalty term which penalises deviations in uo from the calculated measured velocity field u*.



E may have the form:






\[
E = \tfrac{1}{2} \sum_{k=1}^{d} \big\lVert u_k^* - S u_k \big\rVert_{C_{u_k}}^{2}
\]







where S is the projection from model space to data space, k is an index indicating the physical direction of velocity which is being regularised, d is the number of physical dimensions in which reconstruction is to occur, and Cuk is an appropriate covariance operator for generating a norm in a covariance-weighted L2 space (denoted ‖·‖C for a covariance operator C). The effect of the covariance operator is to weight this penalty term according to the Gaussian noise assumed in the data.


Similarly, the N penalty term may have the form:






\[
N = \frac{1}{2 n \sigma^{2}} \sum_{j=1}^{4d} \big\lVert s_j - \mathcal{S}\,\mathcal{F}\big(\rho_j\, e^{i \varphi_j}\big) \big\rVert_{\mathbb{C}^{n}}^{2}
\]






where n is the number of samples (and is less than the dimension of the frequency space in which samples are taken, which defines the “sparse sampling” condition), j is an index representing each data set, d is the number of physical dimensions in which reconstruction is to occur,







\(\lVert\,\cdot\,\rVert_{\mathbb{C}^{n}}^{2}\)






represents a Euclidean norm in the space of complex numbers corresponding to a multivariate normal distribution, and in which σ is the variance of that distribution, 𝒮 is a sparse sampling operator mapping reconstructed data back to the sparse raw data space, ℱ is the Fourier transform operator, ρj is the reconstructed magnitude of the jth data set, e is Euler's number, i=√(−1), and φj is the reconstructed phase of the jth data set. Step 205 includes determining a generalised gradient of J with respect to each of the unknown parameters in the set {x}.


At step 206, for each of the unknown parameters in the set {x}, using the generalised gradient for that parameter, evaluated at the initial value of all of the parameters, a direction in which perturbing that parameter reduces the magnitude of J is determined. From this, improved values for each of the unknown parameters in the set {x} can be determined.


Finally at step 207, the MRV data are reconstructed by outputting the improved values from step 206 and calculating the reconstructed velocity, u, across at least the region Ω and reconstructing a phase and magnitude of the received raw data from the reconstructed velocity field, u.


The sampling density can be further decreased (for fixed reconstruction error) if the a priori information on {x} becomes more accurate. The present application also discloses the finding that, when the Navier-Stokes problem is an accurate and descriptive model, only the boundary ∂Ω, the boundary conditions gi, go, and the kinematic viscosity are needed in order to find the velocity field. The interplay between the full flow field and these parameters is illustrated in FIG. 14. It is clear in FIG. 14, for example, that the full flow field across a 2D region (left part of the Figure) is described by an underlying 1D structure encoded in the parameters {x}. A transform Z is used to convert between the two representations, and it can be seen from the methods described above that Z corresponds to solving a Navier-Stokes problem (to decompress {x} into a full flow field) or to solving an inverse Navier-Stokes problem (to compress the measured or modelled flow field into the parameters {x}). That is to say that images of nd voxels depicting the d velocity components can be compressed/decompressed by solving an inverse/forward Navier-Stokes problem.


This approach to physics-informed compressed sensing extends the standard notion of sparsity used in conventional compressed sensing methods to a more general notion of structure, which is dictated by the N-S problem. Instead of enforcing sparsity during the nonlinear optimization process, a sparse (hidden) structure of the velocity field is recovered by enforcing the Navier-Stokes problem as a constraint. This is what FIG. 14 illustrates, showing that the velocity field exhibits an underlying low-dimensional structure (specifically, a (d−1)-dimensional structure encodes a d-dimensional flow field), which is captured in the unknown parameters {x}. Based on this, a velocity field can be compressed/decompressed by solving an inverse/forward Navier-Stokes problem. No other work comes close to this. Typical prior methods, for example, take a conventional sparsity-promoting and physics-informed velocity regularization approach (see e.g. I. Bright, G. Lin, and J. N. Kutz, "Compressive sensing based machine learning strategy for characterizing the flow around a cylinder with limited pressure measurements," Physics of Fluids, vol. 25, no. 12, 2013). These approaches rely on a pre-existing database (library) containing the dominant eigenmodes of a Navier-Stokes problem that is defined in a pre-set geometry, with pre-set inlet boundary conditions, and so necessarily cannot adapt to unknown geometries in the manner in which the present application does.


This approach leads to the compression and decompression algorithms described herein and illustrated generally in FIGS. 15 and 16 respectively. The compression algorithm 300 of FIG. 15 begins at step 301 with a receiving step in which raw data encoding the information relating to a fluid velocity field is received. As noted elsewhere, this may be the MRV data inputs discussed elsewhere herein, but in fact a core finding of the present work is that any suitably high-definition flow field can be compressed in the manner described herein. The raw data which are received in this step encode information in an imaging region I relating to a fluid velocity field, u*, within a subset of the imaging region I forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo.


At step 302, the method proceeds to extract from the data a modelled flow field which includes the set of parameters {x}, including the shape of the boundary, ∂Ω, enclosing the region Ω, the kinematic viscosity, v, of the fluid, and optionally the inlet and outlet boundary conditions, gi, go, respectively, where an inlet and/or outlet is present in the system being modelled.


The compressed form of the modelled flow field is then provided in step 303 in which the compressed form is represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists respectively. This form of the modelled flow field requires much less storage capacity and/or transmission bandwidth to store or transmit. In particular the full flow field may be discarded once the compressed form has been provided, and the compressed version may be locally stored for later use, or transmitted elsewhere.


Turning now to FIG. 16, which shows a decompression method 350. This begins at step 351 in which the compressed flow data are received. As noted above, the compressed flow field data include the parameters {x}, which are a shape of a boundary, ∂Ω, of a region, Ω, in which the flow field is defined, a kinematic viscosity, v, of the fluid, and an inlet boundary condition, gi, and/or an outlet boundary condition, go, where an inlet and/or outlet exists.


At step 352, the parameters {x} are supplied as inputs to a Navier-Stokes problem. Finally at step 353, the Navier Stokes problem is solved, thereby calculating the modelled flow field consistent with the inputs (i.e. parameters {x}) and the Navier-Stokes equations.


Note that the decompression protocol of FIG. 16 may follow the compression protocol of FIG. 15, interleaved by at least one of a transmission step and a storage step, such that the decompression is applied to the compressed form following the compression of the flow field, with the decompression occurring at a later time and/or a different location.
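A minimal sketch of the compression/decompression interface of FIGS. 15 and 16 is given below. The callables solve_inverse_ns and solve_forward_ns are hypothetical placeholders for the inverse and forward Navier-Stokes solvers described in the text, and the data layout of the parameters is illustrative only.

```python
from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class CompressedFlow:
    """Compressed representation of a flow field: boundary shape, viscosity and
    boundary conditions only (the parameters {x} of the text)."""
    boundary: np.ndarray                    # e.g. level-set values or a boundary polyline
    nu: float                               # kinematic viscosity
    g_inlet: Optional[np.ndarray] = None    # inlet boundary condition, if an inlet exists
    g_outlet: Optional[np.ndarray] = None   # outlet boundary condition, if an outlet exists

def compress(u_measured, solve_inverse_ns: Callable) -> CompressedFlow:
    """Steps 301-303: solve an inverse Navier-Stokes problem to extract {x};
    the full field u_measured can then be discarded, stored or transmitted."""
    x = solve_inverse_ns(u_measured)        # hypothetical inverse solver returning {x}
    return CompressedFlow(**x)

def decompress(c: CompressedFlow, solve_forward_ns: Callable) -> np.ndarray:
    """Steps 351-353: solve the forward Navier-Stokes problem for the stored {x}."""
    return solve_forward_ns(c.boundary, c.nu, c.g_inlet, c.g_outlet)
```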


Another way to further decrease the sampling density is briefly discussed now. In the examples above, k-space is only sparsely sampled. In order to provide a good test of the theory, fully sampled k-space signals are used to provide a baseline for comparison. The fully sampled signals are then passed through a sparse sampling filter to restrict the received information and simulate the effect of an incomplete signal being received.


There is a choice in the design of the sparse sampling filter, and the results of applying each candidate filter can be compared to one another. These results may then be used to design sparse sampling patterns which provide the best results for a given sampling density. For example, the inventors have tested two probabilistic sampling patterns in k-space, a 1D and a 2D Gaussian distribution.


Using this approach it has been found that the 2D Gaussian distribution sampled at 15% coverage of the k-space region produces slightly better results than the 1D Gaussian distribution generated at 25% coverage. Note that the 2D case adequately covers the centre of k-space, while the 1D case leaves gaps. This can be corrected by fully-sampling the central region of k-space and then sparsely-sampling regions that are not as important. To identify the important regions, the algorithm set out herein can be used to backpropagate the errors from the velocity images to the k-space. In general, this approach can be used to identify important k-space regions and tailor the raw data acquisition to capture information in those regions, leaving the less important regions with only sparse k-space information.
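The two probabilistic sampling patterns discussed above can be generated as binary masks, for example as in the sketch below. The density widths and the exact pattern-generation procedure are assumptions of this illustration (the text does not fix them), and the achieved coverage is only approximately equal to the requested fraction.

```python
import numpy as np

def gaussian_sampling_mask(shape, coverage=0.15, pattern="2d", sigma_frac=0.25, seed=None):
    """Boolean k-space sampling mask with a Gaussian density centred on k-space.

    pattern = "2d" draws individual (ky, kx) samples from a 2D Gaussian;
    pattern = "1d" selects whole phase-encode lines from a 1D Gaussian.
    """
    rng = np.random.default_rng(seed)
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    n_target = int(coverage * ny * nx)
    if pattern == "2d":
        ky = rng.normal(ny / 2, sigma_frac * ny, size=4 * n_target)
        kx = rng.normal(nx / 2, sigma_frac * nx, size=4 * n_target)
        keep = (ky >= 0) & (ky < ny) & (kx >= 0) & (kx < nx)
        mask[ky[keep].astype(int), kx[keep].astype(int)] = True
    else:  # "1d": sample whole lines in the phase-encode direction
        n_lines = max(1, n_target // nx)
        lines = rng.normal(ny / 2, sigma_frac * ny, size=8 * n_lines)
        lines = lines[(lines >= 0) & (lines < ny)].astype(int)[:n_lines]
        mask[lines, :] = True
    return mask
```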


Any of the methods set out herein may be implemented on a computer. As shown in FIG. 17, an example system 400 for implementing the method includes a receiving module 406 for receiving data and, for example, for receiving parameter initial values and processing parameters controlling the application and weighting of the various optimisation routines. An input module 404 for receiving user input is also supplied, which can allow a user to adjust parameters and to control the operation of the system 400. Processor 402 is provided to perform the calculations. Although not shown, the processor has access to memory for storing input values, intermediate values, output values, MRV data, computer code comprising instructions for enacting the method, and so forth. An output module 408 is provided for delivering the data derived from the process set out above.


The processor 402 is provided for performing the method steps set out in detail above. The processor 402 may for example have software, firmware, application specific integrated circuits, etc. which contain instructions which, when run on the processor 402, cause the method steps to be executed. In some cases, the software may be provided separately on a physical or other tangible medium, for running on a general purpose computer, to provide the method set out above, and derive the advantage of fast processing of MRV data to achieve high quality, low noise MRV data.


Dashed lines indicate optional features. In this case, the output module 408 may feed into either the receiving module 406 or the processor 402, in order to provide the iterative improvements to the model parameters discussed above.


Also optional is the presence of a magnetic resonance imaging (MRI) or magnetic resonance velocimetry (MRV) system 410. This MRI/MRV imaging system 410 may feed data directly into the processor 402, or indirectly into the process via input module 404 or receiving module 406. In any event, the MRV data is supplied to the process from the MRI/MRV imaging system 410 in either a raw (possibly incomplete) k-space format—see Example 4, or in the form of a raw (unprocessed) velocity field—see Example 3.


Note that although shown as separate modules, more than one of the modules may form part of a single entity capable of fulfilling multiple roles simultaneously. As a particular example of this, the MRI/MRV imaging system may include the processor 402, input module 404, receiving module 406, and/or output module 408. That is to say the invention extends to an MRI/MRV imaging system which includes the processing capabilities discussed above to reconstruct the raw image data and thereby provide a single system capable of outputting high quality, low-noise MRV image data in accordance with the present disclosure.

Claims
  • 1. A computer implemented method for reconstruction of magnetic resonance velocimetry, MRV, data, the method comprising the steps of: (a) receiving MRV data encoding information relating to a fluid velocity field, u*, within a subset of the MRV data forming a region, Ω, enclosed by a boundary, ∂Ω, the boundary including at least a physical boundary portion, Γ, and optionally further including an inlet portion, Γi, and/or an outlet portion, Γo; and wherein the region Ω is a subset of an imaging region, I, over which the MRV data encodes information;(b) receiving initial values for a set of unknown parameters, {x}, including at least:a shape of the boundary, ∂Ω, within which the fluid is located;a kinematic viscosity of the fluid, v; and optionallyan inlet boundary condition for the fluid, gi where an inlet portion exists; and/oran outlet boundary condition for the fluid, go where an outlet portion exists;(c) calculating a model fluid velocity field, uo, using the initial values as inputs to a Navier-Stokes problem, and solving for the velocity field within and at the boundary;(d) constructing an error functional, J, of the form:
  • 2. The method of claim 1, wherein the Navier-Stokes problem constrains u such that u satisfies:
  • 3. The method of claim 1, wherein the M penalty term includes Nitsche penalty terms for weakly imposing a Dirichlet boundary condition on at least a portion of the boundary, ∂Ω=Γ∪Γi∪Γo.
  • 4. The method of claim 1, wherein the error functional, J, is minimised using Euler-Lagrange equations, constrained to spaces of admissible perturbations in u and p, denoted u′ and p′ respectively, where p is the reduced pressure, p0/ρ, where ρ is a density of the fluid and p0 is the pressure.
  • 5. The method of claim 4, wherein the Euler-Lagrange equations are formed by considering the first variations of the M and E penalty terms with respect to {x}, u and p.
  • 6. The method of claim 5, wherein first variations of the M and E penalty terms provide an adjoint Navier-Stokes problem:
  • 7. (canceled)
  • 8. (canceled)
  • 9. The method of claim 1, wherein the calculation of the R penalty term includes assuming that some or all of the unknown parameters follows a Gaussian distribution of likelihood and improbable realisations are penalised by forming R as a linear sum of an error functional term for each unknown parameter in the set {x} which is assumed to have a Gaussian distribution of the form:
  • 10. The method of claim 9, wherein: the covariance operator for the boundary condition at the inlet, Cgi, in cases where an inlet exists, is:
  • 11. The method of claim 1, in which the shape of the boundary is perturbed under a speed field V=ζn, in which ζ represents the magnitude of the speed at which points on the boundary are perturbed and n is the unit vector normal to each point on the boundary.
  • 12. (canceled)
  • 13. The method of claim 11, wherein the perturbation of the shape of the boundary is used to derive a shape derivatives problem defined over the permissible perturbations of u and p, denoted u′ and p′, respectively, the shape derivatives problem having the form:
  • 14. The method of claim 11, wherein the boundary is represented implicitly by signed distance functions, φ± defined across the imaging region, I, in which the boundary is represented by parts of φ± which are equal to zero, and the magnitude of φ± at a given point represents a shortest distance between that point and the boundary.
  • 15.-20. (canceled)
  • 21. The method of claim 1, wherein the reconstructed velocity, u, is updated by solving an Oseen problem for the improved parameters:
  • 22.-25. (canceled)
  • 26. The method of claim 1, further comprising outputting values of p, the pressure field, and the improved values for the unknown parameters in the set {x} as part of step (g).
  • 27. (canceled)
  • 28. The method of claim 1, wherein the received MRV data is raw data in the form of a series of incomplete frequency space data sets, sj; wherein: the method includes the step of calculating the measured velocity field, u*, by performing an inverse Fourier transform on each sj, F−1 s and deriving u* from the arguments of F−1 sj;the construction of the error functional modifies the E penalty term to:
  • 29. The method of claim 1, wherein a magnitude value of magnetic resonance imaging data is used as an input to provide an initial value for the shape of the boundary, ∂Ω.
  • 30.-31. (canceled)
  • 32. The method of claim 1, further comprising a step of providing a compressed form of the modelled flow field, the compressed form being represented by the shape of the boundary, the kinematic viscosity, and the inlet and/or outlet boundary conditions where an inlet and/or outlet exists.
  • 33. (canceled)
  • 34. The method of claim 32, further comprising decompressing the compressed form by: using ∂Ω, v, gi and go as inputs to a Navier-Stokes problem; andcalculating the modelled flow field by solving the Navier Stokes problem to derive a flow field consistent with the inputs and the Navier-Stokes equations.
  • 35. An apparatus for reconstruction of magnetic resonance velocimetry, MRV, data, the apparatus comprising: a processor configured to perform the steps of claim 1.
  • 36.-52. (canceled)
  • 53. The apparatus of claim 35 further comprising a magnetic resonance imaging system, arranged to provide Magnetic Resonance Velocimetry and/or Magnetic Resonance Imaging data.
  • 54. A non-transient computer readable medium comprising instructions which cause a computer to enact the method steps of claim 1.
Priority Claims (2)
Number Date Country Kind
2111141.4 Aug 2021 GB national
2207201.1 May 2022 GB national
PCT Information
Filing Document Filing Date Country Kind
PCT/GB2022/052027 8/2/2022 WO