Mesh Neural Networks for SE(3)-Equivariant Hemodynamics Estimation on the Artery Wall

Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice. Machine-learning methods that estimate blood flow in individual patients could accelerate or replace CFD simulation to overcome these limitations. In this work, we consider the estimation of vector-valued quantities on the wall of three-dimensional geometric artery models. We employ group-equivariant graph convolution in an end-to-end SE(3)-equivariant neural network that operates directly on triangular surface meshes and makes efficient use of training data. We run experiments on a large dataset of synthetic coronary arteries and find that our method estimates directional wall shear stress (WSS) with an approximation error of 7.6% and a normalised mean absolute error (NMAE) of 0.4% while being up to two orders of magnitude faster than CFD. Furthermore, we show that our method is powerful enough to accurately predict transient, vector-valued WSS over the cardiac cycle while conditioned on a range of different inflow boundary conditions. These results demonstrate the potential of our proposed method as a plug-in replacement for CFD in the personalised prediction of hemodynamic vector and scalar fields.

While CFD has strong potential as an in-silico replacement for in-vivo measurement of hemodynamic fields (Peper, Schaap, Kelder, Grobbee, Leiner and Swaans, 2021), it also has some practical drawbacks. High-quality CFD simulations require fine discretisation of the spatial and temporal domains, leading to long computation times (Taylor et al., 2013). The time-intensive nature of high-fidelity CFD simulations limits their applicability in, e.g., virtual surgery planning or shape optimisation of medical devices (Marsden, 2013). There is a practical need for fast but accurate estimation of hemodynamics. Recent works have shown that there is great potential in deep neural networks for cardiovascular biomechanics modelling (Arzani, Wang, Sacks and Shadden, 2022). One application of neural networks in hemodynamics modelling is the use of physics-informed neural networks (PINNs), in which a neural network is optimised to represent the desired hemodynamic field of a patient under physical constraints (Arzani, Wang and D'Souza, 2021; Raissi, Yazdani and Karniadakis, 2020). However, PINNs and their graph-based variants (Gao, Zahr and Wang, 2022; Shukla, Xu, Trask and Karniadakis, 2022) do not naturally generalise to other patients and require per-instance optimisation. This can be as time-consuming as CFD, since multiple systems of equations have to be solved online for each new artery. In contrast, we follow the approach of fast, generalising surrogate models. The core idea behind these models is that the time-consuming computation is moved offline while hemodynamics estimation online is fast. Training data can be generated using high-accuracy CFD simulations and then used to optimise a neural network that, once trained, estimates hemodynamics in a new artery in a single forward pass through the network, leading to significant speed-up.
Machine learning methods for hemodynamic parameter estimation can be subdivided into three categories. A first category is formed by parameterisation and projection methods that re-parameterise or project the 2D artery-wall manifold from 3D to a Cartesian 1D or 2D domain and operate on this domain. This category includes approaches which use multilayer perceptrons (MLP) to estimate (scalar) fractional flow reserve (FFR) along the artery centerline based on shape descriptors (Itu, Rapaka, Passerini, Georgescu, Schwemmer, Schoebinger, Flohr, Sharma and Comaniciu, 2016), use convolutional neural networks (CNN) to estimate (scalar) WSS magnitude based on uniform shape sampling (Su, Zhang, Zou, Ghista, Le and Chin, 2020), use CNNs to estimate (scalar) time-averaged and transient WSS magnitude based on a cylindrical parametrisation of the vessel wall (Gharleghi, Samarasinghe, Sowmya and Beier, 2020; Gharleghi, Sowmya and Beier, 2022), or use CNNs to estimate vector-valued WSS based on a cylindrical parameterisation plus uniformly sampled projections of the velocity field at several distances from the artery wall of the aorta reconstructed from 4D flow MRI (Ferdian, Dubowitz, Mauger, Wang and Young, 2022). Parameterisation and projection methods have the disadvantage that they cannot necessarily be adapted to more complex artery shapes and might fail in cases with severe pathology (e.g. aneurysms).
A second category is formed by 3D point-cloud methods that use MLPs on points representing the native geometry of the artery. Point-cloud methods have been widely used for classification, detection, and segmentation tasks (Guo, Wang, Hu, Liu, Liu and Bennamoun, 2021; Qi, Yi, Su and Guibas, 2017). In hemodynamic field estimation, they have been used to estimate pressure and vector-valued velocity fields on 3D point clouds (Liang, Mao and Sun, 2020), and to estimate vector-valued hemodynamic fields (Li, Wang, Zhang, Tupin, Qiao, Liu, Ohta and Anzai, 2021). Even though point-cloud methods excel at learning spatial relations from geometric data, they disregard an important part of the information that is available in surface representations of arteries: the surface connectivity and curvature.
A third category of approaches exploits mesh-based methods that use graph convolutional network (GCN) architectures and incorporate information on artery-wall structure. Mesh-based approaches incorporate additional local geometry information from the mesh in addition to the point coordinates. For example, Morales Ferez et al. (Ferez, Mill, Juhl, Acebes, Iriart, Legghe, Cochet, de Backer, Paulsen and Camara, 2021) used the surface normal vector and connectivity to construct input features to a GCN predicting (scalar) endothelial cell activation potential on the left atrial appendage surface. A shortcoming of this approach is that the network predictions depend on the embedding of the mesh vertex normals in 3D Euclidean space, while the quantity of interest only depends on the intrinsic shape of the mesh. Thus, predictions are sensitive to the orientation of the input, and shape alignment is required.
In this work, we propose a mesh-based approach that processes signals intrinsically on the artery wall (Fig. 1) while handling meshes with variable numbers of vertices and variable connectivity. The proposed method is informed by mesh properties and does not depend on the embedding of local geometry descriptors in 3D. Instead, it is invariant to translations and equivariant to rotations of the mesh. This means that vector-valued quantities like WSS rotate with the artery wall. This is data-efficient, as a single training sample covers all possible rotations and shifts of that artery and no data augmentation is required during training. Furthermore, our method is informed by anisotropic spatial interactions on the mesh, giving our filters high expressive capacity.
A preliminary version of this method was presented in (Suk, de Haan, Lippe, Brune and Wolterink, 2022), where we estimated steady-flow WSS with fixed boundary conditions. However, temporally multi-directional WSS acts as a clinical biomarker for coronary plaque development (Hoogendoorn, Kok, Hartman, de Nisco, Casadonte, Chiastra, Coenen, Korteland, van der Heiden, Gijsen, Duncker, van der Steen and Wentzel, 2020), and different patients have distinct coronary blood flow, which influences the WSS. Here, we substantially extend our method to also estimate pulsatile-flow WSS and to adapt its estimation based on a given boundary condition. We present results indicating that our GCN can perform some mild extrapolation beyond the boundary conditions contained in the training data. Furthermore, we formally prove the empirical result that our method is end-to-end equivariant under rotation and translation, provide a thorough experimental analysis of the influence of the receptive field and sensitivity to remeshing, and include additional baseline experiments.

Data
We propose a general method for hemodynamic field estimation on artery walls and demonstrate its value in coronary arteries, which are a key application domain for CFD.

Artery geometry synthesis
We synthesise two distinct classes of representative 3D models with different topology (Fig. 2) for training and validation of our GCN. The first class consists of idealised, single-outlet arteries with stenoses at random locations. The second class consists of bifurcating arteries and is used to demonstrate the versatility of our method for more complex geometries as may be encountered in real life.

Single arteries
Emulating the shapes used in (Su et al., 2020), we generate synthetic coronary arteries with a single inlet and a single outlet (Fig. 2). The artery centerline is defined by control points spaced at fixed increments along the horizontal axis and random uniform increments along the vertical axis in a fixed 2D plane embedded in 3D. The resulting 3D models are symmetric with respect to that plane. We assume that the lumen contour is circular and sample its base radius r from a uniform distribution r ∼ U(1.25, 2.0) mm, roughly corresponding to (Su et al., 2020). We randomly introduce up to two stenoses, each consisting of a randomly determined narrowing of up to 50% of the diameter, asymmetrically distributed between the top and the bottom vessel wall. The generated lumen contours are then lofted to create a watertight polygon mesh. The mesh is refined proportionally to the vessel radius along the artery centerline to give flow-critical regions finer spatial resolution for fluid simulation. Analogously to (Su et al., 2020), we add flow extensions to the inlet and outlet, whose length is five times the vessel diameter. These flow extensions are only used during simulation and are later removed when the simulation data is used to train and validate the deep learning model. The shape synthesis is implemented using SimVascular (Lan, Updegrove, Wilson, Maher, Shadden and Marsden, 2018).
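The centerline-and-radius recipe above can be sketched in a few lines of NumPy. This is an illustrative toy version, not the SimVascular pipeline used here; the Gaussian stenosis profile and its width are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_centerline(n_ctrl=8, dx=5.0, dy_max=1.0):
    """Control points: fixed horizontal increments, random uniform
    vertical increments, in a 2D plane embedded in 3D."""
    x = np.arange(n_ctrl) * dx
    y = np.cumsum(rng.uniform(-dy_max, dy_max, n_ctrl))
    z = np.zeros(n_ctrl)                      # planar, hence symmetric
    return np.stack([x, y, z], axis=1)

def radius_profile(n_pts=200, max_narrowing=0.5):
    """Circular lumen with base radius r ~ U(1.25, 2.0) mm and up to two
    stenoses narrowing the diameter by up to 50%."""
    r = np.full(n_pts, rng.uniform(1.25, 2.0))
    s = np.linspace(0.0, 1.0, n_pts)
    for _ in range(rng.integers(0, 3)):       # 0, 1 or 2 stenoses
        centre = rng.uniform(0.2, 0.8)
        depth = rng.uniform(0.0, max_narrowing)
        r *= 1.0 - depth * np.exp(-((s - centre) / 0.05) ** 2)  # smooth dent
    return r

profile = radius_profile()
```

Lofting the resulting contours into a watertight mesh is left to the meshing software.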

Bifurcating arteries
We construct the bifurcating artery models using an atlas of coronary shape statistics (Medrano-Gracia, Ormiston, Webster, Beier, Young, Ellis, Wang, Smedby and Cowan, 2016; Medrano-Gracia, Ormiston, Webster, Beier, Ellis, Wang, Smedby, Young and Cowan, 2017). In the left main coronary bifurcation, the proximal main vessel (PMV) splits up into the distal main vessel (DMV) and side branch (SB). The bifurcation can be fully described by the angle α between the centerlines of the branches DMV and SB and the angle α′ between the bisecting line of the bifurcation and the centerline of SB (Fig. 2). We sample angles and lumen diameters from the atlas and use them to construct lumen contours. Appendix B provides a detailed overview of this process. In particular, these shapes are not symmetric with respect to any plane and their cross-sections are elliptical. Subsequently, the generated lumen contours are lofted to create a solid polygon model, merged, and meshed. After blending of the bifurcation region to produce a more natural transition, the final surface mesh is created in a refining meshing step. The entire shape synthesis is implemented with the SimVascular Python shell.

Blood-flow simulation
For each triangular surface mesh, a tetrahedral volume mesh is created with five tetrahedral boundary layers (Fig. 2). We simulate steady and pulsatile blood flow in these meshes using the SimVascular solver for the three-dimensional, incompressible Navier-Stokes equations

ρ ∂u/∂t + ρ (u ⋅ ∇) u = −∇p + μ Δu,  ∇ ⋅ u = 0,

where u : Ω → ℝ³ is the fluid velocity and p : Ω → ℝ is the pressure in the spatial domain Ω of the artery. Dynamic viscosity and blood density are assumed to be μ = 0.04 g/(cm⋅s) and ρ = 1.06 g/cm³, respectively. We model the blood vessel as rigid and apply a no-slip boundary condition, i.e. the velocity is zero at the lumen wall at all times. The inlet velocity profile is uniform for the single arteries, with the flow extensions enabling the development of more realistic flow in the relevant region. To accelerate CFD simulations in the more complex and detailed bifurcating arteries, we omit flow extensions and use a parabolic profile. The inlet velocity follows a pulsatile waveform, scaled so that the coronary blood flow agrees with measurements in female and male patients (myocardial perfusion (Patel, Bui, Kirkeeide and Gould, 2016) times myocardial mass (Corradi, Maestri, Callegari, Pastori, Goldoni, Luong and Bordi, 2004)). We estimate the Reynolds number Re = 2 ρ u_mean r_max ∕ μ with mean velocity u_mean across steady and pulsatile flow and maximum vessel radius r_max for both the single and bifurcating artery class. For the single arteries Re ≈ 70 and for the bifurcating arteries Re ≈ 90, suggesting laminar flow in both cases. The WSS, which we denote as τ, is defined as the force exerted on the lumen wall ∂Ω by the blood flow in tangential direction and can be computed from the resulting velocity field near the lumen wall. Assuming blood to be a Newtonian fluid, τ depends linearly on the fluid velocity u:

τ : ∂Ω → T∂Ω,  τ = μ (J_u n⃗)|_⊥n⃗,

where T∂Ω denotes the tangent bundle of ∂Ω, J_u the Jacobian of u, n⃗ : ∂Ω → ℝ³ the unit surface normal on the lumen wall and ⋅|_⊥n⃗ the projection onto the plane perpendicular to n⃗. The single-artery surface meshes have around 8,000 vertices and 17,000 triangular faces and the
bifurcating artery surface meshes have around 17,000 vertices and 32,000 triangular faces. For an individual artery, steady-flow simulations take 10 to 24 min on an Intel Xeon Gold 5218 (16 cores, 22 MB cache, 2.3 GHz) and pulsatile-flow simulations take up to 1.6 h parallelised over 128 threads on a high-performance computing cluster. The resulting steady-flow datasets contain simulations for 2,000 single arteries as well as 2,000 bifurcating arteries. In addition, we create a dataset of pulsatile-flow simulations in 731 new single arteries which are generated independently of the steady-flow case. Note that the boundary conditions are fixed across samples and thus inherently encoded in these datasets. Therefore, we also generate pulsatile-flow datasets with varying boundary conditions, containing 187 and 117 unique geometric models for single and bifurcating arteries, respectively. In this set, simulations for each artery are run with five random-uniform coronary blood flow values from the interval [1.87, 4.36] ml/s. These values represent the average flow rate over the cardiac cycle. Like Beier, Ormiston, Webster, Cater, Norris, Medrano-Gracia, Young and Cowan (2016), we work with a standard inflow profile. We obtain the pulsatile waveform for each value by multiplying the template waveform (Fig. 3) with a linear factor. Determining this linear factor requires solving a nonlinear equation. We run additional simulations with two values from [0.63, 1.87] ml/s and [4.36, 5.61] ml/s, respectively, for 19 single arteries.
Fig. 3: Template inflow waveform (Beier et al., 2016), linearly scaled for the simulations with varying (average) coronary blood flow boundary condition.
In total, our simulation data encompasses 5,035 CFD simulations with a total runtime of ca. 2,800 h.
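The WSS definition above (viscosity times the velocity Jacobian applied to the wall normal, projected tangentially) can be evaluated directly once vertex-wise velocity Jacobians and surface normals are available. A minimal NumPy sketch, using the paper's viscosity value; the function name is ours:

```python
import numpy as np

MU = 0.04  # dynamic viscosity [g/(cm·s)], as in the simulations

def wall_shear_stress(jac_u, normals, mu=MU):
    """tau = mu * (J_u n), with the normal component removed
    (perpendicular projection onto the tangential plane).

    jac_u:   (N, 3, 3) velocity Jacobians at wall vertices
    normals: (N, 3) unit outward surface normals
    returns: (N, 3) tangential WSS vectors
    """
    traction = mu * np.einsum('nij,nj->ni', jac_u, normals)
    normal_part = np.einsum('ni,ni->n', traction, normals)[:, None] * normals
    return traction - normal_part

# simple shear u = (gamma * z, 0, 0) at a wall with normal (0, 0, 1)
gamma = 100.0
jac = np.array([[[0.0, 0.0, gamma], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])
n = np.array([[0.0, 0.0, 1.0]])
tau = wall_shear_stress(jac, n)   # mu * gamma in the flow direction
```

For this shear flow the result is the textbook wall traction μγ aligned with the flow, and exactly tangential to the wall.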

Learning on 3D surface meshes
We propose a neural network that can estimate hemodynamic fields in the data described in Sec. 2. At the core of our approach is the assumption that hemodynamics, in the laminar regime, depend in good approximation on local artery-wall curvature, flow direction, and flow boundary conditions. As is common in CFD (see also Sec. 2), we represent the artery wall as a triangular surface mesh. Let Ω ⊂ ℝ³ be the arterial lumen and ∂Ω its 2-dimensional boundary, the artery wall. The surface mesh M is a discretisation of ∂Ω that can be fully described by a tuple of vertices and faces, M = (V, F). We use the same mesh M from the CFD simulation to construct input features to a GCN which in turn outputs a scalar or vector for each vertex in the mesh, making use of local spatial interactions on the mesh M (Fig. 1).

Network architecture
We propose a mesh-based GCN that takes as input a scalar or vector field of features mapped to the vertices, f_in : V → ℝ^{C_in}, and outputs scalar or vector-valued predictions f_out : V → ℝ^{C_out} mapped to the same vertices. Fig. 4 visualises the network architecture used in our experiments. The GCN is composed of convolution and pooling layers. To enable the flow of long-range information across the manifold ∂Ω, we opt for an encoder-decoder architecture with three pooling levels and "copy & concatenate" connections between corresponding layers in the contracting and expanding pathway. To prevent vanishing gradients, we use residual blocks consisting of two convolution layers and a skip connection. We use ReLU activation functions and employ batch normalisation before each activation.

Convolution layer
We define signals f : V → ℝ^C with channel size C. For ease of notation, we compactly denote the set of all fields mapping from V to ℝ^C as F(V, ℝ^C), so that we can write f ∈ F(V, ℝ^C). As a central building block of our neural network, we define convolution layers on V via message passing (Gilmer, Schoenholz, Riley, Vinyals and Dahl, 2017). Let C_l and C_{l+1} denote the channel size before and after the layer. A layer maps f ∈ F(V, ℝ^{C_l}) to an updated signal

(f ⋆ K)(v_i) = φ( Σ_{v_j ∈ B_r(v_i) ∩ V} m_ij ),  (1)

where the messages m_ij aggregate information from the neighbourhood B_r(v_i) ∩ V, in which B_r(v_i) consists of all vertices that are contained in a ball with radius r around v_i ∈ V, and the update function φ creates the signal update from these messages. Alternatively, the neighbourhood could be defined by a 1-ring neighbourhood on the mesh M or by a geodesic ball on the manifold ∂Ω. Our definition is an approximation to these options that is robust to varying mesh resolutions and scalable to large meshes. We construct convolution layers with kernel K : V × V → ℝ^{C_l × C_{l+1}} by choosing the messages

m_ij = A(v_i, v_j) K(v_i, v_j)^⊤ f(v_j).

We refer to a neural network containing the aforementioned convolution layer as mesh-based GCN with the following rationale. The neighbourhood of a mesh vertex induces a set of graph edges E by connecting v_i to all v_j ∈ B_r(v_i) ∩ V. With this "latent" graph structure (V, E) we can make use of efficiently implemented graph deep-learning libraries (like PyG) to realise our layers. Additionally, this GCN can be made mesh-based by explicitly incorporating face information in the message passing, m_ij = m_ij(V, F).
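The message-passing layer over a ball neighbourhood reduces to a few lines in its simplest, isotropic form (identity aggregation, a single shared kernel matrix W, mean update). This NumPy sketch is ours for illustration; the actual model uses anisotropic GEM kernels and PyG for efficiency.

```python
import numpy as np

def radius_graph(pos, r):
    """All directed edges (i, j) with 0 < ||v_j - v_i|| <= r."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    return np.nonzero((d <= r) & (d > 0))

def conv_layer(f, pos, W, r):
    """Mean of messages m_ij = W^T f(v_j) over B_r(v_i): an isotropic
    instance of the message-passing convolution layer."""
    src, dst = radius_graph(pos, r)
    msgs = f[dst] @ W                             # m_ij = W^T f(v_j)
    out = np.zeros((len(pos), W.shape[1]))
    cnt = np.zeros(len(pos))
    np.add.at(out, src, msgs)                     # sum messages per vertex
    np.add.at(cnt, src, 1.0)
    return out / np.maximum(cnt, 1.0)[:, None]    # mean update

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
f = np.array([[1.0], [2.0], [3.0], [4.0]])
out = conv_layer(f, pos, np.eye(1), r=1.5)        # neighbourhood means
```

With W the identity, each vertex simply receives the mean signal of its ball neighbourhood; the isolated fourth vertex receives zero.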
We distinguish between isotropic and anisotropic convolution layers based on the kernel K(v_i, v_j) and the aggregation matrix A(v_i, v_j) : V × V → ℝ^{C_l × C_l}. Intuitively, isotropic convolution filters process all signals mapped to the surrounding vertices in a neighbourhood in the same manner, while anisotropic filters process them distinctly.

Definition 1 (Anisotropy). We call a bivariable function h : V × V → ℝ^{m × n} isotropic if it is constant in its second argument, i.e. h(v_i, v_j) = h(v_i, v_k) for all v_j, v_k ∈ V, and anisotropic otherwise.
Consequently, we call a layer anisotropic if it contains any anisotropic function.

Gauge-equivariant mesh convolution
Defining general anisotropic kernels K(v_i, v_j) on meshes is difficult due to the lack of a local canonical orientation on the mesh: there is no obvious choice of reference vertex v_j ∈ B_r(v_i) ∩ V in the filter support that canonically orients the local filter at v_i for all v_i ∈ V. To address this, we implement anisotropic kernels using gauge-equivariant mesh (GEM) convolution (de Haan, Weiler, Cohen and Welling, 2021). The idea behind GEM convolution is to recognise that possible kernel orientations are related by group actions of the symmetry group of planar rotations, SO(2), and to use this insight to spatially orient kernels "along" its group elements.
To achieve this, the signal f ∈ F(V, ℝ^C) is composed of a linear combination of irreducible representations ("irreps") of the symmetry group SO(2), resulting in so-called SO(2) features. We can then choose an invertible parallel-transport matrix composed of group-action representations that can rotate signals f using mesh information. Specifically, the tangential plane at each vertex can be determined from the surrounding triangles, and geodesic shortest paths between vertices can be found from adjacent faces (Sharp and Crane, 2020). Parallel transport refers to transporting signals along the manifold ∂Ω while maintaining a fixed angle to the shortest geodesic curve. It provides a unique and thus canonical transformation that allows linearly combining vector fields f ∈ F(V, ℝ^C) at a vertex v ∈ V on the mesh. This is required for our notion of convolution Eq. (1).
On 2D manifolds ∂Ω embedded in 3D Euclidean space, picking a kernel orientation amounts to picking a locally tangential coordinate system ("gauge"). This choice can, on general manifolds, only be made arbitrarily. To prevent this arbitrary choice from affecting the outcome of the convolution, GEM convolution imposes an equivariance relation between layer input and output. Let ρ and ρ′ be representations of the same (linear) gauge transformation that rotates the feature vector. GEM convolution requires message passing Eq. (1) to be equivariant under such transformations. Since all other variables in Eq. (1) are fixed, this imposes a linear constraint on the kernel K(v_i, v_j), whose solutions span the space of admissible equivariant kernels. A detailed derivation can be found in (de Haan et al., 2021).
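A toy illustration of the SO(2) structure used above: gauge rotations and parallel transport both act on an SO(2) feature (here one ρ₀ scalar and one ρ₁ vector irrep) as block-diagonal rotations, and since SO(2) is abelian these actions commute, so a gauge change before or after transport yields the same feature. The three-channel irrep layout below is our own minimal example, not the network's feature layout.

```python
import numpy as np

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def gauge_action(theta):
    """Block-diagonal SO(2) action on a feature with one rho_0 (scalar)
    and one rho_1 (2D tangential vector) irrep."""
    g = np.eye(3)
    g[1:, 1:] = rot2(theta)                      # rho_0 is untouched
    return g

f = np.array([1.0, 0.5, -0.2])                   # [scalar | 2D vector]
theta_gauge, theta_transport = 0.7, 1.3
# parallel transport acts by the same kind of rotation, so it commutes
# with gauge changes (SO(2) is abelian):
lhs = gauge_action(theta_gauge) @ (gauge_action(theta_transport) @ f)
rhs = gauge_action(theta_transport) @ (gauge_action(theta_gauge) @ f)
```

The equality of `lhs` and `rhs` is what makes the transported features well defined regardless of the arbitrary gauge choice at each vertex.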

Pooling
Hemodynamics are characterised by long-range interactions across the artery wall ∂Ω and the lumen Ω. Capturing these by stacking convolution layers, i.e. linearly increasing the receptive field, becomes infeasible for large and finely discretised surfaces. In contrast, pooling layers can exponentially increase the network's receptive field. Here, we use the mesh's "latent" computation graph (V, E) to implement pooling. Similar to the procedure used by Wiersma et al. (Wiersma, Eisemann and Hildebrandt, 2020), we sample a hierarchy of vertex subsets (V = V_0) ⊃ V_1 ⊃ ⋯ ⊃ V_m and construct corresponding r-radius graph edges E_i encoding the filter support B_{r_i}(v) ∩ V_i for all v ∈ V_i. Additionally, we find disjoint partitions of clusters that relate fine-scale vertices to exactly one coarse-scale vertex. This can be done with k-nearest neighbours (k = 1) by finding for each v ∈ V_i the nearest vertex in V_{i+1}. Using these, a pooling operator can be defined that transports the signals of all vertices in a cluster to the corresponding coarse-scale vertex and averages them. We implement unpooling by simply transporting signals back to their respective cluster locations.
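The cluster construction and (un)pooling can be sketched as follows. For simplicity this sketch averages signals directly; in GEM-GCN the SO(2) features would additionally be parallel-transported into the coarse vertex's gauge.

```python
import numpy as np

def cluster_assignments(pos_fine, pos_coarse):
    """k-nearest neighbours with k = 1: each fine-scale vertex joins the
    cluster of its nearest coarse-scale vertex."""
    d = np.linalg.norm(pos_fine[:, None] - pos_coarse[None, :], axis=-1)
    return d.argmin(axis=1)

def pool(f_fine, assign, n_coarse):
    """Average the signals of each cluster onto its coarse-scale vertex."""
    out = np.zeros((n_coarse, f_fine.shape[1]))
    cnt = np.zeros(n_coarse)
    np.add.at(out, assign, f_fine)
    np.add.at(cnt, assign, 1.0)
    return out / np.maximum(cnt, 1.0)[:, None]

def unpool(f_coarse, assign):
    """Copy each coarse-scale signal back to its cluster's vertices."""
    return f_coarse[assign]

pos0 = np.array([[0.0], [0.1], [1.0], [1.1]])   # fine level V_0
pos1 = np.array([[0.0], [1.0]])                 # coarse level V_1
assign = cluster_assignments(pos0, pos1)
coarse = pool(np.array([[1.0], [3.0], [5.0], [7.0]]), assign, 2)
```

Each pooling level roughly halves the resolution while the radius r_i grows, which is what yields the exponential growth of the receptive field.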

Input features
We construct input features f_in : V → ℝ^{C_in} with C_in channels that describe the local shape of ∂Ω as well as global properties and are computed from the mesh M. In particular, we compute a surface normal for each vertex v_i ∈ V from adjacent mesh faces. We then construct three matrices that describe the local neighbourhood v_j ∈ B_r(v_i) ∩ V by, for each neighbour v_j, taking the outer products of
• the vector from v_i to v_j with itself,
• the surface normal at v_j with itself, and
• the vector from v_i to v_j with the surface normal at v_j.
For each of the three resulting sets of (3 × 3)-matrices, we take the average over the neighbourhood. Two of these matrices are symmetric by construction, so we can drop entries without losing information. The radius r of the local neighbourhood balls is a hyperparameter and must be chosen based on the structure of the input meshes, so that no neighbourhood is disconnected, i.e. consists of a single vertex. We choose the same radius that is used to construct the mesh's "latent" computation graph (V_0, E_0).
The motivation behind these input features is that they define meaningful local surface descriptors that are not SO(2)-invariant, a precursor to employing GEM convolution (de Haan et al., 2021). In contrast, the vanilla surface normal would simply be constant in any coordinate system induced by the surface normal. Since the surface normal describes the local surface (orientation) in an infinitesimally small neighbourhood B_{r→0}(v), i.e. the precise local curvature of the artery wall ∂Ω, it is the preferred input feature for conventional message-passing formulations.
We can extend the per-vertex features with any scalar or vector field. Since we assume that hemodynamics depend on flow direction, we append the shortest geodesic distance from each vertex v to the inflow surface, which we compute with the vector heat method (Sharp, Soliman and Crane, 2019). Moreover, we add global parameters such as blood-flow boundary conditions as a constant scalar field over the vertices.
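A direct (unoptimised) implementation of the neighbourhood feature construction might look as follows; channel flattening and dropping of the redundant symmetric entries are omitted, and the function name is ours.

```python
import numpy as np

def neighbourhood_features(pos, normals, r):
    """Average over B_r(v_i) of the three outer products
    (v_j - v_i)(v_j - v_i)^T, n_j n_j^T and (v_j - v_i) n_j^T.
    Assumes r is chosen so that no neighbourhood is empty."""
    feats = np.zeros((len(pos), 3, 3, 3))
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = np.nonzero((d <= r) & (d > 0))[0]
        rel = pos[nbr] - pos[i]                   # vectors from v_i to v_j
        feats[i, 0] = np.einsum('na,nb->ab', rel, rel) / len(nbr)
        feats[i, 1] = np.einsum('na,nb->ab', normals[nbr], normals[nbr]) / len(nbr)
        feats[i, 2] = np.einsum('na,nb->ab', rel, normals[nbr]) / len(nbr)
    return feats

# four vertices of a planar patch, all normals pointing in +z
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
feats = neighbourhood_features(pos, normals, r=1.5)
```

On this flat patch the normal-normal feature is the constant matrix e_z e_z^T, while the relative-position features encode the in-plane vertex spread.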

Network output
We predict vector-valued hemodynamic quantities arising from transient, pulsatile flow by discretising a full cardiac cycle at T points in time and letting our neural network output a vector field f_out ∈ F(V, ℝ^{3T}). Alternatively, we can predict hemodynamic fields under steady flow by setting T = 1.

SE(3) equivariance
We model hemodynamics without the influence of gravity. Therefore, rigid rotation (or translation) of the domain should have no influence on the magnitude of the flow quantities and should only change their direction. More precisely, our problem exhibits equivariance under SE(3) transformations. Inducing this symmetry in our neural network makes it oblivious to these particular transformations, which reduces the problem's complexity. We do so in the form of GEM convolution. GEM convolution layers define message passing intrinsically on the mesh M without dependence on the embedding in the ambient space, such as Euclidean vertex coordinates. SO(2) features can be expressed in ambient coordinates, which is done at the network output. Since tangential planes by definition rotate with the geometric model of the artery, the GEM convolution operator (f ⋆ K) preserves SE(3) equivariance if the tangential input features move along with the surface.

Proposition 1. (Informal) Composition of rotation-equivariant and translation-invariant input features with a gauge-equivariant mesh (graph) convolutional neural network (GEM-GCN) is end-to-end SE(3)-equivariant. (proof in Appendix A)
Our input features f_in are equivariant under rotation and invariant under translation of the mesh M by construction. Furthermore, our pooling and unpooling operators preserve SE(3) equivariance because they do not depend on the embedding of M in ambient space. Consequently, neural networks composed entirely of GEM convolution and pooling layers yield an end-to-end SE(3)-equivariant operator together with our input features f_in.
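The rotation equivariance of the outer-product input features can be checked numerically: building the features from a rotated mesh equals conjugating the original features with the rotation, since (R a)(R b)^⊤ = R (a b^⊤) R^⊤. A small NumPy check with random data:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
rel = rng.normal(size=(5, 3))     # vectors v_i -> v_j in a neighbourhood
nrm = rng.normal(size=(5, 3))     # surface normals at the neighbours

# averaged outer-product feature of the original mesh ...
M = np.einsum('na,nb->ab', rel, nrm) / 5
# ... and of the rotated mesh (each row becomes R @ rel_n, R @ nrm_n)
R = rot_z(0.9)
M_rot = np.einsum('na,nb->ab', rel @ R.T, nrm @ R.T) / 5
# rotating the mesh conjugates the feature matrices: M_rot = R M R^T
```

Translation invariance holds trivially because the features are built from relative vectors and normals only.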

Baseline models
We perform ablation studies to investigate the influence of the anisotropic aggregation matrix A(v_i, v_j) and the anisotropic kernel K(v_i, v_j) on prediction accuracy. To this end, we define two additional types of convolution (Fig. 5): one fully isotropic and one with a learned anisotropic aggregation matrix. Additionally, we compare our method to another baseline model, PointNet++ (Qi et al., 2017), a point-cloud method without explicit convolution kernels.

Isotropic convolution
We construct purely isotropic convolution by choosing

A(v_i, v_j) = I and K(v_i, v_j) = W,

where I is the identity matrix and W ∈ ℝ^{C_l × C_{l+1}} are trainable weights.

Attention-scaled convolution
We construct anisotropic convolution with an isotropic kernel via a learned neighbourhood-attention mechanism by choosing

K(v_i, v_j) = W and A(v_i, v_j) = σ(a^⊤ [f(v_i) ∥ f(v_j)]) I,

where σ(⋅) is the element-wise softmax activation over the neighbourhood and W ∈ ℝ^{C_l × C_{l+1}} as well as a ∈ ℝ^{2 C_l} are trainable weights. This is equivalent to a graph attention layer (Veličković, Cucurull, Casanova, Romero, Liò and Bengio, 2018) with separate weights and no LeakyReLU activation in the attention mechanism. Note that here, the message passing is not mesh-based and only depends on the vertices: m_ij = m_ij(V).
In our definition of pooling in Sec. 3.3 we require the inverse of A for the unpooling step. Since for attention-scaled convolution, A may be ill-conditioned with diagonal elements close to zero, we fall back to using I for pooling.

PointNet++
We compare our method to PointNet++ (Qi et al., 2017), a popular point-cloud method consisting of message-passing layers that redefine Eq. (1) by

(f ⋆ Θ)_k(v_i) = max_{v_j ∈ B_r(v_i) ∩ V} Θ_k(f(v_j), v_{i→j}),

where k ≤ C_{l+1} denotes the k-th component, v_{i→j} the Euclidean vector pointing from v_i to v_j, and Θ : ℝ^{C_l} × ℝ³ → ℝ^{C_{l+1}} an MLP of arbitrary depth. PointNet++ uses sampling and grouping operations that hierarchically sub-sample the graph vertices in the contracting pathway and interpolate in the expanding pathway. Note that, for PointNet++, choosing the same pooling architecture as for the kernel-based GCNs does not lead to the same level of accuracy, since the convolution paradigms are fundamentally different. Thus, we lay out PointNet++ separately, to achieve the best possible performance.
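The PointNet++-style layer, i.e. a component-wise maximum of Θ(f(v_j), v_{i→j}) over the ball neighbourhood, can be sketched as follows; the one-layer Θ with random weights is our stand-in for the deeper MLPs used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)
C_IN, C_OUT = 2, 4
W1 = rng.normal(size=(C_OUT, C_IN + 3))   # weights of a one-layer "MLP"

def theta(f_j, rel):
    """Theta: R^{C_l} x R^3 -> R^{C_l+1}, here a single tanh layer."""
    return np.tanh(W1 @ np.concatenate([f_j, rel]))

def pointnet_layer(f, pos, r):
    """f'(v_i)_k = max over B_r(v_i) of Theta_k(f(v_j), v_{i->j})."""
    out = np.full((len(pos), C_OUT), -np.inf)
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        for j in np.nonzero(d <= r)[0]:           # ball includes v_i itself
            out[i] = np.maximum(out[i], theta(f[j], pos[j] - pos[i]))
    return out

pos = rng.normal(size=(6, 3))
f = rng.normal(size=(6, C_IN))
out = pointnet_layer(f, pos, r=2.0)
```

Because Θ sees the relative position v_{i→j} in ambient coordinates, this layer is not rotation-equivariant, which is what the SO(3) experiments below probe.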

Quantitative evaluation
Quantitative results for WSS estimation are reported in terms of the mean absolute error of the elements of Δ, normalised by the maximum ground-truth magnitude across the test split ("NMAE"), and the approximation error ε := ‖Δ‖₂ ∕ ‖λ‖₂. Δ is a vector whose elements are vertex-wise L₂-normed differences between the network output f_out ∈ F(V, ℝ^{C_out}) and the ground-truth label λ ∈ F(V, ℝ^{C_out}), so that the i-th elements are Δ_i = ‖f_out(v_i) − λ(v_i)‖₂ and λ_i = ‖λ(v_i)‖₂ for v_i ∈ V. Additionally, we report the maximum and mean vertex-wise difference, i.e. Δ_max = max{Δ_i}_i and Δ_mean = (Σ_i Δ_i) ∕ |V|, as well as the mean of the label statistics max{λ_i}_i and median{λ_i}_i over the test set for scale.
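These metrics can be computed directly from predicted and ground-truth vector fields; the sketch below uses our own function name and 2D vectors for brevity.

```python
import numpy as np

def wss_error_metrics(f_out, label, nmae_scale):
    """Vertex-wise errors Delta_i = ||f_out(v_i) - label(v_i)||_2 and
    the derived metrics: NMAE (mean Delta_i over a dataset-wide scale,
    here the maximum ground-truth magnitude across the test split),
    approximation error eps = ||Delta||_2 / ||lambda||_2, Delta_max
    and Delta_mean."""
    delta = np.linalg.norm(f_out - label, axis=1)   # Delta_i
    lam = np.linalg.norm(label, axis=1)             # lambda_i
    return {
        'NMAE': delta.mean() / nmae_scale,
        'eps': np.linalg.norm(delta) / np.linalg.norm(lam),
        'Delta_max': delta.max(),
        'Delta_mean': delta.mean(),
    }

label = np.array([[3.0, 4.0], [0.0, 1.0]])          # magnitudes 5 and 1
pred = np.array([[3.0, 4.0], [0.0, 2.0]])           # errors 0 and 1
m = wss_error_metrics(pred, label, nmae_scale=5.0)
```

Note that NMAE is normalised by a dataset-wide scale while ε is a relative error per sample, so the two can rank models differently.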

Experiments and results
We evaluate to what extent GEM-GCN can predict directional wall shear stress on the artery models described in Sec. 2. All datasets are split 80:10:10 into training, validation, and test splits, respectively. Network width and depth are set so that each neural network has around 1.02 × 10⁶ trainable weights. All neural networks are trained by stochastic L₁-loss regression using an Adam optimiser with mini-batches.
Table 1: Quantitative evaluation of prediction error for steady-flow WSS on synthetic single and bifurcating coronary arteries. The columns list the mean, median, and 75th percentile of NMAE, approximation error ε, maximum absolute error Δ_max, and mean absolute error Δ_mean over the held-out test splits. Maximum and median WSS magnitude per dataset are indicated as λ_max and λ_median, respectively. We additionally evaluate PointNet++ and GEM-GCN on randomly 3D-rotated test samples after training on canonically oriented samples (†). In the rotated case we additionally present accuracy metrics for PointNet++ trained on rotationally augmented data (‡).

Steady-flow WSS estimation
We train GEM-GCN as well as the isotropic GCN (IsoGCN), the attention-scaled GCN (AttGCN), and PointNet++ (Sec. 3.7) to perform WSS estimation on the steady-flow single and bifurcating artery datasets. Fig. 6 shows examples of directional WSS prediction by GEM-GCN in a single and a bifurcating artery (code available at github.com/sukjulian/coronary-mesh-convolution). The examples suggest that there is good agreement between ground truth and prediction. In particular, WSS stemming from local flow vortices is captured well in the single artery model. The quantitative results in Table 1 show that GEM-GCN strictly outperforms IsoGCN and AttGCN on both the single and the bifurcating artery dataset. Moreover, the learned anisotropic convolution filters used in AttGCN achieve better performance than the isotropic filters used in IsoGCN. GEM-GCN and PointNet++ perform similarly in accuracy on the bifurcating artery dataset while GEM-GCN performs marginally better on the single arteries.
We evaluate how the amount of training data affects the performance of GEM-GCN, as well as PointNet++ for comparison. Fig. 7 shows the mean approximation error ε_mean as a function of the number of training samples. For each training-set size, GEM-GCN is trained from scratch on the single artery dataset for a number of epochs chosen so that it receives ca. 10,000 gradient-descent updates. Since PointNet++ requires more epochs to converge, we train it for 80,000 gradient-descent updates for comparison. The results in Fig. 7 indicate that both architectures can reach good accuracy with ca. 1,000 training samples.

SO(3) equivariance
GEM-GCN only depends on relative vertex features and is trivially invariant to translation. To empirically verify the SO(3) equivariance of GEM-GCN, we perform predictions on randomly rotated test samples. For this we use the neural network trained on the original, canonically oriented samples. The results in Table 1 show that rotation does indeed not affect the performance of GEM-GCN. All quantitative metrics are nearly identical to those on the non-rotated samples, up to numerical errors originating from discretisation of the kernels and activation function (de Haan et al., 2021). In contrast, the results show that for PointNet++ (the best-performing baseline model) rotation of the test samples drastically reduces prediction accuracy: performance drops from a mean NMAE of 0.5% to 10.1% for the single and from 0.6% to 7.8% for the bifurcating artery dataset, respectively. This is expected, as PointNet++, like previously published models (Liang et al., 2020; Li et al., 2021; Ferez et al., 2021), depends on the embedding of the mesh vertices in Euclidean space.
To make PointNet++ account for differently rotated samples, we re-train it with data augmentation: we randomly sample rotation matrices batch-wise and apply them to the training samples. This is a common strategy for methods that lack rotation equivariance. Results show that training with this augmentation approximately recovers PointNet++'s accuracy, to 0.7 % and 0.6 % mean NMAE for single and bifurcating arteries, respectively. For the single arteries, this remains slightly less accurate than before, suggesting that the original accuracy is not fully recovered by augmentation.

Pulsatile-flow WSS estimation
We train GEM-GCN for pulsatile-flow WSS estimation in single arteries with the modifications described in Sec. 3.5. In these experiments, WSS depends on both space and time; we therefore present estimation accuracy as time-dependent distributions in Fig. 8. The pulsatile-flow NMAE over time is comparable to the steady-flow NMAE, suggesting generally accurate predictions. However, the pulsatile-flow NMAE is normalised by the maximum WSS, which fluctuates over the cardiac cycle. As a consequence, the NMAE fluctuates as well and follows the pattern of the maximum WSS (indicated in yellow).

Incorporating boundary conditions
We re-train GEM-GCN on the dataset of pulsatile-flow WSS in single and bifurcating arteries, subject to varying coronary blood flow boundary conditions. The boundary condition is passed as the average flow rate of a scaled template waveform (Fig. 3). We investigate interpolation between, and extrapolation to, boundary conditions outside the limits of the training distribution: as described in Sec. 2, values in [1.87, 4.36] ml/s lie within the training range, while values in [4.36, 5.61] ml/s require extrapolation, as GEM-GCN is not trained on simulations subject to these inflow values. Note that GEM-GCN will produce a prediction for any arbitrary flow rate; here, we restrict our analysis to a discrete set of boundary conditions from a continuous range for which we have performed CFD simulation (Sec. 2). Fig. 9 quantifies the prediction error for varying boundary conditions in two ways. First, we plot the (mean) NMAE over coronary blood flow and observe that, within the training range, the infimum of the NMAE displays a linear dependence on the boundary condition. The NMAE values corresponding to boundary conditions above this training range stay below this slope, while those corresponding to lower values rise above it. Second, we show a Bland-Altman plot comparing neural-network prediction and ground-truth reference. This plot shows that GEM-GCN overestimates WSS for low average magnitude and underestimates WSS for high average magnitude. A large share of the data points corresponding to extrapolation fall within the upper and lower bounds of the distribution of interpolated data points. From these two plots we conclude that GEM-GCN extrapolates, to some extent, to boundary-condition values higher than those in the ground-truth distribution.
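The boundary-condition encoding can be sketched as a linear scaling of the template waveform so that its temporal mean matches the desired average flow rate (a minimal sketch; the template shape below is made up, not the waveform of Fig. 3):

```python
import numpy as np

def scale_waveform(template, target_avg_flow):
    """Linearly scale a template waveform so that its temporal mean
    equals the desired average coronary blood flow."""
    return template * (target_avg_flow / template.mean())

# hypothetical template sampled over one cardiac cycle
t = np.linspace(0.0, 1.0, 100, endpoint=False)
template = 2.0 + np.sin(2 * np.pi * t)      # arbitrary pulsatile shape
scaled = scale_waveform(template, 3.0)      # target: 3.0 ml/s average
assert np.isclose(scaled.mean(), 3.0)
```

The single scalar (the average flow rate) then conditions the network, while the waveform shape stays fixed.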

Sensitivity to remeshing
Recent works suggest that mesh neural networks might overfit to mesh connectivity (Sharp, Attaiki, Crane and Ovsjanikov, 2022). For the problem of estimating hemodynamics on polygonal surface meshes, this means that predictions are not independent of the sampling of vertex positions on the underlying manifold. To investigate the susceptibility of our models to this kind of overfitting, we let the trained GEM-GCN and PointNet++ networks described in Sec. 4.1 estimate WSS fields on three kinds of remeshed versions of the same surface Ω of a sample from the test split of the single arteries:
1. We randomly sample vertices from Ω and apply Poisson surface reconstruction, followed by an isotropic meshing procedure. This relaxes the mesh refinement around the stenoses and leads to approximately equidistant vertex spacing.
2. We globally refine the original mesh so that its faces have smaller edge lengths, while maintaining the proportionally higher resolution around the stenoses.
3. We randomly sample mesh vertices from Ω, completely randomising vertex placement beyond refinement or coarsening. GEM-GCN extracts mesh information from the vertices and corresponding surface normals, which are well-defined here, so we can do without an explicit mesh in this particular case.
The results in Fig. 10 suggest that GEM-GCN is still able to identify regions of interest on the surface Ω: on the equidistant mesh, it predicts high WSS magnitude in the stenosed area even with different mesh connectivity. However, GEM-GCN does overfit, to some extent, to mesh connectivity: regions of high vertex density, especially in the refined mesh, are predicted to have high WSS magnitude, and vice versa. This is likely because the training data has higher resolution around stenoses and WSS values are typically highest in stenotic regions, so the network learns that high resolution corresponds to high WSS. The predictions on randomly sampled vertices show artifacts of this behaviour in the form of arbitrary peaks, caused by high local vertex density. This conditioning on resolution may be due to the aggregation scheme (see Equation (1)) used by GEM convolution: the filters sum over the vertex neighbourhoods, as opposed to, e.g., taking the maximum. PointNet++ seems more robust to remeshing and random surface sampling, perhaps due to its maximum-aggregation scheme (see Equation (2)).
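The resolution sensitivity of sum aggregation, as opposed to max aggregation, can be illustrated with a toy message-passing step (this is neither GEM convolution nor PointNet++ itself, only the aggregation behaviour the text refers to):

```python
import numpy as np

def aggregate(features, neighbourhoods, mode):
    """Toy message passing: aggregate neighbour features per vertex
    with either sum (Eq. (1)-style) or max (Eq. (2)-style)."""
    out = []
    for nbrs in neighbourhoods:
        msgs = features[nbrs]
        out.append(msgs.sum(0) if mode == "sum" else msgs.max(0))
    return np.stack(out)

features = np.ones((8, 1))           # identical signal everywhere
coarse = [[0, 1, 2]]                 # 3 neighbours (coarse sampling)
fine = [[0, 1, 2, 3, 4, 5]]          # 6 neighbours (refined sampling)

# sum-aggregation output scales with local vertex density ...
assert aggregate(features, coarse, "sum")[0, 0] == 3.0
assert aggregate(features, fine, "sum")[0, 0] == 6.0
# ... while max-aggregation is invariant to it
assert aggregate(features, coarse, "max")[0, 0] == 1.0
assert aggregate(features, fine, "max")[0, 0] == 1.0
```

Even with an identical underlying signal, the sum-aggregated response doubles when the neighbourhood is refined, mirroring the density artifacts observed in Fig. 10.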

Generalisation to real-life patient data
While we develop and evaluate our method on synthetic data, clinical application would be on anatomies extracted from individual patients. To assess generalisation to such data, we use the same GEM-GCN trained on the bifurcating arteries from Sec. 4.1 and let it predict WSS in a left main coronary bifurcation geometry extracted from a cardiac CT angiography scan (Wolterink, Leiner and Išgum, 2019). We simulate blood flow with the same boundary conditions as in Sec. 2 to obtain ground-truth WSS, which takes ca. 30 min. Fig. 11 shows the ground-truth and estimated WSS vectors. As previously, prediction and geometric pre-processing take less than 5 s. Even though GEM-GCN is trained exclusively on synthetic arteries, it produces a qualitatively plausible prediction. More precisely, the directions of the WSS vectors agree well between prediction and ground truth (mean cosine similarity 0.97). However, there is a considerable quantitative error (NMAE_mean 87.4 %, ε_mean 12.1 %), which can be explained by the highly nonlinear dependence of blood flow on lumen-wall shape: even small differences in morphology between the synthetic and real-life arteries can influence hemodynamics to an extent that cannot be easily extrapolated by GEM-GCN. Nevertheless, Fig. 11 suggests that GEM-GCN is able to qualitatively transfer the relation between local surface curvature and WSS.
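Mean cosine similarity between predicted and reference WSS fields can be computed as follows (a minimal sketch; it also illustrates why directionally correct but magnitude-biased predictions can score close to 1):

```python
import numpy as np

def mean_cosine_similarity(pred, gt, eps=1e-12):
    """Vertex-wise cosine similarity between predicted and ground-truth
    WSS vectors, averaged over the mesh."""
    num = (pred * gt).sum(-1)
    den = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1) + eps
    return (num / den).mean()

# vectors with correct direction but biased magnitude still score ~1,
# matching the qualitative-agreement / quantitative-error observation
rng = np.random.default_rng(0)
gt = rng.normal(size=(500, 3))
pred = 5.0 * gt                      # direction preserved, magnitude off
assert np.isclose(mean_cosine_similarity(pred, gt), 1.0)
```

This is why cosine similarity and NMAE are reported together: the former captures directional agreement, the latter magnitude error.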
In contrast to previous works on hemodynamics estimation using deep learning, our method does not require projection to a 1D or 2D domain (Itu et al., 2016; Su et al., 2020; Gharleghi et al., 2020, 2022; Ferdian et al., 2022), does not disregard connectivity and curvature of the artery wall (Liang et al., 2020; Li et al., 2021), and is independent of the embedding of the mesh in Euclidean space (Ferez et al., 2021). Instead, we operate natively on the geometric representation of the artery. We have demonstrated in Sec. 3 how to exploit rotational and translational symmetry in our problem with an end-to-end SE(3)-equivariant neural network. In contrast, PointNet++ (Sec. 3.7) operates in the 3D Euclidean coordinate space in which the geometric artery models are expressed and is thus implicitly conditioned on the embedding of the input mesh. The only way to correct for this in non-equivariant neural networks is data augmentation during training, which effectively adds redundancy. We have demonstrated in Sec. 4.2 that recovering the accuracy achieved on registered input meshes requires longer training and still yields lower accuracy; in fact, the initial accuracy may never be fully recovered. Thus, when dealing with symmetric problems, GEM-GCN removes the need for roto-translational data augmentation and can lead to improved accuracy and data efficiency. Related to our approach are vector neurons (Deng, Litany, Duan, Poulenard, Tagliasacchi and Guibas, 2021), an SE(3)-equivariant point-cloud network. Compared to our method, vector neurons are limited to a particular choice of SO(3)-equivariant linear operation, while GEM-GCN uses an optimal gauge-equivariant linear operation. Finally, MeshCNN (Hanocka, Hertz, Fish, Giryes, Fleishman and Cohen-Or, 2019) has been widely used for learning on meshes but defines its convolution to be invariant to rotation, sacrificing filter expressiveness compared to GEM-GCN.
Related works have modelled hemodynamics under consideration of the Navier-Stokes equations via physics-informed neural networks (PINNs) (Arzani et al., 2021; Raissi et al., 2020). That line of work and ours represent two approaches with different use cases: iterative instance-optimisation methods allow for the incorporation of physics constraints but are slow, while generalising feed-forward methods appear black-box but are fast.
Data-driven estimation of hemodynamic fields on the artery wall can be data-hungry (Arzani et al., 2022). To learn how geometry and hemodynamic fields relate, the neural network needs access to a sufficiently large and representative dataset, especially when factoring in patient-specific boundary conditions. In Sec. 4.1, we have quantified this data requirement for GEM-GCN. While for large, superficial arteries a personalised waveform can be obtained via phase-contrast MRI or Doppler ultrasound, in many practical scenarios, e.g. for smaller, deeper arteries, a personalised waveform is difficult to obtain non-invasively. To account for the latter, in this study we have reduced the degrees of freedom of our cardiovascular boundary conditions to a single value which we use to scale a template waveform. In theory, we could increase the dimensionality of the boundary condition given enough training data, e.g. by parametrising patient-specific waveforms by a polynomial or spline representation. Furthermore, if patient-specific, measured boundary conditions, such as waveforms and blood pressure, are available, our neural networks can be trained with a parametrisation of these. Finding the optimal balance between the complexity of the boundary condition and the generalisation capabilities of the neural network is specific to the application and data availability; studying this trade-off is an interesting direction for future research. Neural networks have previously been found to do well at interpolating, but poorly at extrapolating, training data (Arzani et al., 2022). However, we have demonstrated in Sec. 4.4 that our method can, to some extent, extrapolate to different coronary blood flow boundary conditions. Our quantitative results have all been obtained on synthetic artery shapes and we have only provided preliminary results on a patient-specific artery in this work. Nevertheless, we have found that our method mildly generalises to real-life patient data. In future work, we aim to perform further validation on patient data with neural networks trained on synthetic data, which we can easily synthesise.
Additionally, we have investigated an important limitation of our method: accurate predictions require similar mesh connectivity, i.e. our method is sensitive to remeshing of the input surface. We hypothesise that this limitation can be alleviated by data augmentation. We find that PointNet++ is more robust to remeshing, so it can be an option if heterogeneous mesh sizing is more important than SE(3) symmetry. Furthermore, we see this as an opportunity for discretisation-independent neural networks, e.g. (Sharp et al., 2022).
Our method is based on the observation that WSS and pressure, in the laminar regime, depend in good approximation on artery-wall shape and boundary conditions only. This imposes a limitation on our work: in the turbulent regime, this hypothesis may be violated and our method would thus not be applicable. Furthermore, as in recent work by Gharleghi et al. (2022), we let our neural network output hemodynamic fields over a complete cardiac cycle, discretised into fixed time steps, simultaneously rather than iterating from one time step to the next, since the cardiac cycle is periodic and clinically relevant in its entirety. This is limiting if temporally finer-resolved WSS estimation is desired. Extending our approach to volumetric meshes and time-stepping simulation in future work could enable us to incorporate physical relations based on fluid velocity as an additional inductive bias.
Even though we have collected a large dataset of hemodynamic simulations in arteries, we had to be selective about the types of simulations to run. We did not include pulsatile-flow, fixed-inflow simulations for the bifurcating arteries due to their extensive computational demand. In future work we could add them, but since we already have pulsatile-flow, varying-inflow simulations for the bifurcating arteries, fixed-inflow simulations would have limited additional value. In our simulations, we only varied boundary conditions by average coronary blood flow and kept all else equal, to be able to feasibly create a sufficiently large dataset. However, by design our method is not restricted to this simplified, parametrised boundary condition but can be conditioned on an arbitrary parametrisation. With access to a larger, more diverse dataset, we expect our method to be able to adapt to more complex boundary conditions, which is an interesting avenue for future research. In our simulations, we made several assumptions affecting the computed hemodynamics. Since our method mimics the relationship between input geometry and ground truth, as long as the data is consistent, we hypothesise that our method could be retrained to rapidly mimic the results of CFD simulations done by other practitioners. In future work, it would be worthwhile to investigate whether the complexity of the CFD simulation affects our method's estimation performance positively or negatively, especially w.r.t. very detailed meshes. It should be stated that there has been debate about the real-world, clinical utility of CFD for hemodynamics estimation (Kallmes, 2012; Xiang, Tutino, Snyder and Meng, 2014; Cebral and Meng, 2012; Strother and Jiang, 2012; Robertson and Watton, 2012). In practice, if in-vivo measurements of the desired ground truth are available, e.g. computed from 4D flow MRI, they could be used to train our neural network instead of simulated data. We plan to explore this approach in future work.
In conclusion, we have shown that our proposed method can be a feasible plugin replacement for CFD for the task of fast, personalised estimation of hemodynamic quantities in high resolution on the artery wall.
A. Proof of SE(3) equivariance (Prop. 1)
An SO(3) representation $(\mathbb{R}^d, \rho)$ is a vector space $\mathbb{R}^d$ with an SO(3) action $\rho : \mathrm{SO}(3) \to \mathbb{R}^{d \times d}$. Let $\mathrm{SO}(2) \subset \mathrm{SO}(3)$ be the subgroup that leaves the z-axis invariant. The function $\rho$ can be restricted to $\rho|_{\mathrm{SO}(2)} : \mathrm{SO}(2) \to \mathbb{R}^{d \times d}$ to give a representation of SO(2). Proposition 2. Choose input and output SO(3) features $(\mathbb{R}^{d_{\mathrm{in}}}, \rho_{\mathrm{in}})$ and $(\mathbb{R}^{d_{\mathrm{out}}}, \rho_{\mathrm{out}})$, which are also SE(3) representations that are invariant to translations. Choose a neural network consisting of GEM convolution, the pooling defined in Sec. 3.3, gauge-equivariant activation functions (de Haan et al., 2021), and parameters such that the input and output feature types are given by the restrictions $\rho_{\mathrm{in}}|_{\mathrm{SO}(2)}$ and $\rho_{\mathrm{out}}|_{\mathrm{SO}(2)}$. For a transformation $T \in \mathrm{SE}(3)$, denote by $TM$ the mesh where all the vertex positions are moved by the translation and rotation of $T$, and the normals and gauges are rotated by the rotation of $T$.
For a vertex $v \in M$, let $R_{g,v} \in \mathrm{SO}(3)$ be the rotation that maps the z-axis of (the ambient space) $\mathbb{R}^3$ to the normal vector of vertex $v$ and maps the x and y axes of $\mathbb{R}^3$ to the x and y axes on the tangent plane of vertex $v$, expressed in the choice of gauge. This is a basis transformation that maps from the global basis to a local basis at $v$, consistent with the choice of gauge on the tangent plane. Applying this transformation for all vertices of an SO(3) feature in $(M, \mathbb{R}^d)$ gives an orthogonal linear transformation $G(M)$, and the composition of the network with this transformation is equivariant. PROOF. The network only depends on the mesh through the intrinsic quantities of the parallel transport and the logarithmic map, which are equal on $M$ and $TM$ when expressed in the respective gauges. In particular, $T$ preserves distances and angles, so the neighbourhoods $\mathcal{N}(v)$ remain fixed under $T$. Thus, the network is invariant: $F_{TM} = F_M$. Furthermore, as the gauge rotates with the transformation, if $R$ is the rotational part of $T$, then $R_{g,Tv} = R_{g,v} R^{-1}$ and thus $G(TM) = G(M) \circ \rho(R^{-1})$. Filling this in yields the claimed equivariance. Remark 1. In the above, we chose the gauge of the transformed mesh $TM$ to equal the rotated gauge of the original mesh $M$. By construction, GEM-GCN is equivariant to the choice of gauge, so any argument that holds for this case extends to the general case as well.
PROOF. The input features defined in Sec. 3.4 can be expressed vertex-wise as a $3 \cdot 3 \cdot 3 = 27$-dimensional SO(3) representation, given by the elements of three $(3 \times 3)$ matrices built from the vectors $\vec{e}_{u \to v} \in \mathbb{R}^3$ pointing from $u$ to $v$ and the vertex normals $\vec{n}_v$. Combined, these form a feature $x \in (M, \mathbb{R}^{27})$ with an SO(3) representation that acts on each matrix by conjugation: $\rho(R)(A) = R A R^{\top}$. This feature is equivariant: $x_{TM} = \rho(R)\, x_M$. When this feature is used as an input to the network, the output is equivariant by Prop. 1: $F(x_{TM}) = \rho_{\mathrm{out}}(R)(F(x_M))$.

B. Bifurcating artery synthesis
The artery centerline of the parent vessel, PMV followed by DMV, is developed along seven control points and branches off into the child vessel SB at the fourth control point. The control points are evenly spaced 4 mm apart. We construct the bifurcation in the y-z plane of a generic 3D coordinate system and sample two angles from the atlas (Medrano-Gracia et al., 2016) which together fully describe the bifurcation:
• $\theta \sim \mathcal{N}(\mu_\theta, \sigma_\theta^2)$ with mean $\mu_\theta = 78.9°$ and standard deviation $\sigma_\theta = 23.1°$, the angle between the centerlines of the branches DMV and SB, and
• $\theta' \sim \mathcal{N}(\mu_{\theta'}, \sigma_{\theta'}^2)$ with mean $\mu_{\theta'} = 61.5°$ and standard deviation $\sigma_{\theta'} = 21.5°$, the angle between the bisecting line of the bifurcation and the centerline of SB.
The angle $\theta'$ describes how much the bifurcation is skewed towards the child branch (Fig. 2). We place the control points so that the angle between the line connecting the fourth and fifth point and the z-axis is $\theta'$ for SB and $\theta - \theta'$ for DMV. For a more realistic curvature, the angles between the lines connecting the other control points and the z-axis are linearly inter- and extrapolated, starting from zero at the origin. To add curvature in the x-direction, we sample a third angle $\phi$ from the atlas:
• $\phi \sim \mathcal{N}(\mu_\phi, \sigma_\phi^2)$ with mean $\mu_\phi = 9.5°$ and standard deviation $\sigma_\phi = 21.5°$, the angle at which the PMV centerline enters the bifurcation plane.
We place the control points so that the angle between the line connecting the third and fourth point and the z-axis is $\phi$, while linearly inter- and extrapolating the angles between the lines connecting the other control points and the z-axis, starting from zero. To avoid unrealistic curvature, none of these angles may exceed 90°. The same (constant) curvature extends to both DMV and SB. Since it is anatomically unlikely for the LCX to curve upwards, we restrict the SB to curve downwards. To arrive at the final centerline, the branching centerline path is smoothed using non-uniform rational basis splines (NURBS).
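The angle-sampling procedure can be sketched as rejection sampling from the atlas statistics (illustrative; for simplicity we apply the 90° restriction to all three draws here, whereas the text imposes it on the control-point angles):

```python
import numpy as np

def sample_bifurcation_angles(rng, max_angle=90.0):
    """Sample the three bifurcation angles (in degrees) from the atlas
    statistics given in the text, rejecting unrealistic draws."""
    stats = {"theta": (78.9, 23.1),        # angle between DMV and SB
             "theta_prime": (61.5, 21.5),  # skew of bifurcation towards SB
             "phi": (9.5, 21.5)}           # out-of-plane angle of PMV
    angles = {}
    for name, (mu, sigma) in stats.items():
        a = rng.normal(mu, sigma)
        while abs(a) >= max_angle:         # reject unrealistic curvature
            a = rng.normal(mu, sigma)
        angles[name] = a
    return angles

rng = np.random.default_rng(0)
for _ in range(100):
    angles = sample_bifurcation_angles(rng)
    assert all(abs(v) < 90.0 for v in angles.values())
```

Rejection sampling keeps the atlas-derived means and spreads while enforcing the anatomical plausibility constraint.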
We model the vessel lumen with ellipse contours that are arbitrarily oriented in the plane normal to the centerline-curve tangent. The lumen radii are drawn from the coronary atlas (Medrano-Gracia et al., 2016) and coupled by the bifurcation law with the closest fit, value $\epsilon$, reported in (Medrano-Gracia et al., 2017). Accordingly, we choose values so that $\epsilon \leq 0.165$, with the constraints that
• neither $r_{\mathrm{PMV}} < r_{\mathrm{DMV}}$ nor $r_{\mathrm{DMV}} < r_{\mathrm{SB}}$, based on the intuitions that the parent vessel should be larger than the child vessel and should not grow after a bifurcation, and
• $r_{\mathrm{SB}}/r_{\mathrm{DMV}} < 0.4$, according to empirical evidence from the atlas.
We observe that vessel diameter decreases approximately linearly with vessel length in the relevant interval, and thus linearly decrease it towards the end to 87.5 % of its initial size. To give the lumen a more realistic, non-smooth texture, we draw the contour ellipses' semi-minor and semi-major axes from a uniform noise distribution $\mathcal{U}(\mu - \delta, \mu + \delta)$, where $\mu$ is the nominal radius and $\delta = 5\,\%$.
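The lumen-texture noise can be sketched as follows (the nominal radius below is a made-up example value):

```python
import numpy as np

def noisy_ellipse_axes(nominal_radius, rng, delta=0.05):
    """Draw semi-minor and semi-major axes from a uniform distribution
    centred on the nominal radius, with +/- 5 % noise, to give the
    lumen a non-smooth texture."""
    lo = nominal_radius * (1 - delta)
    hi = nominal_radius * (1 + delta)
    return rng.uniform(lo, hi, size=2)

rng = np.random.default_rng(0)
axes = noisy_ellipse_axes(1.6, rng)   # hypothetical nominal radius in mm
assert np.all(axes >= 1.6 * 0.95) and np.all(axes <= 1.6 * 1.05)
```

Drawing both semi-axes independently per contour yields slightly elliptical, jittered cross-sections along the vessel.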

Figure 1: Overview. We propose a gauge-equivariant mesh-graph convolutional network (GEM-GCN) to estimate discrete hemodynamic fields mapped to the vertices of a surface mesh of the artery wall. The GCN is powered by anisotropic (spatially oriented) gauge-equivariant mesh (GEM) convolution with high filter expressivity. The combination of GEM convolution with appropriate input features leads to an end-to-end SE(3)-equivariant neural network.

Figure 2: Artery datasets. We develop and evaluate our method using two distinct classes of geometric models: synthetic single arteries (left) and bifurcating arteries modelled after the left main bifurcation of the coronary artery tree (right). The single arteries contain flow extensions to let the flow fully develop from a uniform inflow boundary condition. The bifurcating arteries are simulated with parabolic inflow and thus without flow extensions. They consist of the proximal main vessel (PMV) that branches into the distal main vessel (DMV) and the side branch (SB). Each bifurcation can be described by the angles $\theta$ and $\theta'$.

Figure 3: Pulsatile-flow waveform adapted from (Beier et al., 2016). We linearly scale this waveform for the simulations with varying (average) coronary blood flow boundary condition.

Figure 4: Network architecture. Our mesh-based GCN outputs time-discretised, pulsatile hemodynamic fields mapped to the mesh vertices, subject to a (scalar) coronary blood flow parameter, given an input consisting of the artery-wall mesh and the vertex-wise geodesic distance to the artery inlet. A large receptive field is efficiently obtained using a three-level pooling scheme. To enable deep networks, we employ residual blocks consisting of two convolution modules and a skip connection. The per-vertex colour of the signal before and after the residual blocks corresponds to the scalar activation mapped to the vertices.

Figure 5: Filter comparison. Isotropic, attention-scaled, and GEM convolution use convolution kernels, in contrast to PointNet++ message passing. While attention-scaled convolution and PointNet++ both learn to distinguish neighbouring vertices through an attention mechanism, GEM convolution is equipped with a notion of direction.

Figure 6: Steady-flow WSS estimation of GEM-GCN on arteries of the held-out test splits of the single (left) and bifurcating-artery (right) datasets.

Figure 7: Mean approximation error ε_mean over the test split for different training-set sizes on the steady-flow single-artery dataset. GEM-GCN weights are updated for ca. 10,000 iterations, PointNet++ weights for ca. 80,000 iterations.

Figure 8: Pulsatile single-artery WSS prediction error across the test split over time. NMAE is normalised by the maximum WSS magnitude over all samples in the test set at each point in time (indicated in yellow), which follows a pulsatile waveform.

Figure 9: Conditional, pulsatile single-artery WSS prediction accuracy, subject to a changing coronary blood flow boundary condition. The scatter plot (top) shows NMAE over the boundary-condition value. The Bland-Altman plot (bottom) shows the difference between neural-network prediction and ground-truth reference over their average, collapsed into a scalar value per artery by taking the mean over xyz-components, time, and mesh vertices. The mean of the difference is denoted by $\mu$ and the standard deviation by $\sigma$. GEM-GCN is trained on boundary conditions in [1.87, 4.36] ml/s; beyond this range, neural-network predictions are extrapolated.

Figure 10: Sensitivity to remeshing.GEM-GCN (left column) and PointNet++ (right column) trained on the original CFD mesh and evaluated on a differently remeshed artery wall Ω.


Figure 11: WSS prediction for patient-specific left main coronary bifurcation.Ground truth (left) versus GEM-GCN prediction (right).To produce these results, GEM-GCN is trained purely on the synthetic (steady-flow) bifurcating-artery dataset.Note that the colourbars are in different scales to facilitate qualitative comparison.The colour and size of the WSS vectors scale with magnitude.

Table 2
Dataset overview.We run CFD simulations for synthetic single and bifurcating arteries for steady flow with fixed boundary condition, pulsatile flow with fixed boundary condition, and pulsatile flow with variable boundary conditions.