Performance prediction of finite-difference solvers for different computer architectures

The life-cycle of a partial differential equation (PDE) solver is often characterized by three development phases: the development of a stable numerical discretization, the development of a correct (verified) implementation, and the optimization of the implementation for different computer architectures. Often it is only after significant time and effort has been invested that the performance bottlenecks of a PDE solver are fully understood, and the precise details vary between computer architectures. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method will perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer of what kinds of optimisations are beneficial and indicates when the maximum expected performance has been reached and optimisation work should stop. We show how the discretization of a wave equation can be theoretically studied to understand the performance limitations of the method on modern computer architectures. We focus on the roofline model, now broadly used in the high-performance computing community, which considers the achievable performance in terms of the peak memory bandwidth and peak floating point performance of a computer with respect to algorithmic choices. A first-principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented. With this information available at the time of algorithm design, the expected performance on target computer systems can be used as a driver for algorithm design.


Introduction
The increasing complexity of modern computer architectures means that developers have to work much harder at implementing and optimising scientific modelling codes for the software performance to keep pace with the increase in performance of the hardware. This trend is driving a further specialisation in skills, such that the geophysicist, numerical analyst and software developer are increasingly unlikely to be the same person. One problem this creates is that the numerical analyst makes algorithmic choices at the mathematical level that define the scope of possible software implementations and optimisations available to the software developer. Additionally, even for an expert software developer it can be difficult to know which kinds of optimisations should be considered, or even when an implementation is "good enough" and optimisation work should stop. It is common for performance results to be presented relative to a previously existing implementation, but such a relative measure of performance is wholly inadequate, as the reference implementation might well be truly terrible. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method will perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer of what kinds of optimisations are beneficial and indicates when the maximum expected performance has been reached and optimisation work should stop.
Performance models such as the roofline model by Williams et al. (2009) help evaluate the performance of a code in terms of hardware utilization (e.g. percentage of peak floating point performance) instead of a relative speed-up. Performance models that establish algorithmic optimality and provide a measure of hardware utilization are increasingly used to determine effective algorithmic changes that reliably increase performance across a wide variety of algorithms (Asanovic et al., 2006). However, for many scientific codes used in practice, wholesale algorithmic changes, such as changing the spatial discretization or the governing equations themselves, are often highly invasive and require a costly software re-write. Establishing a detailed and predictive performance model for the various algorithmic choices is therefore imperative when designing the next generation of industry-scale codes.
We establish a theoretical performance model for explicit wave-equation solvers as used in full waveform inversion (FWI) and reverse time migration (RTM). We focus on a set of widely used equations and establish lower bounds on the degree of the spatial discretization required to achieve optimal hardware utilization on a set of well known modern computer architectures. Our theoretical prediction of performance limitations may then be used to inform the algorithmic choices of future implementations, and provides an absolute measure of realizable performance against which implementations may be compared to demonstrate their computational efficiency.
For the purpose of this paper we only consider explicit time stepping algorithms based on a second order time discretization. The extension to higher order time stepping schemes is briefly discussed at the end. The reason we only consider explicit time stepping is that it does not involve any matrix inversion, but only scalar products and additions, making the theoretical computation of the performance bounds possible. The performance of other classical algorithms, such as matrix-vector products or the FFT, as described by Patterson and Hennessy (2007), is included for illustrative purposes.

Introduction to stencil computation
A stencil algorithm is designed to update or compute the value of a field at one spatial location according to the neighbouring values. In the context of a wave-equation solver, the stencil is defined by the support (grid locations) and the coefficients of the finite-difference scheme; the Laplacian is an illustrative example. The implementation of a time stepping algorithm for a wavefield u, the solution of the acoustic wave-equation (Eq. (A.1)), follows straightforwardly from the representation of the stencil. We do not include the absorbing boundary conditions (ABC), as depending on the choice of implementation they will either be part of the stencil or be decoupled and treated separately.
The final step of each iteration adds the source: u(t, ., ., .) = u(t, ., ., .) + q. In Algorithm 1, (X, Y, Z) is the set of all grid positions in the computational domain, (x, y, z) are the local indices, (x_i, y_i, z_i) are the indices of the stencil positions for the centre position (x, y, z), n_t is the number of time steps, and q is the source term decoupled from the stencil. In the following we concentrate on the stencil itself, as the loops in space and time do not impact the theoretical performance model we introduce. The roofline model is based solely on the amount of input/output (blue/red in the stencils) and arithmetic operations (number of sums and multiplications) required to update one grid point, and we will prove that the optimal reference performance is independent of the size of the domain (number of grid points) and of the number of time steps.
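A minimal sketch of such a time-stepping loop may help fix ideas. This is not the paper's implementation: the function names, the choice of a second-order Laplacian stencil, and the scalar `vel2_dt2` (velocity squared times the squared time step) are illustrative assumptions, and absorbing boundaries are omitted as in the text.

```python
import numpy as np

def acoustic_step(u_prev, u_curr, vel2_dt2, h):
    """One explicit time step of the 3D acoustic wave-equation using a
    second-order (k=3) Laplacian stencil; boundary points stay zero (no ABC)."""
    c = u_curr
    lap = (c[2:, 1:-1, 1:-1] + c[:-2, 1:-1, 1:-1] +
           c[1:-1, 2:, 1:-1] + c[1:-1, :-2, 1:-1] +
           c[1:-1, 1:-1, 2:] + c[1:-1, 1:-1, :-2] -
           6.0 * c[1:-1, 1:-1, 1:-1]) / h**2
    u_next = np.zeros_like(u_curr)
    # Second-order-in-time update: u(t+1) = 2u(t) - u(t-1) + v^2 dt^2 Lap(u)
    u_next[1:-1, 1:-1, 1:-1] = (2.0 * c[1:-1, 1:-1, 1:-1]
                                - u_prev[1:-1, 1:-1, 1:-1]
                                + vel2_dt2 * lap)
    return u_next

def run(nt, shape, vel2_dt2, h, src_pos, src):
    """Outer loops over time; the source is added after the stencil update,
    as in Algorithm 1."""
    u_prev = np.zeros(shape, dtype=np.float32)
    u_curr = np.zeros(shape, dtype=np.float32)
    for t in range(nt):
        u_next = acoustic_step(u_prev, u_curr, vel2_dt2, h)
        u_next[src_pos] += src(t)
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

Note that the per-point work is entirely contained in the stencil expression; the space and time loops only repeat it, which is why the analysis below can ignore them.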
Notes on parallelization: Using a parallel framework to improve an existing code is one of the most widely used tools in the current stencil computation community. It is, however, crucial to understand that this is not an algorithmic improvement from the operational intensity point of view. We will prove that the algorithmic efficiency of a stencil code is independent of the size of the model, and it is therefore not impacted by a domain-decomposition-like parallelization via OpenMP or MPI. The results shown in the following are purely dedicated to helping the design of a code from an algorithmic point of view, while parallelization only impacts the performance of the implemented code by improving the hardware usage.

Roofline performance analysis
The roofline model is a performance analysis framework designed to evaluate the floating point performance of an algorithm by relating it to memory bandwidth usage. It has proved to be very popular because it provides a readily comprehensible performance metric to interpret runtime performance of a particular implementation according to the achievable optimal hardware utilization for a given architecture (Williams and Patterson, 2008). This model has been applied to real-life codes in the past to analyze and report performance, including oceanic climate models (Epicoco et al., 2014), combustion modelling (Chan et al., 2013) and even seismic imaging (Andreolli et al., 2014). It has also been used to evaluate the effectiveness of implementation-time optimisations like autotuning, or cache-blocking on specific hardware platforms like vector processors (Sato et al., 2009) and GPUs (Kim et al., 2011). Tools are available to plot the machine-specific parameters of the roofline model automatically (Lo et al., 2014). When more information about the target hardware is available, it is possible to refine the roofline model into the cache-aware roofline model, which gives more accurate predictions of performance (Ilic et al., 2014). The analysis presented here can be extended to the cache-aware roofline model, but in order to keep it general, we restrict it to the general roofline model.
The roofline model has also been used to compare different types of basic numerical operations to predict their performance and feasibility on future systems (Barba and Yokota, 2013), quite similar to this paper. However, in this paper, instead of comparing stencil computation to other numerical methods, we carry out a similar comparison between numerical implementations using different stencil sizes. This provides an upper-bound of performance on any hardware platform at a purely conceptual stage, long before the implementation of the algorithm.
Other theoretical models to predict the upper-bound performance of generic code on hypothetical hardware have been built (Lai and Seznec, 2013; Wahib and Maruyama, 2014; Hofmann et al., 2015a; Duplyakin et al., 2016) but, being too broad in scope, cannot be used to drive algorithmic choices such as choosing the right discretization order. Some of these models have also been applied to stencil codes (Stengel et al., 2015); however, the analysis was of a specific implementation and could not be applied in general. There are many tools to perform performance prediction at the code level (Hammer et al., 2015; Narayanan et al., 2010; Unat et al., 2015; Rahman et al., 2011). However, any tool that predicts performance based on a code is analyzing the implementation and not the algorithm in general. Although performance modelling is a deep and mature field, most work is restricted to modelling the performance of specific implementations in code. Hofmann et al. (2015b) make a comparison quite similar to the one we present here, where two algorithmic choices for the same problem are compared with a performance model.
In this section we demonstrate how one creates a roofline model for a given computer architecture and derives the operational intensity for a given numerical algorithm. This establishes the theoretical upper bound for the performance of a specific algorithm on that architecture. A general roofline performance analysis consists of three steps:
• The memory bandwidth, in bytes per second, and the peak number of floating point operations per second (FLOPS) of the computer architecture are established either from the manufacturer's specification or through measurement using standard benchmarks.
• The operational intensity (OI) of the algorithm is established by calculating the ratio of the number of floating point operations performed to memory traffic, FLOPs per byte. This number characterizes the algorithmic choices that affect performance on a computer system. In combination with the measured memory bandwidth and peak performance of a computer architecture, this provides a reliable estimate of the maximum achievable performance.
• The solver is benchmarked in order to establish the achieved performance. A roofline plot can be created to illustrate how the achieved performance compares to the maximum performance predicted by the roofline for the algorithm's OI. This establishes a measure of optimality of the implementation, or alternatively the maximum possible gain from further optimization of the software.

Establishing the roofline
The roofline model characterises a computer architecture using two parameters: the maximum memory bandwidth, B_peak, in units of bytes/s; and the peak FLOPS achievable by the hardware, F_peak. The maximally achievable performance F_ac is modelled as:
F_ac = min(F_peak, I × B_peak), (1)
where I is the OI. As illustrated in Fig. 3, this limitation defines two distinct regions:
• Memory-bound: The region left of the ridge point constitutes algorithms that are limited by the amount of data coming into the CPU from memory. Memory-bound codes typically prioritise caching optimisations, such as data reordering and cache blocking.
• Compute-bound: The region right of the ridge point contains algorithms that are limited by the maximum performance of the arithmetic units in the CPU and thus defines the maximum achievable performance of the given architecture. Compute-bound codes typically prioritise vectorization to increase throughput.
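The roofline bound F_ac = min(F_peak, I × B_peak) and the two regimes above can be sketched as a pair of small helpers; the function names are ours, and the units are GFLOPS and GBytes/s.

```python
def attainable_gflops(oi, f_peak, b_peak):
    """Roofline bound: F_ac = min(F_peak, I * B_peak).

    oi     -- operational intensity I in FLOPs/byte
    f_peak -- peak floating point performance in GFLOPS
    b_peak -- peak memory bandwidth in GBytes/s
    """
    return min(f_peak, oi * b_peak)

def is_compute_bound(oi, f_peak, b_peak):
    """An algorithm is compute-bound when its OI reaches the ridge point
    I_min = F_peak / B_peak; below it, the bandwidth term dominates."""
    return oi >= f_peak / b_peak
```

For example, on a hypothetical machine with F_peak = 1000 GFLOPS and B_peak = 100 GBytes/s, the ridge point sits at 10 FLOPs/byte: an algorithm with I = 2 is capped at 200 GFLOPS by bandwidth, while one with I = 50 is capped at the full 1000 GFLOPS.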
It is worth noting that changing from single to double-precision arithmetic halves the OI, because the volume of memory that must be transferred between the main memory and the CPU is doubled. The peak performance is impacted as well, since the volume of data and the number of concurrently used floating point units (FPU) change. As is common in industry, we assume single precision arithmetic for the examples presented here, but it is straightforward to extend the analysis to double precision. Andreolli et al. (2015) illustrate the derivation of the theoretical performance for a system consisting of two Intel Xeon E5-2697 v2 (2S-E5) with 12 cores per CPU, each running at 2.7 GHz without turbo mode. Since these processors support 256-bit SIMD instructions, they can process eight single-precision floating point operations per clock cycle. Further, taking into account the use of Fused Multiply-Add (FMA) operations (two per cycle), this yields F_peak = 2 × 12 × 2.7 × 8 × 2 = 1036.8 GFLOPS. Clearly, this assumes full utilization of two parallel pipelines for Add and Multiply operations. A similar estimate for the peak memory bandwidth B_peak can be made from the memory frequency (1866 MHz), the number of channels (4), the number of bytes per channel (8) and the number of CPUs (2), to give B_peak = 1866 × 10⁻³ × 4 × 8 × 2 ≃ 119 GBytes/s. It is important to note here that there is an instruction execution overhead that the above calculations do not take into account, and therefore these theoretical peak numbers are not achievable (≃80% is achievable in practice (Andreolli et al., 2015)). For this reason, two benchmark algorithms, STREAM TRIAD for memory bandwidth (McCalpin, 1991-2007) and LINPACK for floating point performance (Dongarra, 1988), are often used to measure the practical limits of a particular hardware platform. These algorithms are known to achieve a very high percentage of the peak values and are thus indicative of practical hardware limitations.
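The back-of-the-envelope derivation for this dual-socket Xeon can be reproduced in a few lines; the variable names are ours, while the hardware figures are those quoted from Andreolli et al. (2015).

```python
# Theoretical peaks for the dual-socket Intel Xeon E5-2697 v2 described above.
cpus, cores, clock_ghz = 2, 12, 2.7
sp_per_cycle = 8           # 256-bit SIMD: eight single-precision lanes
fma_per_cycle = 2          # fused multiply-add: two FLOPs per lane per cycle
f_peak = cpus * cores * clock_ghz * sp_per_cycle * fma_per_cycle   # GFLOPS

mem_mhz, channels, bytes_per_channel = 1866, 4, 8
b_peak = mem_mhz * 1e-3 * channels * bytes_per_channel * cpus      # GBytes/s

achievable_fraction = 0.80  # roughly 80% of theoretical peak is reachable
```

Running this gives f_peak = 1036.8 GFLOPS and b_peak ≃ 119 GBytes/s, matching the figures in the text; in practice one would replace both with STREAM TRIAD and LINPACK measurements.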

Performance model
The key measure for using the roofline analysis as a guiding tool for algorithmic design decisions and implementation optimization is the operational intensity, I, as it relates the number of FLOPs to the number of bytes moved to and from RAM. I clearly does not capture many important details about the implementation, such as numerical accuracy or time to solution. It is therefore imperative to look at I in combination with these measures when making algorithmic choices.
Here we analyze the algorithmic bounds of a set of finite-difference discretizations of the wave-equation using different stencils and spatial orders. We therefore define the algorithmic operational intensity I_alg in terms of the total number of FLOPs required to compute a solution, and we assume that our hypothetical system has a cache of infinite size and zero latency, inducing zero redundancy in memory traffic (Williams and Patterson, 2008). This acts as a theoretical upper bound for the performance of any conceivable implementation.
We furthermore limit our theoretical investigation to analyzing a single time step as an indicator of overall achievable performance. This assumption allows us to generalize the total number of bytes in terms of the number of spatially dependent variables (e.g. wavefields, physical properties) used in the discretized equation as B_global = 4N(l + 2s), where l is the number of variables whose value is loaded, s is the number of variables whose value is stored, N is the number of grid points and 4 is the number of bytes per single-precision floating point value. The 2s term arises from the fact that most computer architectures will load a cache line before it gets overwritten completely. However, some computer architectures, such as the Intel Xeon Phi, have support for streaming stores, so that values can be written directly to memory without first loading the associated cache line, in which case the expression for the total data movement becomes B_global = 4N(l + s). It is important to note here that limiting the analysis to a single time step limits the scope of the infinite caching assumption above.
Since we have assumed a constant grid size N across all spatially dependent variables, we can now parametrize the number of FLOPs computed per time step as F_total = N × flops(k), where flops(k) is a function that defines the number of FLOPs performed to update one grid point in terms of the stencil size k used to discretize the spatial derivatives. Additional terms can be added corresponding to source terms and boundary conditions, but these are in general a small proportion of the time step and are neglected here for simplicity. This gives us the following expression for the OI as a function of k:
I_alg(k) = N × flops(k) / B_global = flops(k) / (4(l + 2s)). (2)
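The byte and FLOP accounting above can be sketched as follows; the helper names are ours. Note that N appears in both the numerator and the denominator and cancels, so the OI is independent of the grid size, as argued earlier.

```python
def bytes_per_point(loads, stores, streaming_stores=False, word=4):
    """Memory traffic per grid point in single precision: 4*(l + 2*s) bytes,
    or 4*(l + s) when streaming (non-temporal) stores avoid the
    load-before-overwrite of a cache line."""
    s_factor = 1 if streaming_stores else 2
    return word * (loads + s_factor * stores)

def operational_intensity(flops_per_point, loads, stores, **kw):
    """I_alg(k) = flops(k) / (4*(l + 2*s)): the grid size N cancels out."""
    return flops_per_point / bytes_per_point(loads, stores, **kw)
```

For instance, with 3 loaded fields, 1 stored field and a per-point cost of 16 FLOPs, the OI is 16 / 20 = 0.8 FLOPs/byte, rising to 1.0 with streaming stores.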

Operational intensity for finite-differences
We derive a general model for the operational intensity of wave-equation PDE solvers with finite-difference discretizations using explicit time stepping, and apply it to three different wave-equation formulations commonly used in the oil and gas exploration community: an acoustic anisotropic wave-equation; vertical transverse isotropic (VTI); and tilted transversely isotropic (TTI) (Liu et al., 2009). The theoretical operational intensity for the 3D discretized equations is calculated as a function of the finite-difference stencil size k, which allows us to make predictions about the minimum discretization order required for each algorithm to reach the compute-bound regime for a target computer architecture. For completeness we describe the equations in Appendix A.

Stencil operators
As a baseline for the finite-difference discretization, we consider the use of a 1D symmetric stencil of size k, which uses k values of the discretized variable to compute any spatial derivative, enforcing a fixed support for all derivatives. Other choices of discretization are possible, such as choosing the stencil for the first derivative and applying it iteratively to obtain higher order derivatives. Our analysis would still be valid, but would require a rewrite of the following atomic operation count. The number of FLOPs used for the three types of derivatives involved in our equations is counted per grid point as a function of k, where in 3D x_i for i = 1, 2, 3 corresponds to the three dimensions x, y, z and u is the discretized field. Computing the total wavefield memory volume B_global for each equation we have 4 × 4N bytes for acoustic (load the velocity and the two previous time steps, and write the new time step), 9 × 4N bytes for VTI (load the velocity, two anisotropy parameters and the two previous time steps of two wavefields, and write the new time step of the two wavefields) and 15 × 4N bytes for TTI (VTI plus six precomputed cos/sin of the tilt and dip angles). Eq. (2) allows us to predict the increase of the operational intensity in terms of k by replacing B_global by its value. The resulting I_alg(k) for the three wave-equations is plotted as a function of k in Fig. 10. Using the derived formula for the algorithmic operational intensity in terms of stencil size, we can now analyze the optimal performance for each equation with respect to a specific computer architecture. We use the theoretical and measured hardware limitations reported by Andreolli et al. (2015) to demonstrate how the main algorithmic limitation shifts from being bandwidth-bound at low k to compute-bound at high k, on a dual-socket Intel Xeon in Figs. 4-6 and an Intel Xeon Phi in Figs. 7-9.
It is of particular interest to note from Fig. 4 that a 24th order stencil (k=25) provides just enough arithmetic load for the 3D acoustic equation solver to become compute-bound, while k=25 falls just short of the compute-bound region for the VTI algorithm. On the other hand, a 6th order stencil (k=7) is enough for the TTI algorithm to become compute-bound, due to its quadratic slope with respect to k (Fig. 10) instead of a linear one.
At this point we can define I_min, the minimum OI required for an algorithm to become compute-bound on a particular architecture, as the x-axis coordinate of the ridge point in Figs. 4-6 and 7-9. Note that the x-axis position of the ridge point changes between the two architectures. This difference in compute-bound limit shows that a different spatial discretization order should be used on the two architectures to optimize hardware usage. As reported by Andreolli et al. (2015), the I_min derived from achievable peak rates is 9.3 FLOPs/byte for the Intel Xeon and 10.89 FLOPs/byte for the Intel Xeon Phi. This entails that while the acoustic anisotropic wave-equation and VTI are memory-bound for discretizations up to 24th order, the TTI equation reaches the compute-bound region with even a 6th order discretization.
Fig. 6. Increase in algorithmic OI with increasing stencil sizes on a dual-socket Intel Xeon E5-2697 v2 (Andreolli et al., 2015) for a 3D TTI kernel. The 6th order stencil is already compute-bound.
From the analytical expressions derived, we can now generalize the derivation of minimum OI values by plotting the simplified expressions for I_alg(k) against known hardware OI limitations, as shown in Fig. 10. We obtain a theoretical prediction of the minimum spatial order required for each algorithm to provide enough arithmetic load for implementations to become compute-bound. Most importantly, Fig. 10 shows that the TTI wave-equation has a significantly steeper slope of I_alg(k), which indicates that it will saturate a given hardware at a much smaller spatial discretization order than the acoustic or VTI algorithms.
Moreover, assuming a spatial discretization order of k − 1, we can predict that on the Intel Xeon CPU we require a minimum order of 24 for the acoustic wave solver, 26 for VTI and 6 for TTI. On the Nvidia GPU, with a slightly lower hardware I_min, we require a minimum order of 22 for the acoustic wave solver, 24 for VTI and 6 for TTI, while even larger stencils are required for the Intel Xeon Phi accelerator: a minimum order of 28 for the acoustic wave solver, 30 for VTI and 6 for TTI. This derivation demonstrates that overall very large stencils are required for the acoustic anisotropic and VTI solvers to fully utilize modern HPC hardware, and that even TTI requires at least order 6 to computationally saturate HPC architectures with a very high arithmetic throughput, like the Intel Xeon Phi.
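A small search routine illustrates how such minimum orders can be derived from the OI model. The linear FLOP model `acoustic_like` below is purely hypothetical, since the actual flops(k) coefficients depend on the equation and stencil choices; only the search structure reflects the derivation in the text.

```python
def min_compute_bound_k(flops_of_k, bytes_per_point, i_min, k_max=64):
    """Smallest symmetric stencil size k (odd) whose OI reaches the
    machine's ridge point I_min; returns None if no k up to k_max does.

    flops_of_k      -- per-point FLOP count flops(k) for the chosen equation
    bytes_per_point -- per-point memory traffic in bytes, e.g. 4*(l + 2*s)
    """
    for k in range(3, k_max + 1, 2):
        if flops_of_k(k) / bytes_per_point >= i_min:
            return k
    return None

# Illustration only: a *hypothetical* linear FLOP model flops(k) = 9k + 2.
acoustic_like = lambda k: 9 * k + 2
```

With 16 bytes of traffic per point and the Xeon's I_min of 9.3 FLOPs/byte, this hypothetical model would need k = 17, i.e. a 16th order stencil; the real equation-specific counts shift this threshold as described above.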

Example: MADAGASCAR modelling kernel
We demonstrate our proposed performance model and its flexibility by applying it to a broadly used and benchmarked modelling kernel contained in Madagascar (Sergey et al., 2013). This illustrates the ease of extending our method to a different wave-equation and, by extension, to any PDE solver. The code implements the 3D anisotropic elastic wave-equation and is described in Robin and Weiss (2013). We perform our analysis based on the spatial order, hardware and runtime described in Robin and Weiss (2013). The governing equation considered, written in Einstein notation (repeated indices represent summation), is:
ρ ∂²u_i/∂t² = ∂σ_ij/∂x_j + F_i, σ_ij = c_ijkl ϵ_kl, ϵ_kl = (∂u_k/∂x_l + ∂u_l/∂x_k)/2, u(., 0) = 0, ∂u/∂t(., 0) = 0, (3)
where ρ is the density, u_i is the ith component of the three-dimensional wavefield displacement (i = 1, 2, 3 for x, y, z), F is the source term, ϵ is the strain tensor, σ is the stress tensor and c is the stiffness tensor. The equation is discretized with an 8th order star stencil for the first order derivatives and a second order scheme in time, and solves for all three components of u. From this equation, and knowing the finite-difference scheme used, we can compute the minimum required bandwidth and operational intensity.
Fig. 7. Increase in algorithmic OI with increasing stencil sizes on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D acoustic kernel. Unlike on the Xeon E5-2697, the 30th order stencil is the smallest one to be compute-bound (vs. 24th order).
Fig. 8. Increase in algorithmic OI with increasing stencil sizes on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D VTI kernel. 32nd is the minimum compute-bound order; it is not equivalent to the acoustic kernel on this architecture.
Fig. 9. Increase in algorithmic OI with increasing stencil sizes on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D TTI kernel. The 6th order stencil is already compute-bound, similarly to the Xeon E5-2697.
Fig. 10. Increase in OI with stencil size k and machine-specific minimum OI values for all three hardware architectures considered in this paper.
We need to solve this equation for all three components of the displacement u at once, as the equations in ϵ and u are coupled. For a global estimate of the overall memory traffic, we need to account for loading and storing 2 × 3N values of the displacement vector and loading N values of ρ. In case the stiffness tensor is constant in space, the contribution of c_ijkl is 64 values independently of N, which yields an overall data volume of B_global = 4N(6 + 1) + 64 ≃ 28N bytes. In the realistic physical configuration of a spatially varying stiffness tensor, we would instead load 64N values of c_ijkl, leaving us with a data volume of B_global = 4N(6 + 1 + 64) = 284N bytes. Finally, if the symmetries of the stiffness tensor are taken into account, the number of stiffness values to load reduces to 21N, leading to a data volume of B_global = 4N(6 + 1 + 21) = 112N bytes. The number of valuable FLOPs performed to update one grid point can be estimated as:
• 9 first derivatives ∂u_k/∂x_l (for all k, l = 1, 2, 3): 9 × (8 mult + 7 add) = 135 FLOPs;
• 9 strain components ϵ_kl: 9 adds and 9 mults = 18 FLOPs;
• 9 stress components σ_ij = c_ijkl ϵ_kl: 9 × 9 mults and 9 × 8 adds = 153 FLOPs;
• 9 first derivatives ∂σ_ij/∂x_j: 9 × (8 mult + 7 add) = 135 FLOPs.
The summation of all four contributions gives a total of 441 operations per grid point, and dividing by the memory traffic per grid point we obtain the operational intensity I_stiff for variable stiffness and I_const for constant stiffness:
I_stiff = 441/284 ≃ 1.55 FLOPs/byte, I_const = 441/28 ≃ 15.75 FLOPs/byte.
Using the OI values derived above, we can now quantify the results presented by Robin and Weiss (2013) by interpreting their runtime results with respect to our performance measure. The achieved GFLOPS have been obtained on the basis of 1000 time steps with 8th order spatial finite-differences and 2nd order temporal finite-differences. We interpret Fig. 11a) of Robin and Weiss (2013) to give a run time of approximately W = 53 s for a domain size of N = 225³. With these parameters the achieved performance is 441 × N × N_t / W ≃ 94.8 GFLOPS, where N_t is the number of time steps and W is the run time. Fig. 11 shows the calculated performance in relation to our predicted algorithmic bounds I_stiff and I_const. The use of a constant stiffness tensor puts the OI of the considered equation in the compute-bound region for the benchmarked GPU architecture (NVIDIA GTX480). Assuming a spatially varying stiffness tensor, we can calculate an achieved hardware utilization of 40.5% based on the reported results, assuming an achievable peak memory bandwidth of 150.7 GByte/s, as reported by Konstantinidis and Cotronis (2015), and a maximum achievable performance of 150.7 GByte/s × 1.5528 FLOPs/byte = 234 GFLOPS. Assuming 80% (Andreolli et al., 2015) of peak performance is achievable, the roofline model suggests that there is still potential to double the performance of the code through software optimization. It is not possible to draw such a conclusion from traditional performance measures such as timings or scaling plots. This highlights the importance of a reliable performance model that can provide an absolute measure of performance in terms of the algorithm and the computer architecture.
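The quoted OI, achieved-performance and utilization figures can be reproduced directly from the numbers in the text; the variable names are ours.

```python
# Reproducing the figures quoted above for the Madagascar elastic kernel.
flops_per_point = 441
oi_stiff = flops_per_point / 284        # spatially varying stiffness tensor
oi_const = flops_per_point / 28         # constant stiffness tensor

n = 225 ** 3                            # grid points in the benchmark domain
nt = 1000                               # time steps
wall = 53.0                             # reported run time in seconds
achieved_gflops = flops_per_point * n * nt / wall / 1e9

b_peak = 150.7                          # GByte/s on the GTX480
attainable = b_peak * oi_stiff          # memory-bound ceiling, ~234 GFLOPS
utilization = achieved_gflops / attainable
```

Running this gives oi_stiff ≃ 1.55, an achieved rate of ≃ 94.8 GFLOPS against a 234 GFLOPS ceiling, i.e. ≃ 40.5% hardware utilization, matching the analysis above.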

Cost-benefit analysis
So far we have discussed the design of finite-difference algorithms purely from a performance point of view, without regard to numerical accuracy and cost-to-solution. Now we discuss the impact of the discretization order on the achieved accuracy of the solution and how that, in turn, affects the wall clock time required for computation. To do so, we look at the numerical requirements of a time-stepping algorithm for the wave-equation. More specifically, we concentrate on two properties, namely dispersion and stability, in the acoustic case. This analysis is extendable to more advanced wave-equations such as VTI and TTI with additional numerical analysis. The dispersion criterion and stability condition for the acoustic wave-equation are given by (Wu et al., 1996; Lines et al., 1999) the grid-spacing constraint h ≤ λ_min / p (Eq. (6)) and the time-step constraint Δt ≤ (h / v_max) √(a_1 / a_2) (Eq. (7)), where:
1. a_1 is the sum of the absolute values of the weights of the finite-difference approximation of ∂²u/∂t²;
2. a_2 is the sum of the absolute values of the weights of the finite-difference approximation of ∇²u;
3. v_max is the maximum velocity;
4. f_max is the maximum frequency of the source term, which defines the minimum wavelength for a given minimum velocity, λ_min = v_min / f_max;
5. p is the number of grid points per wavelength.
The number of grid points per wavelength controls the amount of dispersion (different wavelengths propagating at different velocities) generated by the finite-difference scheme: the lower the number, the higher the dispersion for a fixed discretization order.
These two conditions define the computational setup for a given source and physical model size. Knowing that a_2 increases with the spatial discretization order, Eq. (7) shows that higher discretization orders require a smaller time step, increasing the total number of time steps for a fixed final time and grid size. However, higher order discretizations also allow the use of fewer grid points per wavelength (smaller p). A smaller number of grid points per wavelength leads to a smaller overall computational domain, as a fixed physical distance is represented by a coarser mesh, and since the grid spacing has been increased, the critical time step (maximum stable value) is also increased. Overall, high order discretizations have better computational parameters for a predetermined physical problem. From these two considerations, we can derive an absolute cost-to-solution estimate for a given model as a function of the discretization order, for a fixed maximum frequency and physical model size. The following results are not experimental runtimes but estimates of the minimum achievable runtime assuming a perfect performance implementation.
Fig. 11. Roofline model for the 3D elastic anisotropic kernel from Robin and Weiss (2013) on a 480-core NVIDIA GTX480 GPU (with hardware specifications from Konstantinidis and Cotronis (2015)).
We use the following setup: • We fix the physical model size at 500 grid points in each of the three directions for a second order discretization (minimum grid size).
• The number of grid points per wavelength is p=6 for a second order spatial discretization and p=2 for a 24th order discretization and varies linearly for intermediate orders.
• The number of time steps is 1000 for the second order spatial discretization and computed according to the grid size/time step for other spatial orders (Table 1).
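The setup above fixes how the grid coarsens with order. A minimal sketch of that scaling, assuming p varies linearly from 6 at order 2 down to 2 at order 24 and a fixed physical model size (function names are illustrative, not from the paper):

```python
def points_per_wavelength(order):
    """Linear interpolation of p: p=6 at order 2, p=2 at order 24."""
    return 6.0 + (2.0 - 6.0) * (order - 2) / (24 - 2)

def grid_points_per_dimension(order, n2=500):
    """Fixed physical size: the grid coarsens in proportion to p,
    starting from n2 points per dimension at second order."""
    return int(round(n2 * points_per_wavelength(order) / points_per_wavelength(2)))

# A 24th-order discretization needs only about a third of the points per
# dimension, i.e. roughly 1/27 of the grid points of the second-order grid.
for k in (2, 4, 8, 16, 24):
    print(k, points_per_wavelength(k), grid_points_per_dimension(k))
```

The number of time steps would then be rescaled by the ratio of critical time steps (larger spacing increases it; larger a₂ decreases it), which is how the per-order figures in Table 1 are obtained.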
The hypothetical numerical setup (with a₁ = 4, i.e. a second order time discretization) is summarized in Table 2. We combine the estimate of a full experimental run with the estimated optimal performance and obtain an estimate of the optimal time-to-solution for a fixed physical problem. The estimated runtime is the ratio of the total number of GFLOPs (the kernel cost multiplied by the number of grid points and time steps) to the maximum achievable performance for this OI. Table 3 shows the estimated runtime assuming peak performance on two systems: a dual-socket Intel Xeon E5-2697 v2 and an Intel Xeon Phi 7120A coprocessor.
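This runtime estimate is a direct application of the roofline model: the achievable performance at a given operational intensity (OI) is capped by either the compute peak or the memory bandwidth. A sketch with illustrative hardware numbers (not the measured specifications of the two systems above):

```python
def attainable_gflops(oi, peak_gflops, bandwidth_gbs):
    """Roofline model: performance is the lesser of the compute peak and
    the memory-bandwidth ceiling at this operational intensity (FLOPs/byte)."""
    return min(peak_gflops, oi * bandwidth_gbs)

def estimated_runtime(flops_per_point, n_points, n_steps, oi,
                      peak_gflops, bandwidth_gbs):
    """Minimum runtime assuming the kernel runs at its roofline bound."""
    total_gflops = flops_per_point * n_points * n_steps / 1e9
    return total_gflops / attainable_gflops(oi, peak_gflops, bandwidth_gbs)

# Illustrative numbers only: a 500^3 grid, 1000 time steps, a 300-FLOP
# stencil at OI = 2, on a machine with 1 TFLOP/s peak and 100 GB/s bandwidth.
t = estimated_runtime(flops_per_point=300, n_points=500**3, n_steps=1000,
                      oi=2.0, peak_gflops=1000.0, bandwidth_gbs=100.0)
```

At OI = 2 this hypothetical kernel is memory-bound (2 × 100 = 200 GFLOP/s < 1000 GFLOP/s), so the bandwidth ceiling, not the compute peak, sets the estimated runtime.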
We see that by taking advantage of the roofline results in combination with the stability conditions, we obtain an estimate of the optimal cost-to-solution of an algorithm. It can be seen that higher order stencils lead to better hardware usage by lowering the wall-time-to-solution. These results, however, rely on mathematical results that assume a homogeneous velocity. In the case of a heterogeneous model, high order discretizations may result in inaccurate, even though stable and non-dispersive, solutions to the wave-equation. The choice of the discretization order should then be decided with more than just performance in mind.

Conclusions
Implementing and optimising a solver is generally a long and expensive process. Therefore, it is imperative to have a reliable estimate of the achievable peak performance, in FLOPS, of an algorithm at both the design and the optimised implementation stages of development.
The roofline model provides a readily understandable graphical tool, even for a non-specialist, to quickly assess and evaluate the computational effectiveness of a particular implementation of an algorithm. We have shown how the roofline model can be applied to finite-difference discretizations of the wave-equation commonly used in the geophysics community. Although the model is quite simple, it provides a reliable estimate of the peak performance achievable by a given finite-difference discretization regardless of the implementation. Not only does this aid the algorithm designer to decide between different discretization options but also gives solver developers an absolute measure of the optimality of a given implementation. The roofline model has also proved extremely useful in guiding further optimization strategies, since it highlights the limitations of a particular version of the code, and gives an indication of whether memory bandwidth optimisations, such as loop blocking techniques, or FLOPs optimisations, such as SIMD vectorization, are likely to improve results.
However, one should always be mindful of the fact that it does not provide a complete measure of performance and should be complemented with other metrics, such as time to solution or strong scaling metrics, to establish a full understanding of the achieved performance of a particular algorithmic choice and implementation.

Acknowledgements
This work was financially supported in part by the Natural Sciences and Engineering Research Council of Canada Collaborative Research and Development Grant DNOISE II (CDRP J 375142-08) and the Imperial College London Intel Parallel Computing Centre. This research was carried out as part of the SINBAD II project with the support of the member organizations of the SINBAD Consortium.

Appendix A. Wave-equations
In the following equations, u is the pressure field in the acoustic anisotropic case, while p, r are the split wavefields for the TTI case. We denote by u(·, 0) and, respectively, p(·, 0), r(·, 0) the value of the wavefields for all grid points at time t=0. The physical parameters are m, the square slowness; ϵ, δ, the Thomsen parameters; and θ, ϕ, the tilt and azimuth angles. The main problem with the TTI case is the presence of the trigonometric functions cos(θ, ϕ) and sin(θ, ϕ), known to be extremely expensive to compute (typically about an order of magnitude more expensive than an add or multiply). Here we will assume these functions are precomputed and come from a look-up table, thus only involving memory traffic. In the acoustic anisotropic case the governing equations are:

Table 3. Cost-to-solution estimation for several spatial discretizations on a fixed physical problem.
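The look-up table approach mentioned above amounts to evaluating the trigonometric factors once, outside the time loop, so that each time step only reads memory. A minimal sketch (the flat per-grid-point layout and function name are hypothetical):

```python
import math

def precompute_trig(theta, phi):
    """Precompute the trigonometric factors of the TTI operator once,
    before time stepping, trading expensive per-step cos/sin evaluations
    for memory traffic.
    theta, phi: tilt and azimuth angles, one value per grid point."""
    cos_t = [math.cos(t) for t in theta]
    sin_t = [math.sin(t) for t in theta]
    cos_p = [math.cos(f) for f in phi]
    sin_p = [math.sin(f) for f in phi]
    return cos_t, sin_t, cos_p, sin_p
```

Note that this shifts the kernel's balance towards the memory-bound side of the roofline: four extra arrays are streamed per time step, lowering the operational intensity in exchange for removing the transcendental-function cost.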