Offsite Autotuning Approach

Autotuning (AT) is a promising concept to minimize the often tedious manual effort of optimizing scientific applications for a specific target platform. Ideally, an AT approach can reliably identify the most efficient implementation variant(s) for a new platform or new characteristics of the input by applying suitable program transformations and analytic models. In this work, we introduce Offsite, an offline AT approach that automates this selection process at installation time by rating implementation variants based on an analytic performance model without requiring time-consuming runtime tests. From abstract multilevel description languages, Offsite automatically derives optimized, platform-specific and problem-specific code of possible variants and applies the performance model to these variants. We apply Offsite to parallel numerical methods for ordinary differential equations (ODEs). In particular, we investigate tuning a specific class of explicit ODE solvers, PIRK methods, for four different initial value problems (IVPs) on three different shared-memory systems. Our experiments demonstrate that Offsite can reliably identify the set of most efficient implementation variants for different given test configurations (ODE solver, IVP, platform) and effectively handle important AT scenarios.


Introduction
The performance of scientific applications strongly depends on the characteristics of the targeted computing platform, e.g., the processor design, the core topology, the cache architecture, the memory latency or the memory bandwidth. Facing the growing diversity and complexity of today's computing landscape, writing and maintaining highly efficient application code is becoming more and more cumbersome for software developers. An implementation variant that is highly optimized for one target platform might perform poorly on another platform, yet could in turn outperform all other variants on the next. Hence, to achieve high efficiency and obtain optimal performance when migrating an existing scientific application, developers need to tune and adapt the application code for each specific platform anew.

Related Work
A promising concept to avoid this time-consuming manual effort is autotuning (AT), and many different approaches have been proposed to automatically tune software [2]. AT is based on two core concepts: (i) the generation of optimized implementation variants based on program transformation and optimization techniques, and (ii) the selection of the most efficient variant(s) on the target platform from the set of generated variants. In general, there are (i) offline and (ii) online AT techniques. Offline AT tries to select the supposedly most efficient variant at installation time without actual knowledge of the input data. Such approaches are applicable to use cases whose execution behavior does not depend on the input data. This is the case, e.g., for dense linear algebra problems, which can, i.a., be tuned offline with ATLAS [23], PATUS [6] and PhiPAC [4]. In other fields, such as sparse linear algebra or particle codes, characteristics of the input data heavily influence the execution behavior. By choosing the best variant at runtime, when all input is known, online AT approaches such as Active Harmony [22], ATF [17] and Periscope [9] incorporate these influences.
Selecting a suitable implementation variant from a potentially large set of available variants in a time-efficient manner is a major challenge in AT. Various techniques and search strategies have been proposed in previous works to meet this challenge [2]. A straightforward approach is the time-consuming comparison of variants by runtime tests, possibly steered by a single search strategy, such as an exhaustive search, more sophisticated mathematical optimization methods like differential evolution [7] or genetic algorithms [25], or a combination of multiple search strategies [1]. [16] proposes a hierarchical approach that allows the use of individual search algorithms for dependent subspaces of the search space.
As an alternative to runtime tests, analytic performance models can be applied to either select the most efficient variant or to reduce the number of tests required by filtering out inefficient variants beforehand. In general, two categories of performance models are distinguished: (i) black box models applying statistical methods and machine learning techniques to observed performance data like hardware metrics or measured runtimes in order to learn to predict performance behavior [15,20], and (ii) white box models such as the Roofline model [8,24] or the ECM performance model [12,21] that describe the interaction of hardware and code using simplified machine models. For loop kernels, the Roofline and the ECM model can be constructed with the Kerncraft tool [11]. Kerncraft is based on static code analysis and determines ECM contributions from application (assembly; data transfers) and machine information (in-core port model; instruction throughput).

Main Contributions
In this work, we propose Offsite, an offline AT approach that automatically identifies the most efficient implementation variant(s) at installation time based on performance predictions. These predictions stem from an analytic performance prediction methodology for explicit ODE methods proposed by [19] that uses a combined white box and black box model approach based on the ECM model. The main contributions of this paper are: (i) We develop a novel offline AT approach for shared-memory systems based on performance modelling. This approach automates the task of generating the pool of possible implementation variants using abstract description languages. For all these variants, our approach can automatically predict their performance and identify the best variant(s). Further, we integrate a database interface for the collected performance data, which enables the reuse of data and allows feedback from possible online AT or actual program runs to be included.
(ii) We show how to apply Offsite to an algorithm from numerical analysis with complex runtime behavior: the parallel solution of IVPs of ODEs.
(iii) We validate the accuracy and efficiency of Offsite for different test configurations and discuss its applicability to four different AT scenarios.

Outline
Section 2 details the selected example use-case (PIRK methods) and the corresponding testbed. Based on this use-case, Offsite is described in Sect. 3. In Sect. 4, we experimentally evaluate Offsite in four different AT scenarios and on three different target platforms. Section 5 discusses possible future extensions of Offsite and Sect. 6 concludes the paper.

Use-Case: PIRK Methods
As example use-case, we study parallel iterated Runge-Kutta (PIRK) methods [13], which are part of the general class of explicit ODE methods and solve an ODE system y′(t) = r(t, y(t)), y(t₀) = y₀, y ∈ ℝⁿ, by performing a series of time steps until the end of the integration interval is reached. In each time step, a new numerical approximation y_{κ+1} for the unknown solution y is determined by an explicit predictor-corrector process in a fixed number of substeps. PIRK methods are an excellent candidate class for AT. Their complex four-dimensional loop structure (Listing 1) can be modified by loop transformations, resulting in a large pool of possible implementation variants whose performance behavior potentially varies strongly depending on: (i) the composition of computations and memory accesses, (ii) the number of stages of the base ODE method, (iii) the characteristics of the ODE system solved, (iv) the target hardware, (v) the compiler and the compiler flags, and (vi) the number of threads started.
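Listing 1 itself is not reproduced here; the following C sketch of one PIRK time step illustrates the four-dimensional loop structure under our own naming assumptions (pirk_step, rhs, the array layout). It mirrors the predictor-corrector process described above, not any generated code.

/* Hedged sketch of one PIRK time step (cf. Listing 1). s: stages,
 * m: corrector steps, n: ODE system size; a, b, c: Butcher table;
 * rhs(t, Y_l, j) evaluates component j of the right-hand side r.
 * All names and signatures are illustrative assumptions. */
void pirk_step(int s, int m, int n, double t, double h, double *y,
               double **Y, double **Ynew, double **F,
               const double *a, const double *b, const double *c,
               double (*rhs)(double, const double *, int))
{
    for (int l = 0; l < s; l++)          /* predictor: Y_l^(0) = y_kappa */
        for (int j = 0; j < n; j++)
            Y[l][j] = y[j];

    for (int k = 1; k <= m; k++) {       /* corrector steps              */
        for (int l = 0; l < s; l++)      /* stages                       */
            for (int j = 0; j < n; j++)  /* ODE components               */
                F[l][j] = rhs(t + c[l] * h, Y[l], j);
        for (int l = 0; l < s; l++)      /* linear combinations          */
            for (int j = 0; j < n; j++) {
                double acc = y[j];
                for (int i = 0; i < s; i++)
                    acc += h * a[l * s + i] * F[i][j];
                Ynew[l][j] = acc;
            }
        double **tmp = Y; Y = Ynew; Ynew = tmp;   /* advance iterate     */
    }

    for (int j = 0; j < n; j++) {        /* approximation y_{kappa+1}    */
        double acc = 0.0;
        for (int i = 0; i < s; i++)
            acc += b[i] * F[i][j];
        y[j] += h * acc;
    }
}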

Test Set of Initial Value Problems
In our experiments, we consider a broad set of IVPs (Table 1) that exhibit different characteristics: (i) Cusp combines Zeeman's cusp catastrophe model for a threshold-nerve-impulse mechanism with the van der Pol oscillator [10], (ii) IC describes a traversing signal through a chain of N concatenated inverters [3], (iii) Medakzo describes the penetration of radio-labeled antibodies into a tissue infected by a tumor [14], and (iv) Wave1D describes the propagation of disturbances at a fixed speed in one direction [5].

Test Set of Target Platforms
We conducted our experiments on three different shared-memory systems (Table 2). For all experiments, the CPU clock was fixed, hyper-threading was disabled and thread binding was set with KMP_AFFINITY=granularity=fine,compact. All codes were compiled with the Intel C compiler and flags -O3, -xAVX and -fno-alias set.

Offsite Autotuning Approach
In this work, we introduce the Offsite offline AT approach on the example of explicit ODE methods. Before starting a new Offsite run, the desired tuning scenario is defined using description languages in the YAML standard; it consists of: (i) the pool of possible implementations and program transformations, (ii) the ODE base method(s), (iii) the IVP(s), and (iv) the target platform. From this input data, Offsite automatically handles the whole tuning workflow (Fig. 1). First, Offsite generates optimized, platform-specific and problem-specific code for all kernels and derives all possible implementation variants. Applying an analytic performance prediction methodology, the performance of each kernel is predicted for either (i) a fixed ODE system size n, if specified by the user or prescribed by the ODE, or (ii) a set of relevant ODE system sizes determined by a working set model. The performance of a variant is derived by combining the predictions of its kernels and adding an estimate of its synchronization costs. Variants are ranked by their performance to identify the most efficient variant(s). All obtained prediction and ranking data are stored in a database. For the best ranked variants, Offsite generates optimized, platform-specific and problem-specific code.

Input Description Languages
A decisive yet cumbersome step in AT is generating optimized code. Often, there is a large pool of possible implementation variants, applicable program transformations (e.g. loop transformations) and tunable parameters (e.g. tile sizes). Furthermore, exploiting characteristics of the input data can enable additional optimizations (e.g. constant propagation). Writing all variants by hand, however, would be tedious and error-prone, and there is demand for automation. In this work, we introduce multilevel description languages to describe implementations, ODE methods, IVPs and target platforms in an abstract way. Offsite interprets these languages and automatically derives optimized code. ODE Methods are described by the ODE method description format, shown in Listing 2 on the example of Radau II A(7), a four-stage method of order seven applying six corrector steps per time step. To save space, only an excerpt of the Butcher table is shown with a reduced number of digits.
IVPs are described in the IVP description format, shown for IC in Listing 3: (i) components describes the n components of the IVP. Each component contains a code YAML block that describes how an IVP function evaluation (l. 4, Listing 1) is substituted during code generation, whereby %in is a placeholder for the input vector Y_i^{(k−1)} used. Adjacent components that execute the same computation can be described by a single block, whereby first denotes the first component and size specifies the total number of adjacent components handled by that particular block.
(ii) constants defines IVP-specific parameters that are replaced with their actual values during code generation, which can enable further code optimizations. In IVP IC, e.g., a multiplication can be saved if the electrical resistance R equals 1.0; a sketch of this effect follows below.
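A hedged illustration of such a constant-driven optimization (the component expression and the function coupling are our inventions, not the actual IC right-hand side from Listing 3):

double coupling(double y_prev, double y_j);  /* some IVP-specific term */

/* Hypothetical component code before constant propagation: every
 * evaluation pays for the division by the IVP constant R. */
double component(double y_prev, double y_j, double Uop, double R)
{
    return (Uop - y_j) / R - coupling(y_prev, y_j);
}

/* After substituting R = 1.0 during code generation, the division (or
 * an equivalent multiplication by 1/R) disappears entirely. */
double component_specialized(double y_prev, double y_j, double Uop)
{
    return (Uop - y_j) - coupling(y_prev, y_j);
}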
Target Platform and Compiler are described using the machine description format introduced by Kerncraft. Its general structure is tripartite: (i) the execution architecture description, (ii) the cache and memory hierarchy description, and (iii) benchmark results of typical streaming kernels. Implementation Variants of numerical algorithms are abstracted by description languages as (i) kernel templates and (ii) implementation skeletons.
Kernel Templates define basic computation kernels and the possible variations of a kernel enabled by program transformations that preserve semantic correctness. Listing 4 shows the kernel template description format on the example of APRX, which covers the computation of the next approximation (l. 6, Listing 1): (i) datastructs defines the required data structures.
(ii) computations describes the computations covered by a kernel template. Each computation corresponds to a single line of code and has a unique identifier (e.g. C1 in Listing 4). Computations can contain IVP evaluations, which are marked by the keyword %RHS and replaced by an IVP component during code generation (e.g. for IC by line 5 of Listing 3). Hence, if a kernel template contains %RHS, a separate, specialized kernel version has to be generated for each IVP component. (iii) variants contains the possible kernels of a kernel template enabled by program transformations. For each kernel, its working sets (working sets) and its program code (code) are specified. The code block defines the order of computations and the program transformations applied using four different keywords. Computations are specified by the keyword %COMP, whose parameter must correspond to one of the identifiers defined in the computations block (e.g. C1 in Listing 4). For-loop statements are defined by the keywords %LOOP START and %LOOP END. The first parameter of %LOOP START specifies the loop variable name, the second parameter defines the number of loop iterations, and an optional third parameter unroll indicates that the loop will be unrolled during code generation. In addition, loop-specific pragmas can be added using the keyword %PRAGMA.
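To illustrate the mapping from a code block to generated code, consider a hypothetical variant description (reproduced as comments below); both the directive sequence and the emitted loop are our assumptions, not the contents of Listing 4.

/* Hypothetical code block of a kernel variant:
 *     %LOOP START i n
 *       %LOOP START j s unroll
 *         %COMP C1
 *       %LOOP END
 *     %LOOP END
 * Assuming s = 2 and a computation C1 of the form
 * 'dy[i] += b[j] * F[j][i]', code generation could emit: */
for (int i = 0; i < n; i++) {
    dy[i] += b[0] * F[0][i];  /* j = 0, unrolled */
    dy[i] += b[1] * F[1][i];  /* j = 1, unrolled */
}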
Implementation Skeletons define processing orders of kernel templates and required communication points. From skeletons, concrete implementation variants are derived by replacing their templates with concrete kernel code. Listing 6 shows the implementation skeleton description format on the example of skeleton A, which is a realization of a PIRK method (Listing 1) that focuses on parallelism across the ODE system, i.e., its n equations are distributed blockwise among the threads. A contains a loop k over the m corrector steps, dividing each corrector step into two templates: RHS computes the IVP function evaluations (l. 5, Listing 1), which are then used to compute the linear combinations (l. 4, Listing 1) in LC. Per corrector step, two synchronizations are needed as RHS, depending on the IVP solved, can potentially require all components of the linear combinations from the last iteration of k. After all corrector steps are computed, the next approximation y_{κ+1} is calculated by templates APRX and UPD (l. 6, Listing 1). Four keywords suffice to specify skeletons: (i) %LOOP START and %LOOP END define for-loops.
(ii) %COM states communication operations of an implementation skeleton. Skeleton A, e.g., requires 2m + 2 barrier synchronizations. (iii) %KERNEL specifies an executed kernel template. Its parameter must correspond to the name of an available kernel template. During code generation %KERNEL will be replaced by actual kernel code (e.g. APRX in Listing 7).
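Putting the pieces together, a variant derived from skeleton A might expand to OpenMP code along the following lines; the sketch is ours: only the barrier placement (2m + 2 synchronizations per time step) and the blockwise distribution follow the description above, while the kernel function names stand in for the substituted kernel codes.

void rhs_kernel (int first, int last);   /* stands in for template RHS  */
void lc_kernel  (int first, int last);   /* stands in for template LC   */
void aprx_kernel(int first, int last);   /* stands in for template APRX */
void upd_kernel (int first, int last);   /* stands in for template UPD  */

/* Hedged sketch of one time step of a variant of skeleton A; assumed to
 * run inside an enclosing '#pragma omp parallel' region, where [first,
 * last] is the block of equations owned by the calling thread. */
void skeleton_a_step(int m, int first, int last)
{
    for (int k = 1; k <= m; k++) {   /* m corrector steps          */
        rhs_kernel(first, last);     /* IVP evaluations (l. 5)     */
        #pragma omp barrier
        lc_kernel(first, last);      /* linear combinations (l. 4) */
        #pragma omp barrier
    }
    aprx_kernel(first, last);        /* next approximation         */
    #pragma omp barrier
    upd_kernel(first, last);         /* update for next time step  */
    #pragma omp barrier              /* in total: 2m + 2 barriers  */
}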

Rating Implementation Variant Performance
Offsite can automatically identify the most efficient implementation variant(s) from a pool of available variants using analytic performance modelling (Fig. 1): (i) In a first step, Offsite automatically generates code for all kernels in a special code format processable by Kerncraft. Kernel code generation (Kernel Code Generation in Fig. 1) includes specializations of the code for the target platform, IVP, ODE method and (if fixed) ODE system size n. As an example, Listing 5 shows the code generated for kernel APRX_ji of kernel template APRX (Listing 4) when specialized for ODE method Radau II A(7) and n = 161; a hedged sketch of such a specialized kernel is shown after step (v). As specified in the template description, the j loop is unrolled completely. Further, the Butcher table coefficients (b) and the known constants (s = 4, n = 161) are substituted.
(ii) In some tuning scenarios, the ODE system size n is not yet known at installation time. Giving predictions for all valid values of n, however, is in general not feasible. By applying a working set model (Sect. 3.4), Offsite automatically determines for each kernel a set of relevant n values (Kernel Working Sets, Fig. 1) for which predictions are then obtained in the next step. (iii) For each of the kernel codes generated in step (i), its kernel prediction is automatically derived by Offsite (Kernel Prediction, Fig. 1), whereby Kerncraft is used to construct the ECM model from the kernel code and a machine description, which Offsite derives from benchmark data. (iv) Combining the kernel predictions of a variant and adding an estimate of its synchronization costs yields node-level runtime predictions, by which Offsite ranks the implementation variants (Impl. Variant Ranking, Fig. 1).
(v) From the ranking of implementation variants, Offsite automatically derives the subset Λ of the best rated variant(s), which contains all variants λ whose performance is within a user-provided maximum deviation from the best rated variant. For each variant λ ∈ Λ, Offsite generates optimized, platform-specific and problem-specific code (Impl. Variant Code Generation, Fig. 1). Listing 7 shows an excerpt of the code generated for a variant of implementation skeleton A which substitutes kernel template APRX with kernel APRX_ji and was specialized for ODE method Radau II A(7), IVP IC and n = 161.
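As referenced in step (i), the following is a rough sketch of what a specialized kernel looks like in Kerncraft's code format; the coefficient values are placeholders, not the actual Radau II A(7) weights.

/* Hedged sketch of kernel APRX_ji specialized for s = 4 and n = 161:
 * the j loop over the stages is fully unrolled and the Butcher
 * coefficients b_j are inlined (placeholder values). */
for (int i = 0; i < 161; i++) {
    dy[i] = 0.220462 * F[0][i]   /* b_1 (placeholder) */
          + 0.388193 * F[1][i]   /* b_2 (placeholder) */
          + 0.328844 * F[2][i]   /* b_3 (placeholder) */
          + 0.062500 * F[3][i];  /* b_4 (placeholder) */
}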
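Step (v) amounts to a simple threshold filter over the predicted runtimes; the following helper is a sketch under assumed data structures (the Variant type and the array layout are ours).

#include <stddef.h>

/* Hedged sketch: collect all variants whose predicted time lies within
 * a user-provided maximum deviation (e.g. 0.05 for 5%) of the best
 * prediction; this yields the subset Lambda described in step (v). */
typedef struct { const char *name; double predicted_time; } Variant;

size_t select_lambda(const Variant *v, size_t count, double deviation,
                     const Variant **out /* capacity >= count */)
{
    double best = v[0].predicted_time;
    for (size_t i = 1; i < count; i++)
        if (v[i].predicted_time < best)
            best = v[i].predicted_time;

    size_t k = 0;
    for (size_t i = 0; i < count; i++)
        if (v[i].predicted_time <= best * (1.0 + deviation))
            out[k++] = &v[i];
    return k;                    /* |Lambda| */
}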

Performance Prediction Methodology
The performance prediction methodology applied by Offsite extends [19] and comprises: (i) a node-level runtime prediction of an implementation variant and (ii) an estimate of its intra-node communication costs.
Node-Level Runtime Prediction. The basis of the node-level prediction is the analytic ECM (Execution-Cache-Memory) performance model; for an in-depth explanation, we refer to [12,21]. The ECM model estimates the number of CPU cycles per cache line (CL) required to execute a particular loop kernel on a multi- or many-core chip, combining contributions from the in-core execution time T_core and the data transfer time T_data:

  T_ECM = max(T_OL, T_nOL + T_data),   (1)

where T_OL denotes the in-core contributions that can overlap with data transfers and T_nOL those that cannot. T_core = max(T_OL, T_nOL) is defined as the time required to retire the instructions of a single loop iteration under the assumptions that (i) there are no loop-carried dependencies, (ii) all data are in the L1 data cache, (iii) all instructions are scheduled independently to the ports of the execution units, and (iv) the time to retire arithmetic instructions and load/store operations can overlap due to speculative execution, depending on the target platform. Hence, the unit that takes the longest to retire its instructions determines T_core. T_data^level factors in the time required to transfer all data from its current location in the memory hierarchy to the L1 data cache and back. The single contributions T_data^{i,j} of transfers between levels i and j of the memory hierarchy are determined by the amount of transferred CLs; depending on the platform used, an optional latency penalty T_p^{i,j} might be added. For data coming from the L3 cache on a platform where a latency penalty between the L2 and L3 cache has to be factored in, this reads

  T_data^{L3} = T_data^{L1,L2} + T_data^{L2,L3} + T_p^{L2,L3}.   (2)

Combining all contributions, a single-core prediction can be derived, whereby the overlapping capabilities of the target platform determine whether a contribution is considered overlapping or non-overlapping. Offsite obtains the ECM prediction

  α_λ(τ)   [cycles per CL]   (3)

for each kernel λ and core count τ using Kerncraft. The kernel runtime prediction

  φ_λ(τ) = α_λ(τ) · β_λ / (δ · f)   (4)

yields the runtime in seconds of kernel λ, where α_λ(τ) is (3) computed for a specific number of running cores τ, β_λ is the number of loop iterations executed, δ is the number of data elements fitting into one CL, and f is the CPU frequency. Summing up the individual kernel runtime predictions φ_λ of its basic kernels λ and adding an estimate of its communication costs t_com (in seconds) yields the node-level runtime prediction θ of an implementation variant:

  θ(τ) = Σ_λ φ_λ(τ) + t_com(τ).   (5)

Remark: [19] used an older Kerncraft version that could not yet return ECM predictions for multiple core counts τ with a single run, but additionally returned the kernel's saturation point σ_λ. Hence, [19] used an extra factor min(τ, σ_λ)^{-1} in (4).
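To illustrate the scales involved, the following plugs invented numbers into (4) and (5); every value is an assumption for demonstration only.

#include <stdio.h>

/* Worked example for the kernel runtime prediction (4) and the
 * node-level prediction (5); every number below is invented. */
int main(void)
{
    double alpha = 6.0;      /* ECM prediction (3): cycles per CL        */
    double beta  = 36.0e6;   /* loop iterations, e.g. one per component  */
    double delta = 8.0;      /* doubles per 64-byte cache line           */
    double f     = 2.2e9;    /* CPU frequency in Hz                      */

    double phi = alpha * beta / (delta * f);    /* kernel runtime (4)    */

    double t_barrier = 2.0e-6;  /* benchmarked cost per synchronization  */
    int syncs = 2 * 6 + 2;      /* skeleton A with m = 6 corrector steps */
    /* Hypothetical variant with four kernels of equal cost (5):         */
    double theta = 4.0 * phi + syncs * t_barrier;

    printf("phi = %.3e s, theta = %.3e s\n", phi, theta);
    return 0;
}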

Estimate of Intra-node Communication Costs. The costs of the occurring intra-node communication (t_com) depend on the number of communication operations executed. The implementation variants considered in this work solely use OpenMP barrier operations to synchronize threads. Offsite automatically benchmarks the costs of the OpenMP barrier operation depending on the number of threads and stores the obtained data in its database for future runs.
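A minimal OpenMP sketch of such a barrier benchmark follows; the repetition count and the timing scheme are our choices, not Offsite's actual benchmark harness. Compile with OpenMP enabled (e.g. -fopenmp or -qopenmp).

#include <omp.h>
#include <stdio.h>

/* Hedged sketch of a barrier cost benchmark: measures the average cost
 * of an OpenMP barrier for each thread count. */
enum { REPS = 100000 };

int main(void)
{
    const int max_threads = omp_get_max_threads();
    for (int t = 1; t <= max_threads; t++) {
        double start = 0.0, end = 0.0;
        omp_set_num_threads(t);
        #pragma omp parallel
        {
            #pragma omp single
            start = omp_get_wtime();   /* implicit barrier after single */
            for (int r = 0; r < REPS; r++) {
                #pragma omp barrier
            }
            #pragma omp single
            end = omp_get_wtime();
        }
        printf("%2d threads: %.3e s per barrier\n", t, (end - start) / REPS);
    }
    return 0;
}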
Remark: Since this work serves as an introduction to Offsite, we focus on OpenMP-only implementations. The general workflow, however, is also applicable to other communication schemes (e.g. MPI-only or hybrid OpenMP-MPI), granted suitable benchmarks exist for all communication operations, as t_com only influences (5). The applicability of the performance prediction methodology to hybrid OpenMP-MPI implementations on cluster systems was shown in [18].

Reusability of Performance Predictions
Its database enables Offsite to reuse prediction and ranking data in future Offsite runs. Prediction data (e.g. kernel runtime predictions) collected for a specific implementation variant can be reused to estimate other variants (if they share the kernel) or other IVPs (if the kernel contains no IVP evaluations). In the context of AT, this is a decisive advantage over runtime testing, which would require running each newly added variant as well or, when switching the IVP, running all variants anew.

Working Set Model
If the ODE system size n is not fixed, either by the user or by restrictions of the IVP, selecting the most efficient implementation variant(s) at installation time leads to an exhaustive search over the possibly vast space of values for n. To minimize the number of predictions required per kernel, the set of estimated n values is reduced by a model-based restriction, the working set of the kernel, which corresponds to the amount of data referenced by the kernel. We use the working sets to identify, for each kernel, the maximum n that still fits into each individual cache level. Using these maximums, ranges of consecutive n values for which the ECM prediction (3) stays constant can be derived. The middle values of these ranges form the set of sample n values of the kernel (Kernel Working Sets, Fig. 1).
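The following sketch illustrates the idea for a single kernel; the cache sizes and the per-component working set expression are invented placeholders, as the real expression depends on the kernel's data structures.

#include <stdio.h>

/* Hedged sketch of the working set model for a single kernel, assuming
 * a working set of ws_per_n bytes per ODE component (here: (s + 1)
 * doubles with s = 4; a placeholder, not a real kernel's working set). */
int main(void)
{
    const double cache_bytes[] = { 32e3, 256e3, 35e6 };   /* L1, L2, L3 */
    const double ws_per_n = 5 * 8.0;

    double prev_max = 0.0;
    for (int level = 0; level < 3; level++) {
        /* Largest n whose working set still fits into this cache level. */
        double n_max = cache_bytes[level] / ws_per_n;
        /* Middle of the range with (roughly) constant ECM prediction.   */
        double n_sample = (prev_max + n_max) / 2.0;
        printf("L%d: n_max = %9.0f, sampled n = %9.0f\n",
               level + 1, n_max, n_sample);
        prev_max = n_max;
    }
    /* One additional sample beyond L3 for the memory-bound regime. */
    printf("MEM: sampled n = %9.0f\n", prev_max * 4.0);
    return 0;
}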

Experimental Evaluation
We validate Offsite using the experimental test bed introduced in Sect. 2. In particular, we study the efficiency of Offsite in four AT scenarios when tuning four different IVPs on three different target platforms, and compare the ideal case with four AT strategies: (i) BestVariant covers the case that the most efficient implementation variant is already known (e.g. from a previous execution) and no AT is required. (ii) RunAll runs all variants in order to identify the most efficient variant. (iii) OffsitePreselect5 (OffPre5) runs an Offsite-determined subset of all variants, which contains all variants within a 5% deviation of the best ranked variant, to identify the most efficient variant of that subset. (iv) OffsitePreselect10 (OffPre10) allows a bigger deviation (10%) than OffPre5 and, thus, potentially runs more variants. While potentially leading to more tuning overhead, OffPre10 might be able to identify the best variant for applications for which predictions are inaccurate and OffPre5 fails. (v) RandomSelect randomly runs 20 of the total 56 variants.

Table 3 summarizes the implementation skeletons and kernel templates used in this work. In total, we consider eight skeletons from which 56 implementation variants can be derived. Each table row shows the templates required by a particular skeleton. E.g., skeleton A (Listing 6) uses templates LC, RHS, APRX and UPD. Twelve different variants can be derived from A as there are six different kernels of LC (enabled by loop interchanges, unrolls, pragmas) and two of APRX. In total, 17 different kernels can be derived from the eight kernel templates available. To predict the performance of all 56 variants, only these 17 kernels have to be estimated. Further, when obtaining predictions of all 56 variants for a different IVP, only those four templates that contain IVP evaluations (and thus their six corresponding kernels) need to be re-evaluated, while the prediction data of the remaining kernels can be retrieved from the database.

AT Scenario - All Input Known
As a first test scenario, we consider the case that all input is known at installation time, in particular the ODE system size n. In such cases, Offsite is applied without the working set model. Performance predictions, however, are only obtained for that particular n, and a new Offsite run would be required if n changes. Table 4 compares the accuracy and efficiency of the single AT strategies when tuning four different IVPs on three different target platforms for n = 36,000,000 and ODE method Radau II A(7). For the Offsite strategies OffPre5 and OffPre10, t_step yields the time in seconds it takes to execute a time step using the measured best implementation variant from the subset Λ of variants λ tested by that strategy. Performance loss denotes the percent runtime deviation of that particular measured best variant from the variant selected by BestVariant (t_best). Ideally, an AT strategy correctly identifies the measured best variant and, thus, suffers no performance loss. For an AT strategy, |Λ| yields the cardinality of the subset Λ, and the percent tuning overhead of applying that strategy is defined as (t_tune − |Λ| · t_best) / (|Λ| · t_best) · 100, where t_tune = Σ_{λ∈Λ} t_λ is the time required to test all variants in Λ and |Λ| · t_best is the time needed to execute the measured best variant instead.
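Both evaluation metrics used in the experiments reduce to one-liners: the tuning overhead defined above and the percent performance gain Π used in the later scenarios. The helper names below are ours.

/* Percent tuning overhead of testing the subset Lambda:
 * (t_tune - |Lambda| * t_best) / (|Lambda| * t_best) * 100. */
double tuning_overhead(double t_tune, int lambda_size, double t_best)
{
    return (t_tune - lambda_size * t_best) / (lambda_size * t_best) * 100.0;
}

/* Percent performance gain of an AT strategy over RunAll:
 * (t_RA - t_AT) / t_RA * 100 (used in the core-count and ODE-method
 * scenarios below). */
double performance_gain(double t_ra, double t_at)
{
    return (t_ra - t_at) / t_ra * 100.0;
}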
Haswell. AT strategy RunAll causes a significant tuning overhead for all IVPs, while OffPre5 and OffPre10 only lead to marginal overhead as the subset of tested variants is considerably smaller; they still select the measured best variant for all IVPs but Wave1D.
IvyBridge. Again, RunAll leads to decisive overhead compared to either of the two Offsite strategies, and the measured best variant is correctly identified for almost all IVPs; for IVP IC, however, only OffPre10 finds the best variant. As IC is compute-bound (Table 1), the IVP evaluation dominates the computation time while the order of the remaining computations has only minor impact. Hence, already minor jitter can lead to a different variant being selected.
Skylake. Similar observations as on the two previous systems can be made on Skylake. The overhead of both Offsite strategies is marginal compared to RunAll. For all IVPs, the measured best variant is successfully identified.

AT Scenario - Unknown ODE System Size
The next scenario considered is that of an ODE system size n still unknown at installation time. In this case, the working set model is applied to determine a set of sample n values for which Offsite computes predictions and from which predictions for the whole range of possible n are derived. As this requires computing multiple performance predictions, a single Offsite run takes longer than in the previous scenario. This particular Offsite run, however, already covers all possible n, and no further run is required when switching n at a later point. Figures 2 and 3 show, for the implementation variants selected as best variant by the considered AT strategies, the time per time step of Cusp and IC on the three platforms (each using its maximum number of cores). On the x-axis, n is plotted up to n = 60,000,000. The y-axis shows the time per component of n in seconds needed by a specific variant to solve a time step for Radau II A(7).
Tuning Cusp (Fig. 2). On Haswell (Fig. 2a), OffPre5 and OffPre10 select the same subset of three variants independent of n. Both strategies always correctly identify the measured best variant. The same observations can be made on IvyBridge (Fig. 2b) and on Skylake (Fig. 2c), where also the same subset of three variants is selected and the measured best variant is always found.
Tuning IC (Fig. 3). On Haswell (Fig. 3a), the same subset of one (for OffPre5) respectively two variants (for OffPre10) is picked for n up to 8,500,000. For bigger n, both strategies select the same three variants. Except for n = 5,760,000, OffPre10 always correctly finds the measured best variant. The single variant selected by OffPre5 is slightly off for n = 1,440,000 and n = 2,560,000; in both cases, however, the absolute time difference is only marginal. IC is compute-bound (Table 1) and, thus, the IVP evaluation dominates the computation time. Hence, in particular for small n, the order of the remaining computations has only minor impact on the time and already minor jitter can lead to a different variant being selected. On IvyBridge (Fig. 3b), strategy OffPre5 selects the same variant for all n, while OffPre10 adds two additional variants for n ≥ 2,560,000. While OffPre10 always finds the measured best variant, OffPre5 is slightly off for n = 4,000,000 and n = 5,760,000, but the absolute time difference is only marginal.
On Skylake (Fig. 3c), the same variant is selected for n up to 1,440,000 by both Offsite strategies, while for larger n two additional variants are considered. Except for n = 1,440,000, both strategies manage to always correctly identify the measured best variant. As on the two previous systems, the absolute time difference is again only marginal.

AT Scenario - Variable Number of Cores
Offsite is capable of predicting the performance of an implementation variant for different core counts τ with a single AT run. In this AT scenario, we consider tuning an IVP for a fixed ODE system size n and multiple core counts. Figure 4 shows the effectiveness of different AT strategies compared to strategy RunAll when tuning IVP IC on three target platforms for n = 9,000,000 and Radau II A(7). On the x-axis, we plot the number of cores τ. The y-axis plots for different AT strategies the percent performance gain Π achieved by applying that particular strategy instead of RunAll, which tests all 56 variants (t_RA). The performance gain is defined as Π = (t_RA − t_AT) / t_RA · 100, where t_AT includes the time to run the variants Λ tested by that strategy and the time to run the measured best variant from Λ an additional 56 − |Λ| times. Ideally, the bar of an AT strategy is close to the horizontal line of BestVariant.
Haswell (Fig. 4a). Depending on the number of cores τ, OffPre5 selects different subsets Λ. For τ < 8, the same single variant is selected, while for τ = 8 two additional variants are selected. Using OffPre10, these two variants are also included for τ = 4. For all τ, a significant performance gain close to BestVariant can be observed with either Offsite strategy. The total performance gain grows with τ as the performance gap between the best and the worst variant also increases. While outperforming RunAll, RandomSelect is far off from the maximum gain.

Fig. 4. Percent performance gain achieved by different AT strategies when tuning IVP IC for different core counts, Radau II A(7) and n = 9,000,000.

IvyBridge (Fig. 4b). OffPre5 selects the same variant for all core counts τ. Using OffPre10, two further variants are selected for τ = 20. Again, a considerable performance gain close to BestVariant can be observed for all τ when using either Offsite strategy, while RandomSelect is far off from that ideal gain.

Skylake (Fig. 4c). Both Offsite strategies select the same three variants for τ = 20, while the same single variant is selected for smaller core counts. As on the two previous target platforms, both Offsite strategies are close to BestVariant, while RandomSelect is again further off.

AT Scenario - Variable ODE Method
In the last AT scenario, we consider tuning an IVP for a fixed ODE system size n and four different ODE methods. Depending on the characteristics of the ODE method, different optimizations might be applicable (for a specific number of stages s, e.g., loops over s can be replaced by a vector operation), which potentially results in varying efficiency of the same implementation variant for different ODE methods. Figure 5 shows the effectiveness of different AT strategies when tuning IVP IC on three target platforms for n = 9,000,000 and four different ODE methods: Lobatto III C(6), Lobatto III C(8), Radau I A(5) and Radau II A(7). On the x-axis, the ODE method used is shown. The y-axis plots for each AT strategy the percent performance gain Π achieved by applying that particular strategy instead of RunAll, which tests all 56 variants. The bar of an AT strategy is ideally close to the horizontal line of BestVariant.
Haswell (Fig. 5a). OffPre5 selects the same subset of two variants for Lobatto III C(6) and Radau I A(5). For Lobatto III C(8) and Radau II A(7), an additional variant is selected. Using OffPre10, these three variants are selected for all ODE methods. For all ODE methods, a significant performance gain close to BestVariant can be observed when using one of the two Offsite strategies. Further, both strategies decisively outperform RandomSelect.
IvyBridge (Fig. 5b). For all ODE methods, the same single variant is chosen when using OffPre5, while OffPre10 selects two variants for Lobatto III C(6) and the same three variants for Lobatto III C(8) and Radau II A(7). As on Haswell, the performance gain of both Offsite strategies is close to the maximum gain for all ODE methods, while the achieved gain of RandomSelect is far off from BestVariant.

Skylake (Fig. 5c). Both Offsite strategies select the same subset of three variants for all ODE methods except Radau I A(5), for which OffPre5 only selects two variants. Again, the performance gain achieved by both strategies is close to BestVariant while RandomSelect is further off.

Future Extensions
Our future work includes expanding Offsite to cluster systems as well as AMD and ARM platforms. Further, we plan to extend Offsite to a combined offline-online AT approach that incorporates feedback data from previous online AT (or program runs) and to study whether these data can be used to predict the performance in scenarios with unknown input data (e.g. a new IVP).

Expansion to Cluster Systems
We expect that extending the approach to cluster systems will raise additional challenges (design-wise and implementation-wise) which could be neglected in the current shared-memory setting: (i) To integrate the costs of inter-node communication operations, additional benchmarks are needed and database tables might have to be adjusted. Furthermore, this requires extending the YAML specifications and implementation variant code generator to support inter-node communication operations.
(ii) Similar to [18], the performance prediction methodology needs to be adapted to incorporate inter-node communication costs.
(iii) For more complex ODE systems, e.g. ones with many different types of components and differing computation costs, the workflow has to be adjusted slightly. In particular, the load distribution needs to be taken into account.

Extension to a Combined Offline-Online AT Approach
The database plays a vital role in the extension to a combined offline-online AT approach as it is supposed to serve as an interface between both AT phases. Currently, the database stores prediction and ranking data for reuse in future offline runs. For a combined approach, additions and modifications to the database will be necessary to incorporate feedback data from program runs/online AT to verify or improve predictions.

Applicability to Other Programs
The kernel templates used to describe PIRK methods correspond to basic linear algebra functions (e.g. LC is a matrix multiplication). This makes Offsite applicable to more complex applications that can be broken down into linear algebra functions (e.g. PCG solver [12]). In most cases, this is possible without any or only minor extensions to the current YAML specifications. Minor extensions might include supporting additional communication operations or keywords for special operations (e.g. MIN). Major extensions might be needed for applications where the equations themselves or even the number of equations change for different time intervals (e.g. grid resolution). The general approach will not be applicable to highly dynamic and irregular systems like particle simulations (tree codes).

Conclusion
In this work, we have introduced the Offsite AT approach, which automates the process of identifying the most efficient implementation variant(s) from a pool of possible variants at installation time. Offsite ranks variants by their performance using analytic performance predictions. To facilitate specifying tuning scenarios, multilevel YAML description languages allow these scenarios to be described abstractly and enable Offsite to automatically generate optimized code. Moreover, we have demonstrated that Offsite can reliably tune a representative class of parallel explicit ODE methods, PIRK methods, by investigating different AT scenarios and AT strategies on three different shared-memory platforms.