Fast Bayesian hyperparameter optimization on large datasets ∗

∗ This paper is an extended version of our AISTATS 2017 conference paper (Klein et al., 2017a).

Abstract: Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.


Introduction
The performance of many machine learning algorithms hinges on certain hyperparameters. For example, the prediction error of non-linear support vector machines depends on the regularization and kernel hyperparameters C and γ, and modern neural networks are sensitive to a wide range of hyperparameters, including learning rates, momentum terms, number of units per layer, dropout rates, weight decay, etc. (Montavon et al., 2012). The poor scaling of naïve methods like grid search with dimensionality has driven interest in more sophisticated hyperparameter optimization methods over the past years (Bergstra et al., 2011; Hutter et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2012; Bardenet et al., 2014; Bergstra et al., 2014; Swersky et al., 2013; Snoek et al., 2014, 2015). Bayesian optimization has emerged as an efficient framework, achieving impressive successes. For example, in several studies, it found better instantiations of convolutional network hyperparameters than domain experts, repeatedly improving the top score on the CIFAR-10 (Krizhevsky, 2009) benchmark without data augmentation (Snoek et al., 2012; Domhan et al., 2015; Snoek et al., 2015).
In the traditional setting of Bayesian hyperparameter optimization, the loss of a machine learning algorithm with hyperparameters x ∈ X is treated as the "black-box" problem of finding arg min x∈X f (x), where the only mode of interaction with the objective f is to evaluate it for inputs x ∈ X. If individual evaluations of f on the entire dataset require days or weeks, only very few evaluations are possible, limiting the quality of the best found value. Human experts instead often study performance on subsets of the data first, to become familiar with its characteristics before gradually increasing the subset size (Bottou, 2012;Montavon et al., 2012). This approach can still outperform contemporary Bayesian optimization methods.
Motivated by the experts' strategy, we leverage dataset size as an additional degree of freedom that enriches the representation of the optimization problem. We treat the size N_sub of a randomly subsampled dataset as an additional input of the black-box function and allow the optimizer to choose it actively at each function evaluation. This allows Bayesian optimization to mimic and improve upon human experts when exploring the hyperparameter space. Note that N_sub is not a hyperparameter itself; the goal remains good performance on the full dataset, i.e., at N_sub = N.
While in this paper we focus on hyperparameter optimization for large datasets, our method could in principle also be applied to other scenarios where cheap but potentially biased and noisy approximations of the actual objective function are available, for instance in the setting of Kandasamy et al. (2016), who introduce a Bayesian optimization variant that optimizes expensive functions by exploiting cheaper fidelities. Our method's only assumption is that one can define a suitable basis function to describe the similarity between the objective function and its approximations. Another interesting application would be likelihood-free inference, where Bayesian optimization has been applied successfully before (Gutmann and Corander, 2016).
Hyperparameter optimization for large datasets has been explored by other authors before. Our approach is similar to Multi-Task Bayesian optimization by Swersky et al. (2013), where knowledge is transferred between a finite number of correlated tasks. If these tasks represent manually-chosen subset-sizes, this method also tries to find the best configuration for the full dataset by evaluating smaller, cheaper subsets. However, the discrete nature of tasks in that approach requires evaluations on the entire dataset to learn the necessary correlations. Instead, our approach exploits the regularity of performance across dataset size, enabling generalization to the full dataset without evaluating it directly.
Other approaches for hyperparameter optimization on large datasets include work by Nickson et al. (2014), who estimated a configuration's performance on a large dataset by evaluating several training runs on small, random subsets of fixed, manually-chosen sizes. Krueger et al. (2015) showed that, in practical applications, small subsets can suffice to estimate a configuration's quality, and proposed a cross-validation scheme that sequentially tests a fixed set of configurations on a growing subset of the data, discarding poorly-performing configurations early. Li et al. (2017) proposed a multi-armed bandit strategy, called Hyperband, which dynamically allocates more and more resources to randomly sampled configurations based on their performance on subsets of the data. Hyperband ensures that only well-performing configurations are trained on the full dataset while discarding bad ones early. Despite its simplicity, in their experiments the method outperformed well-established Bayesian optimization algorithms.
The remainder of the paper is structured as follows: In §2, we review Bayesian optimization, in particular the Entropy Search algorithm (Hennig and Schuler, 2012) on which our method is based. In §3, we show that subsets of the training data are often sufficient to reason about the performance of a hyperparameter configuration. In §4, we review previous approaches, namely Multi-Task Bayesian optimization and Hyperband. In §5, we introduce our new Bayesian optimization method Fabolas for hyperparameter optimization on large datasets. In each iteration, Fabolas chooses the configuration x and dataset size N_sub predicted to yield, per unit of time spent, the most information about the loss-minimizing configuration on the full dataset. Finally, in §6, a broad range of experiments with support vector machines and various deep neural networks shows that Fabolas often identifies good hyperparameter settings 10 to 100 times faster than state-of-the-art Bayesian optimization methods acting on the full dataset, as well as Hyperband.

Bayesian optimization
Given a black-box function f : X → R, Bayesian optimization aims to find an input x_* ∈ arg min_{x ∈ X} f(x) that globally minimizes f. It requires a prior p(f) over the objective function f and an acquisition function a_{p(f)} : X → R quantifying the utility of an evaluation at any x. Popular choices for the objective model are Gaussian processes (Snoek et al., 2012) (see Section 2.1), random forests (Hutter et al., 2011), or Bayesian neural networks (Snoek et al., 2015; Springenberg et al., 2016).
With these ingredients, the following three steps are iterated (Brochu et al., 2010): (1) find the most promising x_{n+1} ∈ arg max_{x ∈ X} a_{p(f | D_n)}(x) by numerical optimization; (2) evaluate the expensive and often noisy function y_{n+1} ∼ f(x_{n+1}) + N(0, σ²) and add the resulting data point (x_{n+1}, y_{n+1}) to the set of observations D_n = {(x_j, y_j)}_{j=1,…,n}; and (3) update p(f | D_{n+1}) and a_{p(f | D_{n+1})}. Algorithm 1 shows pseudocode for Bayesian optimization. Typically, evaluations of the acquisition function a are cheap compared to evaluations of f, so that the effort of optimizing it is negligible.
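For illustration, a minimal Python sketch of this generic loop is given below. The model-fitting routine fit_model and the acquisition function acquisition are assumed to be supplied by the user (e.g., a GP fit and Expected Improvement); this is not the implementation used in the paper.

```python
# Minimal sketch of the generic Bayesian optimization loop (Algorithm 1).
# `f` is the expensive objective; `fit_model` and `acquisition` are assumed
# user-supplied helpers (e.g. a GP fit and EI).
import numpy as np
from scipy.optimize import minimize

def bayesian_optimization(f, bounds, fit_model, acquisition,
                          n_init=5, n_iter=50, rng=np.random):
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + (hi - lo) * rng.rand(n_init, len(bounds))       # random initial design
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        model = fit_model(X, y)                               # step (3): update p(f | D_n)
        # Step (1): maximize the acquisition function with restarted local search.
        starts = lo + (hi - lo) * rng.rand(20, len(bounds))
        cands = [minimize(lambda x: -acquisition(model, x), s, bounds=bounds).x
                 for s in starts]
        x_next = max(cands, key=lambda x: acquisition(model, x))
        # Step (2): evaluate the expensive, noisy objective and store the result.
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
    return X[np.argmin(y)]                                    # best observed configuration
```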

Gaussian processes
Gaussian processes (GPs) are a prominent choice for p(f), thanks to their descriptive power and analytic tractability (e.g., Rasmussen and Williams, 2006). Formally, a GP is a collection of random variables such that every finite subset of them follows a multivariate normal distribution. A GP is identified by a mean function m (often set to m(x) = 0 for all x ∈ X) and a positive definite covariance function (kernel) k(x, x'). Given observations D_n = {(x_j, y_j)}_{j=1,…,n} = (X, y) with a joint Gaussian likelihood p(y | X, f(X)), the posterior p(f | D_n) follows another GP, with mean and covariance functions of tractable, analytic form.
The covariance function determines how observations influence the prediction. For the hyperparameters we wish to optimize, we adopt the Matérn 5/2 kernel (Matérn, 1960) in its Automatic Relevance Determination (ARD) form (MacKay and Neal, 1994). This stationary, twice-differentiable model is a relatively standard choice in the Bayesian optimization literature. In contrast to the Gaussian kernel popular elsewhere, it makes less restrictive smoothness assumptions, which can be helpful in the optimization setting (Snoek et al., 2012):

k_{5/2}(x, x') = θ (1 + √(5 r²(x, x')) + (5/3) r²(x, x')) exp(−√(5 r²(x, x'))),  with r²(x, x') = Σ_{d=1}^{D} (x_d − x'_d)² / λ_d².   (1)

Here, θ and λ_1, …, λ_D are free parameters (hyperparameters of the GP surrogate model). An additional hyperparameter of the GP model is an overall noise variance needed to handle noisy observations. For clarity: these GP hyperparameters are internal hyperparameters of the Bayesian optimizer, as opposed to the hyperparameters of the target machine learning algorithm to be tuned. Section 5.4 describes how we handle them.
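For concreteness, a small numpy sketch of the Matérn 5/2 ARD kernel from Eq. 1 could look as follows; theta is the amplitude θ and lam the vector of length scales λ_1, …, λ_D.

```python
# Sketch of the Matérn 5/2 kernel with ARD length scales (Eq. 1).
import numpy as np

def matern52_ard(X1, X2, theta=1.0, lam=None):
    X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
    lam = np.ones(X1.shape[1]) if lam is None else np.asarray(lam)
    diff = (X1[:, None, :] - X2[None, :, :]) / lam      # per-dimension scaled differences
    r2 = np.sum(diff ** 2, axis=-1)                     # squared distance r^2(x, x')
    r = np.sqrt(5.0 * r2)
    return theta * (1.0 + r + 5.0 * r2 / 3.0) * np.exp(-r)
```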

Acquisition functions
The role of the acquisition function is to trade off exploration and exploitation. Popular choices include Expected Improvement (EI) (Mockus et al., 1978), Upper Confidence Bound (UCB) (Srinivas et al., 2010), Entropy Search (ES) (Hennig and Schuler, 2012), and Predictive Entropy Search (PES) (Hernández-Lobato et al., 2014). In our experiments, we use EI and ES. We found EI to perform robustly in most applications, providing a solid baseline; it is defined as

a_EI(x) = E_{p(f | D)} [ max(f_min − f(x), 0) ],   (2)

where f_min is the best function value known so far (also called the incumbent). This expected drop below the best known value is high for points predicted to have a small mean and/or a large variance. In our experience, its performance is comparable to that of UCB, which is why we do not include UCB in our later experiments. Both ES and PES use the information gained about the location of the minimum as the measure of utility. This quantity takes global information into account, in contrast to the local nature of EI and UCB. The difference between ES and PES stems from the different approximations made to compute the acquisition function, not from a conceptual distinction. Why we decided to use ES over PES is discussed in Section 5.4. Because of the complexity of the algorithm, and to provide the detail necessary to extend ES to our method, the following section introduces it in depth.
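Under a GP posterior with predictive mean mu and standard deviation sigma at x, the expectation in Eq. 2 has the well-known closed form sketched below; this is a generic illustration rather than the paper's code.

```python
# Closed-form Expected Improvement (Eq. 2) under a Gaussian predictive
# distribution with mean mu and standard deviation sigma; f_min is the incumbent.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```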

Entropy search
Entropy Search is a more recent acquisition function that selects evaluation points based on the predicted information gain about the optimum, rather than aiming to evaluate near the optimum. At the heart of ES lies the probability p_min(x | D), the belief about the function's minimum given the prior on f and the observations D. Given p(f), the probability that a point x is the minimum is defined, with suggestive notation, as

p_min(x | D) := p(x ∈ arg min_{x' ∈ X} f(x') | D) = ∫ p(f | D) ∏_{x' ∈ X \ {x}} Θ(f(x') − f(x)) df,   (3)

where Θ is the Heaviside step function. The product in this equation runs over an infinite domain (yet is well-defined if p(f | D) is sufficiently regular). In practice, it has to be represented in finite form. We follow the approach of Hennig and Schuler (2012), who approximate p(f | D) by a finite-dimensional Gaussian over an irregular grid of points r_1, …, r_Z, designed heuristically to provide good interpolation resolution on p_min. Like Hennig and Schuler (2012), we sample these so-called representer points using Expected Improvement. This step reduces p_min to a discrete distribution and turns the infinite product in Equation 3 into a finite one. That distribution itself is still analytically intractable, but an analytically tractable (in particular, differentiable) approximation q_min(r_j) of good empirical quality can be computed using Expectation Propagation (EP) (Minka, 2001), at a computational cost of O(Z^4). EP yields not only p_min but also its gradient with respect to the means and covariances of the model at the representer points, allowing efficient computations after an expensive initial calculation of these quantities. This particular application of EP to Gaussian integrals, dubbed EPMGP, was introduced by Cunningham et al. (2012), where all details can be found.
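To make the quantity p_min concrete, the sketch below approximates it over the representer points by simple Monte Carlo sampling from their joint Gaussian posterior. The paper uses the EPMGP approximation instead, so this is purely illustrative.

```python
# Monte Carlo stand-in for computePmin: estimate the probability that each
# representer point attains the minimum by sampling from N(mu, Sigma) at
# r_1, ..., r_Z (not the EP-based approximation used in the paper).
import numpy as np

def pmin_monte_carlo(mu, Sigma, n_samples=5000, rng=np.random):
    Z = len(mu)
    f = rng.multivariate_normal(mu, Sigma, size=n_samples)   # joint samples at the representers
    counts = np.bincount(np.argmin(f, axis=1), minlength=Z)  # how often each point is the minimum
    return counts / float(n_samples)
```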
The information gain at x is then measured by the expected Kullback-Leibler divergence (relative entropy) between p_min(· | D ∪ {(x, y)}) and the uniform distribution u over X, with the expectation taken over the measurement y to be obtained at x:

a_ES(x) := E_{p(y | x, D)} [ ∫ p_min(x* | D ∪ {(x, y)}) log ( p_min(x* | D ∪ {(x, y)}) / u(x*) ) dx* ].   (4)

The primary numerical challenge in this framework is the computation of p_min(· | D) and of the integral above; due to their intractability, several approximations have to be made. Algorithm 2 provides pseudocode for our implementation of Entropy Search. Lines 1-12 precompute various quantities needed for evaluating the acquisition function, which is then optimized in line 13. Specifically, after sampling K hyperparameter settings from the marginal log-likelihood of the GP using MCMC (line 1), for every hyperparameter setting θ_i the algorithm
• fits a GP (line 4),
• samples representer points with respect to EI (line 5),
• stores the representer points and their log EI values, U ∈ R^{K×Z} (lines 6 and 7),
• computes the mean μ and covariance Σ of the joint predictive distribution at the representer points (line 8),
• computes p_min given μ and Σ, using EPMGP (line 9), and
• draws P random vectors from a standard normal distribution (line 10) for the innovations in Algorithm 4, and stores them as Ω ∈ R^{K×Z×P} (line 11) for later use.

Algorithm 2 Selection of next point by Entropy Search
The first lines (sampling the K GP hyperparameter settings via MCMC, fitting a GP M^(i) for each θ_i, and sampling and storing the representer points r_1, …, r_Z with respect to EI) are enumerated in the text above; the remaining steps are:
7: Store the log EI values of the representer points, U ∈ R^{K×Z}
8: Let μ, Σ be the mean and covariance matrix at r_1, …, r_Z based on M^(i)
9: p_min[i] ← computePmin(μ, Σ)   ▷ probability of each r_1, …, r_Z to be the minimum
10: For p = 1, …, P: ω_p ∼ N(0, I_Z)   ▷ stochastic change to hallucinate P values at the representer points
11: Store the stochastic changes for the innovations, Ω ∈ R^{K×Z×P}
12: end for
13: Maximize the ES acquisition function (Algorithm 3) to select the next point
Given these quantities, Algorithm 3 then computes the ES acquisition function from Equation 4.
For each GP hyperparameter sample θ_i, Algorithm 3 then carries out the following steps: it lets M^(i) be the model trained on D with hyperparameters θ_i (line 3), computes the predictive mean μ and variance σ² at x based on M^(i) (line 4), and computes the mean μ and covariance matrix Σ at r_1, …, r_Z based on M^(i) (line 5). Then, for each of the P hallucinated values (line 6), it computes the change (Δμ, ΔΣ) in the posterior belief at r_1, …, r_Z that would result from an evaluation at x (line 7), computes the updated q_min ← computePmin(μ + Δμ, Σ + ΔΣ) (line 8), and accumulates the resulting information gain. Finally, it returns the average (1/K) a(x) over the K hyperparameter samples (line 13).

This algorithm in turn makes use of Algorithm 4 to compute the innovations, which
• computes the change in the mean, Δμ, by first computing the correlation Σ(x, r) between x and the representer points r_1, …, r_Z and multiplying it with the Cholesky decomposition of k(x, x) and the vector ω ∈ Ω; note that this change is stochastic (line 1), and
• computes the change of the covariance, ΔΣ, which is deterministic (line 2).

Despite the conceptual and computational complexity of ES, it offers a well-defined concept for the information gained from function evaluations, which can be meaningfully traded off against other quantities, such as the evaluations' cost.
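The innovations themselves are short; under the usual GP update rules they amount to the rank-one change sketched below. Here k_xr denotes the posterior covariance vector between x and the representer points, k_xx the posterior variance at x including observation noise, and omega one of the stored standard-normal draws; this is an illustrative reading of Algorithm 4, not a verbatim copy.

```python
# Sketch of the innovations: stochastic change of the posterior mean and
# deterministic change of the posterior covariance at the representer points
# if we were to evaluate at x.
import numpy as np

def innovations(k_xr, k_xx, omega):
    # k_xr: posterior covariances Cov[f(r_j), f(x)], j = 1..Z
    # k_xx: posterior variance at x plus observation noise (scalar)
    # omega: one standard-normal draw from the stored matrix Omega
    d_mu = k_xr / np.sqrt(k_xx) * omega          # stochastic mean change at the representers
    d_Sigma = -np.outer(k_xr, k_xr) / k_xx       # deterministic covariance change
    return d_mu, d_Sigma
```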

Reasoning across dataset subsets
The runtime of machine learning algorithms usually scales polynomially with the number of data points N_sub, i.e., it is O(N_sub^α) for some positive α. While the computational cost of training thus grows with the number of training samples, the loss of machine learning methods usually decreases with it. The computational cost is often largely independent of the hyperparameter values, but the loss depends crucially on them (which is the reason we want to optimize them in the first place).
For an intuition of how performance changes with dataset size, we evaluated a grid of 400 configurations of a support vector machine (SVM) on subsets of the MNIST dataset (LeCun et al., 2001); MNIST has N = 50000 data points, and we evaluated relative subset sizes s := N_sub/N ∈ {1/512, 1/256, 1/128, …, 1/4, 1/2, 1}. Figure 1 visualizes the validation error (top) and training time (bottom) of these configurations for s = 1/128, 1/16, 1/4, and 1. Evidently, just 1/128 of the dataset is quite representative and sufficient to locate a reasonable configuration, and there are no deceiving local optima on smaller subsets. The training time, however, increases substantially with the number of data points: single configurations take only a few seconds to train at s = 1/128, but can take up to a few hours on the full dataset. Based on these observations, we expect relatively small fractions of the dataset to yield representative performance, and we therefore vary our relative size parameter s on a logarithmic scale.

Previous work
Making use of dataset subsets to speed up hyperparameter optimization has been investigated before. In this section we present two approaches that are similar to ours, namely Multi-Task Bayesian Optimization and Hyperband.

Multi-task Bayesian optimization
The Multi-Task Bayesian optimization (MTBO) method by Swersky et al. (2013) is a general framework for optimization in the presence of different, but correlated, tasks. Given a set of such tasks T = {1, …, T}, the objective function f : X × T → R corresponds to evaluating a given x ∈ X on one of the tasks t ∈ T. The relation between points in X × T is modeled via a GP using the product kernel k((x, t), (x', t')) = k_T(t, t') · k_X(x, x'), where k_X is a standard kernel over hyperparameters. The task kernel k_T is represented implicitly by the Cholesky decomposition of k_T(T, T), whose entries are sampled via MCMC together with the other hyperparameters of the GP. By considering the distribution p_min over the optimum on the target task, and computing the information gained about it, Swersky et al. (2013) use the information gain per unit cost as their acquisition function:

a_MT(x, t) := (1/c(x, t)) E_{p(y | x, t, D)} [ ∫ p_min(x* | D') log ( p_min(x* | D') / u(x*) ) dx* ],   (6)

where D' = D ∪ {(x, t, y)}. The expectation represents the information gain on the target task, averaged over the possible outcomes of f(x, t) under the current model. If the cost c(x, t) of evaluating a configuration x on task t is not known a priori, it can be modelled in the same way as the objective function. This model supports machine learning hyperparameter optimization for large datasets by using discrete dataset sizes as tasks. Swersky et al. (2013) indeed studied this approach for the special case of T = {0, 1}, representing a small and a large dataset; this will be a baseline in our experiments.
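As an illustration of this product kernel, a sketch with a free-form task covariance (parameterized through its Cholesky factor L, whose entries the paper samples by MCMC) could look as follows; kernel_x stands for any kernel over the hyperparameters, e.g. the Matérn 5/2 sketch above.

```python
# Sketch of the MTBO product kernel over (x, t) pairs; t1, t2 are integer task
# indices aligned with the rows of X1 and X2, L is the Cholesky factor of the
# task covariance, and kernel_x is an assumed kernel over hyperparameters.
import numpy as np

def multitask_kernel(X1, t1, X2, t2, L, kernel_x, **kernel_args):
    K_T = L @ L.T                        # positive semi-definite task covariance
    K_x = kernel_x(X1, X2, **kernel_args)
    return K_T[np.ix_(t1, t2)] * K_x     # product kernel k_T(t, t') * k_X(x, x')
```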

Hyperband
Hyperband (Li et al., 2017) is a multi-armed bandit strategy based on random search. It was developed concurrently with our method and, similar to it, makes use of the principle that hyperparameter configurations which perform poorly on subsets of the data are very likely to also perform poorly on the full dataset.
In each iteration i, Hyperband samples n_i configurations at random and uses successive halving to discard hyperparameter configurations after evaluating them on subsets of the data. Hyperband iteratively calls successive halving with different trade-offs between breadth (i.e., the number of configurations) and depth (i.e., the subset size), such that each iteration takes roughly the same time. Hyperband returns its first suggested hyperparameter setting only after its first run of successive halving has finished.
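For reference, a compact sketch of the successive halving subroutine (with the budget interpreted as the data subset size, and sample_config and evaluate supplied by the user) is given below; it follows the general scheme rather than the exact implementation of Li et al. (2017).

```python
# Sketch of successive halving: start n configurations on a small budget,
# keep the best 1/eta fraction, and multiply the budget by eta until the
# maximum budget (full dataset) is reached.
import numpy as np

def successive_halving(sample_config, evaluate, n, min_budget, max_budget, eta=3):
    configs = [sample_config() for _ in range(n)]
    budget = min_budget
    while True:
        losses = np.array([evaluate(c, budget) for c in configs])
        if budget >= max_budget or len(configs) == 1:
            break
        keep = max(1, int(len(configs) / eta))              # keep the best 1/eta fraction
        configs = [configs[i] for i in np.argsort(losses)[:keep]]
        budget = min(budget * eta, max_budget)               # grow the subset size
    return configs[int(np.argmin(losses))]
```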

Fabolas
Here, we introduce our new approach for FAst Bayesian Optimization on LArge data Sets (Fabolas). While traditional Bayesian hyperparameter optimizers model the loss of machine learning algorithms on a given dataset as a black-box function f to be minimized, Fabolas models loss and computational cost across dataset sizes and uses these models to carry out Bayesian optimization with an extra degree of freedom. The black-box function f : X × R → R now takes an additional input representing the data subset size; we will use relative sizes s = N_sub/N ∈ [0, 1], with s = 1 representing the entire dataset. While the eventual goal is to minimize the loss f(x, s = 1) for the entire dataset, evaluating f for smaller s is usually cheaper, and the function values obtained correlate across s. Unfortunately, this correlation structure is initially unknown, so the challenge is to design a strategy that trades off the cost of function evaluations against the benefit of learning about the scaling behavior of f and, ultimately, about which configurations work best on the full dataset. Following the nomenclature of Williams et al. (2000), we call s ∈ [0, 1] an environmental variable that can be changed freely during optimization, but is set to s = 1 (i.e., the entire dataset) at evaluation time.
We propose a principled rule for automatically selecting the next (x, s) pair to evaluate. In a nutshell, where standard Bayesian optimization would always run configurations on the full dataset, we use ES to reason about how much can be learned about performance on the full dataset from an evaluation at any s. In doing so, Fabolas automatically determines the amount of data necessary to (usefully) extrapolate to the full dataset.

Modelling loss and computational cost
To transfer the insights from the illustrative example in Section 3 into a formal model for the loss and cost across subset sizes, we extend the GP model by an additional input dimension, namely s ∈ [0, 1]. This allows the surrogate to extrapolate to the full dataset at s = 1 without necessarily evaluating there. We chose a factorized kernel, consisting of the standard stationary kernel over hyperparameters multiplied with a finite-rank ("degenerate") covariance function in s:

k((x, s), (x', s')) = k_{5/2}(x, x') · (φ(s)^T Σ_φ φ(s')),   (7)

where φ(s) is a vector of basis functions and Σ_φ the Gaussian prior covariance of the corresponding weights. Since any choice of the basis functions φ yields a positive semi-definite covariance function, this provides a flexible language for prior knowledge relating to s. We use the same form of kernel to model the loss f and the cost c, respectively, but with different basis functions φ_f and φ_c. The loss of a machine learning algorithm usually decreases with more training data. We incorporate this behavior by choosing φ_f(s) = (1, (1 − s)²)^T, which enforces monotone predictions with an extremum at s = 1. This kernel choice is equivalent to Bayesian linear regression with these basis functions and Gaussian priors on the weights.
To model the computational cost c, we note that the complexity usually grows with the relative dataset size s. To fit polynomial complexity O(s^α) for arbitrary α and simultaneously enforce positive predictions, we model the log-cost and use φ_c(s) = (1, s)^T. As above, this amounts to Bayesian linear regression with these basis functions. Figure 2 shows some examples of our basis functions. Figure 3 visualizes the scaling of loss and cost with s for some random SVM configurations from Section 3.
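To make Eq. 7 and the two basis functions concrete, a minimal numpy sketch could look as follows; W is a free 2×2 positive semi-definite weight matrix (here parameterized through its Cholesky factor), standing in for the prior covariance Σ_φ of the implied Bayesian linear model.

```python
# Sketch of the finite-rank ("degenerate") covariance over the relative
# dataset size s; multiply the result with the Matérn kernel over x to obtain
# the full kernel of Eq. 7.
import numpy as np

def phi_loss(s):                     # basis for the loss: monotone, extremum at s = 1
    s = np.atleast_1d(s).astype(float)
    return np.stack([np.ones_like(s), (1.0 - s) ** 2], axis=-1)

def phi_cost(s):                     # basis for the log-cost: growth in s
    s = np.atleast_1d(s).astype(float)
    return np.stack([np.ones_like(s), s], axis=-1)

def size_kernel(s1, s2, L, phi=phi_loss):
    W = L @ L.T                      # PSD weight covariance of the linear model
    return phi(s1) @ W @ phi(s2).T   # finite-rank covariance phi(s)^T W phi(s')
```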

Algorithm description
Fabolas starts with an initial design, described in more detail in Section 5.3. Afterwards, at the beginning of each iteration, it fits GPs for loss and computational cost across dataset sizes s using the kernel from Eq. 7. Then, capturing the distribution of the optimum for s = 1 by p_min^{s=1}(x | D) := p(x ∈ arg min_{x'∈X} f(x', s = 1) | D), it selects the maximizer of the following acquisition function, which trades off information gain against cost:

a_F(x, s) := E_{p(y | x, s, D)} [ ∫ p_min^{s=1}(x* | D ∪ {(x, s, y)}) log ( p_min^{s=1}(x* | D ∪ {(x, s, y)}) / u(x*) ) dx* ] / ( c(x, s) + c_overhead ),

where c(x, s) is the predicted cost of evaluating x on a subset of relative size s and c_overhead is the computational overhead of the optimizer itself. Our proposed acquisition function resembles that of MTBO (Eq. 6), with two differences. First, MTBO's discrete tasks t are replaced by a continuous dataset size s, allowing correlations to be learned without evaluations at s = 1 and the appropriate subset size to be chosen automatically. Second, the prediction of computational cost is augmented by the overhead of the Bayesian optimization method. Including this reasoning overhead is important to appropriately reflect the information gain per unit time spent: it does not matter whether the time is spent on a function evaluation or on reasoning about which evaluation to perform. In practice, due to the cubic scaling of GPs in the number of data points and the cost of approximating p_min^{s=1}, the additional overhead of Fabolas is on the order of minutes, so that differences in evaluation cost on the order of seconds become negligible in comparison. Being an anytime algorithm, Fabolas keeps track of its incumbent at each time step. To select a configuration that performs well on the full dataset, it predicts the loss of all evaluated configurations at s = 1 using the GP model and picks the minimizer. We found this to work more robustly than globally minimizing the posterior mean or similar approaches.
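In code, the resulting selection criterion is simply a ratio; the sketch below assumes hypothetical helpers information_gain (an ES-style expected information gain about p_min at s = 1) and predict_log_cost (the cost GP, fit in log space), neither of which is spelled out here.

```python
# Sketch of the Fabolas acquisition: expected information gain about the
# optimum at s = 1, divided by predicted evaluation cost plus the optimizer's
# own overhead in seconds.
import numpy as np

def fabolas_acquisition(x, s, information_gain, predict_log_cost, overhead_seconds):
    gain = information_gain(x, s)              # expected gain about p_min at s = 1
    cost = np.exp(predict_log_cost(x, s))      # cost model is fit in log space
    return gain / (cost + overhead_seconds)    # information per second spent
```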

Initial design
It is common in Bayesian optimization to start with an initial design of points chosen at random or from a Latin hypercube design, so that the first GP models are reasonable. To fully leverage the speedups obtainable from evaluating small datasets, we bias this selection towards points with small (cheap) subsets in order to improve the prediction of the dependence on s: we draw k random points in X (k = 10 in our experiments) and evaluate each of them on different subsets of the data (for instance, in the support vector machine experiments we used s ∈ {1/64, 1/32, 1/16, 1/8}). This provides information on the scaling behavior, and, assuming that costs increase linearly or superlinearly with s, these k function evaluations cost less than k/4 function evaluations on the full dataset. This matters because the cost of the initial design, of course, counts towards Fabolas' runtime.
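A sketch of this biased initial design, with the subset sizes from the SVM experiments as defaults and an objective f(x, s) taking a configuration and a relative subset size, might look like this:

```python
# Sketch of the initial design: draw k random configurations and evaluate each
# of them on several small subsets to learn the scaling behaviour in s.
import numpy as np

def initial_design(f, bounds, subset_sizes=(1/64, 1/32, 1/16, 1/8), k=10, rng=np.random):
    lo, hi = np.array(bounds, dtype=float).T
    X, S, y = [], [], []
    for _ in range(k):
        x = lo + (hi - lo) * rng.rand(len(bounds))
        for s in subset_sizes:                  # cheap evaluations on small subsets
            X.append(x); S.append(s); y.append(f(x, s))
    return np.array(X), np.array(S), np.array(y)
```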

Implementation details
The presentation of Fabolas above omits some details that impact the performance of our method. As has become standard in Bayesian optimization (Snoek et al., 2012), we use Markov chain Monte Carlo (MCMC) integration to marginalize over the GP's hyperparameters (we use the emcee package (Foreman-Mackey et al., 2013)). To accelerate the optimization, we use hyperpriors that emphasize meaningful values for the parameters, chiefly adopting the choices of the spearmint toolbox (Snoek et al., 2012): a uniform prior on [−10, 2] for all length scales λ in log space, a lognormal prior (μ_a = 0, σ²_a = 1) for the covariance amplitude θ, and a horseshoe prior with length scale 0.1 for the noise variance σ².
We used the original formulation of ES by Hennig and Schuler (2012) rather than the recent reformulation of PES by Hernández-Lobato et al. (2014). The main reason for this is that the latter prohibits non-stationary kernels due to its use of Bochner's theorem for a spectral approximation. PES could in principle be extended to work for our particular choice of kernels (using an Eigen-expansion, from which we could sample features); since this would complicate making modifications to our kernel, we leave it as an avenue for future work, but note that in any case it may only further improve our method. To maximize the acquisition function we used the blackbox optimizer DIRECT (Jones, 2001).

Heteroscedastic noise
When making the subset size a parameter, we shuffle the data before each evaluation to prevent the bias incurred by repeatedly using the same subset. This shuffling introduces additional noise, which could be particularly high for small subsets. To investigate this, we again used the SVM grid of 400 configurations from Section 3. We repeated each evaluation at a given subset size K = 10 times using different subsets, and estimated the observation noise variance at each point as

σ²_obs(x_j, s_i) = (1/(K − 1)) Σ_{k=1}^{K} ( y_k(x_j, s_i) − μ_{i,j} )²,  where μ_{i,j} = (1/K) Σ_{k=1}^{K} y_k(x_j, s_i).

The red points in Figure 4 show the mean and standard deviation of σ²_obs(x_j, s_i) over all configurations for all s_i values considered. As expected, the noise decreases with increasing s, to the point where σ²_obs is zero for s = 1. In contrast to this heteroscedastic noise intrinsic to random subsampling, the commonly used noise hyperparameter σ² of a GP (call it σ²_GP) is constant and typically estimated by MCMC sampling. To compare these two noise values, for each fixed size s we also trained a GP to predict the losses and plotted its estimates σ²_GP as blue markers in Figure 4. To obtain a good estimate of the GP's hyperparameters, we used a relatively long MCMC chain compared to the ones used during Bayesian optimization. Figure 4 clearly shows that the estimated variance σ²_GP is always larger than the observation noise σ²_obs. This might indicate a certain misfit between the true objective and the space of functions the GP can model (Sollich, 2002). Consequently, we believe the heteroscedastic noise from subsampling the data to often be negligible compared to the noise estimated by MCMC sampling.
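A small numpy sketch of this noise estimate, for an array of K repeated evaluations per (configuration, subset size) pair, could look as follows:

```python
# Empirical observation-noise estimate from K repeated evaluations on freshly
# shuffled subsets; y has shape (n_configs, n_sizes, K).
import numpy as np

def observation_noise(y):
    mu = y.mean(axis=-1, keepdims=True)                      # mu_{i,j}
    return ((y - mu) ** 2).sum(axis=-1) / (y.shape[-1] - 1)  # unbiased variance estimate
```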

Experiments
For our empirical evaluation of Fabolas, we compared it to standard Bayesian optimization (using EI and ES as acquisition functions), MTBO, and Hyperband. For each method, we tracked wall-clock time (counting both the optimization overhead and the cost of function evaluations, including the initial design), storing the incumbent returned after every iteration. In an offline validation step, we then trained models with all incumbents on the full dataset and measured their test error. To obtain error bars, we performed 10 independent runs of each method with different seeds (except on the grid experiment, where we could afford 30 runs per method) and plot mean and standard deviation for all experiments. Each optimization trajectory starts after all of its runs have evaluated at least one configuration. We implemented Hyperband following Li et al. (2017), using the recommended setting η = 3 for the parameter that controls the intermediate subset sizes. For each experiment, we adjusted the budget allocated to each Hyperband iteration to allow the same minimum dataset size as for Fabolas: 100 data points for the support vector machine benchmarks and the maximum batch size for the neural network benchmarks. We also followed the prescribed incumbent estimation, selecting after each iteration the configuration with the best performance on the full dataset size.

Support vector machine surrogate
First, we considered a benchmark allowing the comparison of the various Bayesian optimization methods on ground truth: we trained a random forest surrogate (Eggensperger et al., 2015) on our SVM grid on MNIST (described in Section 3), for which we had performed all function evaluations beforehand.
We used this benchmark to adjust the number of data points for MTBO's auxiliary task. Figure 5 (left) evaluates MTBO variants with a single auxiliary task of relative size 1/4, 1/16, 1/32, and 1/512, respectively. We found that the smaller the auxiliary task, the faster MTBO improved initially, but the slower it converged to the optimum. In the plot, MTBO with an auxiliary task of relative size s = 1/512 did not achieve the same final performance as the other variants. Given the global structure of the error surface (see Figure 1) and the super-linear scaling of the SVM, we chose a very conservative auxiliary task with s = 1/4 for the remaining experiments. This value worked consistently in our experience, although convergence to the best solution in some of the later benchmarks was still rather slow.
At first glance, one might expect many tasks (e.g., one task for each s value above) to work best, but quite the opposite is true. In preliminary experiments, we evaluated MTBO with up to 3 auxiliary tasks (s = 1/4, 1/32, and 1/512), but found performance to degrade strongly with a growing number of tasks. We suspect that the |T|² kernel parameters that have to be learned for the discrete task kernel over |T| tasks are the main reason. If the MCMC sampling is too short, the correlations are not appropriately reflected, especially in early iterations; and an adjusted, longer sampling creates a large computational overhead that dominates the wall-clock time. We consistently obtained the best performance with only one auxiliary task.
We can now compare the different methods on this benchmark. The right panel of Figure 5 shows results for EI, ES, random search, Hyperband, MTBO and Fabolas. EI and ES performed equally well and found the best configuration (which yields an error of 0.014, or 1.4%) after around 10^5 seconds, roughly three times faster than random search. Hyperband outperformed EI and ES by roughly one order of magnitude. MTBO achieved good performance faster, requiring only around 2 × 10^3 seconds to find close-to-optimal solutions. Fabolas was roughly another order of magnitude faster than MTBO in finding good configurations, and found close-to-optimal solutions at the same time.

Support vector machines
For a more realistic scenario, we optimized the same SVM hyperparameters (see Table 1) without a surrogate on MNIST and two other prominent UCI datasets (gathered from OpenML (Vanschoren et al., 2014)), vehicle registration (Siebert, 1987) and forest cover types (Blackard and Dean, 1999), with more than 50000 data points. Training SVMs on these datasets can take several hours, and Figure 6 shows that Fabolas found good configurations for them between 10 and 1000 times faster than the other methods. On the other hand, both Fabolas and MTBO sometimes converged more slowly to the true optimum after their initial improvement. This could be a consequence of the GP model and the respective assumptions about the correlation across dataset sizes. Hyperband constitutes a very competitive optimizer on these benchmarks; the super-linear complexity of the SVM and the lower cost of good configurations allow Hyperband to recommend its first incumbent faster than the BO methods operating on the full dataset.

Convolutional neural networks

We experimented with hyperparameter optimization for CNNs on two well-established object recognition datasets, namely CIFAR10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011). We used the same setup for both datasets (a CNN with three convolutional layers, with batch normalization (Ioffe and Szegedy, 2015) in each layer, optimized using Adam (Kingma and Ba, 2014)). We considered a total of five hyperparameters: the initial learning rate, the batch size, and the number of units in each layer (see Table 2). For CIFAR10, we used 40000 images for training, 10000 to estimate the validation error, and the standard 10000 hold-out images to estimate the final test performance of incumbents. For SVHN, we used 6000 of the 73257 training images to estimate the validation error, the rest for training, and the standard 26032 images for testing.
The results in Figure 7 show that, compared to the SVM tasks, Fabolas' speedup was smaller because CNN training scales linearly in the number of data points. Nevertheless, it found good configurations about 10 times faster than vanilla Bayesian optimization. For the same reason of linear scaling, Hyperband was substantially slower than vanilla Bayesian optimization in making its first recommendation, but it did find good hyperparameter settings when given enough time.

Residual neural networks
In the final experiment, we evaluated the performance of our method on a more expensive benchmark, optimizing the validation performance of a deep residual network on the CIFAR10 dataset, using the original architecture from He et al. (2015). As hyperparameters we exposed the learning rate, the L2 regularization, the momentum, and the factor by which the learning rate is multiplied after 41 and 61 epochs (see Table 3).

Table 3: Hyperparameters for the deep residual network task.

Hyperparameter         Lower bound    Upper bound    Log scale
Learning rate          10^-6          1              yes
L2 regularization      10^-6          1              yes
Learning rate factor   10^-4          1              yes
Momentum               0.1            0.999          no

Figure 8 shows that Fabolas found configurations with reasonable performance roughly 10 times faster than ES and MTBO. As in the previous convolutional neural network experiment, Hyperband's first recommendation of an incumbent took longer than for the Bayesian optimization methods. However, after the first round of successive halving it had already found a very good configuration, which improved only slightly in the following iterations.

Conclusion
We presented Fabolas, a new Bayesian optimization method based on Entropy Search that mimics human experts in evaluating algorithms on subsets of the data to quickly gather information about good hyperparameter settings. Fabolas extends the standard way of modelling the objective function by treating the dataset size as an additional continuous input variable. This allows the incorporation of strong prior information. It models the time it takes to evaluate a configuration and aims to evaluate points that yield-per time spent-the most information about the globally best hyperparameters for the full dataset. In various hyperparameter optimization experiments using support vector machines and deep neural networks, Fabolas often found good configurations 10 to 100 times faster than the related approach of Multi-Task Bayesian optimization, Hyperband and standard Bayesian optimization. Our open-source code is available at https://github.com/automl/RoBO, along with scripts for reproducing our experiments.
In future work, we plan to extend our algorithm to model other environmental variables, such as the image resolution, the number of classes, and the number of training epochs, which we expect to yield additional speedups. Since our method reduces the cost of individual function evaluations but requires more of these cheaper evaluations, we expect the cubic complexity of Gaussian processes to become the limiting factor in many practical applications. We therefore plan to extend this work to other model classes, such as Bayesian neural networks (Neal, 1996; Hernández-Lobato and Adams, 2015; Blundell et al., 2015; Springenberg et al., 2016; Klein et al., 2017b), which may lower the computational overhead while offering similar predictive quality.