Data and performance of an active-set truncated Newton method with non-monotone line search for bound-constrained optimization

In this data article, we report data and experiments related to the research article entitled “A Two-Stage Active-Set Algorithm for Bound-Constrained Optimization” by Cristofari et al. (2017). The method proposed in Cristofari et al. (2017) tackles optimization problems with bound constraints by suitably combining an active-set estimate with a truncated Newton strategy. Here, we report the detailed numerical experience performed over a commonly used test set, namely CUTEst (Gould et al., 2015). First, the algorithm ASA-BCP proposed in Cristofari et al. (2017) is compared with the related method NMBC (De Santis et al., 2012). Then, a comparison with the renowned solvers ALGENCAN (Birgin and Martínez, 2002) and LANCELOT B (Gould et al., 2003) is reported.


Value of the data
The output data represent a benchmark for future comparisons among algorithms for box-constrained optimization.
The output data can be used by other researchers to tune parameters related to active-set strategies and truncated Newton methods.
The output data highlight how non-monotone line search procedures can be combined with active-set strategies.

Data
We report the data related to the numerical experience carried out to assess the performance of the algorithm ASA-BCP proposed in [3]. All computations have been run on an Intel Xeon(R) CPU E5-1650 v2 @ 3.50 GHz. The test set consists of 140 bound-constrained problems from the CUTEst collection [10], with dimension up to $10^5$. The stopping condition for all codes is $\|x - [x - g(x)]^\sharp\|_\infty < 10^{-5}$, where $g(\cdot)$ and $\|\cdot\|_\infty$ denote the gradient of the objective function and the sup-norm of a vector, respectively. In the following, we further denote by $[\cdot]^\sharp$ the projection of a vector onto the feasible set $[l, u] \subseteq \mathbb{R}^n$.
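The stopping condition above can be evaluated directly: for a box feasible set, the projection $[\cdot]^\sharp$ reduces to a componentwise clamp. The following is a minimal sketch of the check (the quadratic test function in the example is illustrative data, not from the article):

```python
import numpy as np

def projected_gradient_norm(x, g, l, u):
    """Sup-norm of x - [x - g(x)]#, where [.]# projects onto the box [l, u]."""
    projected = np.clip(x - g, l, u)      # [x - g(x)]#: componentwise clamp
    return np.max(np.abs(x - projected))

# Example: f(x) = 0.5 * ||x||^2, so g(x) = x, over the box [0, 1]^2.
x = np.array([0.0, 0.5])
g = x.copy()
print(projected_gradient_norm(x, g, np.zeros(2), np.ones(2)))  # prints 0.5
```

A point is accepted as stationary when this quantity drops below $10^{-5}$; at $x = (0, 0)^T$ in the example above the measure is exactly zero.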
Following the analysis suggested in [1], we checked whether the codes find different stationary points. The results are then reported by distinguishing whether all codes find the same stationary point (with a tolerance of $10^{-3}$) or not.

Experimental design, materials and methods
In the following, we provide the implementation details of ASA-BCP, according to the algorithmic scheme reported in [3]: ASA-BCP is a two-stage algorithmic framework that suitably combines the active-set estimate proposed in [7] with the non-monotone line search procedure described in [5] to handle box-constrained optimization problems.
In the first stage of ASA-BCP, an active-set estimate is employed to detect those variables that are at the bounds at a stationary point. This active-set estimate has also been used in the context of mixed-integer convex quadratic programming [2] and of $\ell_1$-regularized problems [6], and it depends on a parameter $\epsilon$. In Section 2.1, we detail how to properly update $\epsilon$ in order to satisfy the assumptions of Proposition 3.5 in [3].
In the second stage of ASA-BCP, a truncated Newton strategy is used in the subspace of the variables estimated non-active. Details on how we compute the search direction for the minimization with respect to the estimated non-active variables are reported in Section 2.2.
The following notation will be used. Given a feasible point $x^k$ produced by ASA-BCP, we write $A_l^k := A_l(x^k)$, $A_u^k := A_u(x^k)$ and $N^k := N(x^k)$ to denote the estimated active and non-active sets at $x^k$. In particular, $A_l^k$ denotes the set of variables estimated to be active at the lower bound $l$, and $A_u^k$ denotes the set of variables estimated to be active at the upper bound $u$. Similarly, given a feasible point $\tilde{x}^k$ produced by ASA-BCP, we write $\tilde{A}_l^k := A_l(\tilde{x}^k)$, $\tilde{A}_u^k := A_u(\tilde{x}^k)$ and $\tilde{N}^k := N(\tilde{x}^k)$.

Updating of the ϵ parameter in ASA-BCP
A feature that characterizes ASA-BCP is the use of the active-set estimate: once we have an $\epsilon$ satisfying Assumption 3.1 in [3], a decrease of the objective function is guaranteed by fixing the estimated active variables at the values of the bounds (as shown in Proposition 3.5 in [3]).
In general, such an $\epsilon$ cannot be computed a priori. This is why, in our implementation, we use the following updating rule. Starting from the value $\epsilon := \min\{10^{-6}, \|x^0 - [x^0 - g(x^0)]^\sharp\|_\infty^{-3}\}$, at every iteration $k$ we compute $A_l^k$, $A_u^k$ and $N^k$, and obtain the point $\tilde{x}^k$ by fixing the estimated active variables at the bounds. If $\tilde{x}^k$ yields the objective decrease guaranteed by Proposition 3.5 in [3], then we accept $\tilde{x}^k$ and do not change the value of $\epsilon$. Otherwise, we do not accept $\tilde{x}^k$, we reduce $\epsilon$ and estimate the active variables again, repeating this procedure until the above relation is satisfied.
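The updating rule can be sketched as a shrink-and-retry loop. Note that `estimate_active` below is a deliberately simplified stand-in (a variable is flagged active when it is within $\epsilon$ of a bound and the gradient pushes it toward that bound); it is not the actual estimate of [3], which the reader should consult for the precise formulas:

```python
import numpy as np

def estimate_active(x, g, l, u, eps):
    """Toy stand-in for the active-set estimate of [3] (illustrative only)."""
    act_l = (x - l <= eps) & (g > 0)      # near lower bound, pushed down
    act_u = (u - x <= eps) & (g < 0)      # near upper bound, pushed up
    return act_l, act_u

def fix_active(x, l, u, act_l, act_u):
    """Set the estimated active variables to the corresponding bounds."""
    xt = x.copy()
    xt[act_l] = l[act_l]
    xt[act_u] = u[act_u]
    return xt

def epsilon_update(f, grad, x, l, u, eps, shrink=0.5, max_tries=30):
    """Shrink eps until fixing the estimated active variables at the bounds
    does not increase the objective (cf. Proposition 3.5 in [3])."""
    fx = f(x)
    for _ in range(max_tries):
        act_l, act_u = estimate_active(x, grad(x), l, u, eps)
        xt = fix_active(x, l, u, act_l, act_u)
        if f(xt) <= fx:       # decrease achieved: accept and keep eps
            return xt, eps
        eps *= shrink         # otherwise reduce eps and re-estimate
    return x, eps
```

For instance, with $f(x) = \frac{1}{2}\|x - c\|^2$, $c = (-1, 0.5)^T$, over $[0,1]^2$, the first coordinate is correctly fixed at its lower bound without shrinking $\epsilon$.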

Calculation of the search direction in ASA-BCP
At every iteration $k$, in order to compute the search direction with respect to $\tilde{N}^k$, ASA-BCP approximately solves the Newton equation $H_{\tilde{N}^k \tilde{N}^k}(x^k)\, d^k_{\tilde{N}^k} + g_{\tilde{N}^k}(x^k) = 0$ by using the conjugate gradient strategy considered in [4]. In particular, letting $m = |\tilde{N}^k|$, the scheme produces a finite sequence of conjugate directions $p^0, p^1, \ldots$ In our implementation of ASA-BCP, we employed a stopping criterion for the conjugate gradient method based on the decrease of the quadratic model $q(d) := \frac{1}{2}\, d^T H_{\tilde{N}^k \tilde{N}^k}(x^k)\, d + g_{\tilde{N}^k}(x^k)^T d$, with forcing term $\eta_k = \min\{1,\, 0.1 + e^{-0.001 k}\}$.
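A plain conjugate gradient solve of the reduced Newton equation can be sketched as follows. This is a textbook CG sketch for $Hd = -g$, not the exact truncated rule of [4] (whose model-decrease test and negative-curvature handling are omitted here); the forcing-term schedule $\eta_k$ follows the formula reported above:

```python
import math
import numpy as np

def eta(k):
    """Forcing-term schedule reported in the article: min{1, 0.1 + e^(-0.001 k)}."""
    return min(1.0, 0.1 + math.exp(-0.001 * k))

def cg_newton_direction(H, g, tol=1e-8, max_iter=None):
    """Solve H d = -g (restricted to the estimated non-active variables)
    by the conjugate gradient method. Plain CG sketch; see [4] for the
    actual truncated scheme used in ASA-BCP."""
    n = g.size
    max_iter = max_iter or n
    d = np.zeros(n)
    r = -g.astype(float)          # residual of H d + g = 0 at d = 0
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d
```

On a symmetric positive definite reduced Hessian of dimension $m$, CG terminates in at most $m$ iterations in exact arithmetic; truncating it earlier via $\eta_k$ trades direction accuracy for cost.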

Algorithm parameters used in the comparisons
In the following, we report the parameter values of the algorithms considered in [3] for the comparisons.
For all methods, the stopping condition is $\|x - [x - g(x)]^\sharp\|_\infty < 10^{-5}$. In running ASA-BCP, according to the algorithmic scheme reported in [3], we set $Z := 20$ and $M := 99$ (so that, in the non-monotone line search procedure, the last 100 objective function values are included in the computation of the reference value).
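The reference value of such a non-monotone line search is the maximum over a sliding window of recent objective values. The sketch below shows only this windowed-maximum mechanism with $M = 99$ (so 100 values enter the computation); the role of the parameter $Z$ in [3] is not reproduced here:

```python
from collections import deque

class NonmonotoneReference:
    """Reference value for a non-monotone line search: the maximum of the
    last (M + 1) objective function values (M = 99 in ASA-BCP)."""

    def __init__(self, M=99):
        # deque with maxlen automatically discards the oldest value
        self.history = deque(maxlen=M + 1)

    def push(self, f_val):
        """Record the objective value of the latest accepted iterate."""
        self.history.append(f_val)

    def reference(self):
        """Value the line search must improve upon (instead of f(x^k))."""
        return max(self.history)
```

Accepting any step whose objective value lies below this reference, rather than below the current $f(x^k)$, is what allows temporary increases of the objective along the iterations.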
In running the other methods, default values are used for all parameters. More specifically: in NMBC [5], a non-monotone strategy is employed (similar to the one described for ASA-BCP), where the parameters $Z$ and $M$ are equal to 20 and 100, respectively. Moreover, the parameter $\epsilon$ used in the active-set estimate is equal to $10^{-4}$.
In ALGENCAN [8], the truncated Newton method is used as the inner solver, the scaling feature is disabled, and the parameter $\eta$ is equal to 0.1 (a face of the feasible set is abandoned by the algorithm when the norm of the internal components of the continuous projected gradient is smaller than $\eta$ times the norm of the continuous projected gradient). In LANCELOT B [9], a band preconditioner is employed for the conjugate gradient method, with a semi-bandwidth equal to 5. Moreover, a non-monotone strategy is used with a history length equal to 1.

Comparison between ASA-BCP and NMBC
In Table 1, we report the numerical results obtained by ASA-BCP and NMBC on 140 problems from the CUTEst collection.
In Table 2, we report the 43 problems that have been solved in more than 1 s by at least one algorithm and for which both algorithms have found the same stationary point (with a tolerance of $10^{-3}$). To be more specific, let $f_A$ and $f_N$ be the objective function values at the stationary points found, respectively, by ASA-BCP and NMBC when applied to a particular problem, and let $f_{\min} = \min\{f_A, f_N\}$. We consider that ASA-BCP and NMBC have found the same stationary point if $\frac{f_A - f_{\min}}{\max\{1, f_{\min}\}} < 10^{-3}$ and $\frac{f_N - f_{\min}}{\max\{1, f_{\min}\}} < 10^{-3}$. In Table 3, we report the problems that have been solved in more than 1 s by ASA-BCP or NMBC and such that the two algorithms have found different stationary points (with a tolerance of $10^{-3}$).
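The relative criterion above is straightforward to implement; the same function covers both the two-solver comparison here and the three-solver comparison reported later, since it only needs the list of objective values found by the codes:

```python
def same_stationary_point(f_values, tol=1e-3):
    """Decide whether all codes found the same stationary point, using the
    article's relative criterion: (f_i - f_min) / max(1, f_min) < tol
    for every objective value f_i."""
    f_min = min(f_values)
    return all((f - f_min) / max(1.0, f_min) < tol for f in f_values)

# Two solvers (ASA-BCP and NMBC) agreeing up to the tolerance:
print(same_stationary_point([1.0, 1.0 + 1e-5]))   # prints True
# Three solvers where one found a different point:
print(same_stationary_point([1.0, 1.0, 2.0]))     # prints False
```

Problems are then routed to the "same stationary point" table or the "different stationary points" table according to this test.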
All the tables include the following data: the name (Problem) and the dimension (n) of the problems considered, the objective function value at the stationary point found (obj), the number of function evaluations (f-eval), the total number of conjugate gradient iterations (cg-it) and the computational time in seconds (time).

Comparison among ASA-BCP, ALGENCAN and LANCELOT B
In Table 4, we report the numerical results obtained by ASA-BCP, ALGENCAN and LANCELOT B on 140 problems from the CUTEst collection.
In Table 5, we report the 62 problems that have been solved in more than 1 s by at least one algorithm and for which all the algorithms have found the same stationary point (with a tolerance of $10^{-3}$). To be more specific, let $f_{AS}$, $f_{AL}$ and $f_{LB}$ be the objective function values at the stationary points found, respectively, by ASA-BCP, ALGENCAN and LANCELOT B when applied to a particular problem, and let $f_{\min} = \min\{f_{AS}, f_{AL}, f_{LB}\}$. We consider that ASA-BCP, ALGENCAN and LANCELOT B find the same stationary point if $\frac{f_{AS} - f_{\min}}{\max\{1, f_{\min}\}} < 10^{-3}$, $\frac{f_{AL} - f_{\min}}{\max\{1, f_{\min}\}} < 10^{-3}$ and $\frac{f_{LB} - f_{\min}}{\max\{1, f_{\min}\}} < 10^{-3}$. In Table 6, we report the problems that have been solved in more than 1 s by at least one algorithm among ASA-BCP, ALGENCAN and LANCELOT B and such that the algorithms have found different stationary points (with a tolerance of $10^{-3}$).
All the tables include the following data: the name (Problem) and the dimension (n) of the problems considered, the objective function value at the stationary point found (obj), the number of function evaluations (f-eval), the total number of conjugate gradient iterations (cg-it) and the computational time in seconds (time).

Transparency document. Supporting information
The transparency document associated with this article can be found in the online version at https://doi.org/10.1016/j.dib.2018.11.061.