Non-parametric Approximate Dynamic Programming via the Kernel Method

This paper presents a novel, non-parametric approximate dynamic programming (ADP) algorithm that enjoys dimension-independent approximation and sample complexity guarantees. We obtain this algorithm by “kernelizing” a recent mathematical program for ADP (the “smoothed approximate linear program”). Loosely, our guarantees show that we can exchange the importance of choosing a good approximation architecture a priori (as required by existing approaches) with sampling effort. We also present a simple active set algorithm for solving the resulting quadratic program, and prove the correctness of this method. Via a computational study on a controlled queueing network, we show that our approach is capable of outperforming parametric linear programming approaches to ADP, as well as non-trivial, tailored heuristics for the same network, even when employing generic, polynomial kernels.


Introduction
Problems of dynamic optimization in the face of uncertainty are frequently posed as Markov decision processes (MDPs). The central computational problem is then reduced to the computation of an optimal 'value' or 'cost-to-go' function that encodes the value garnered under an optimal policy starting from any given MDP state. MDPs for many problems of practical interest frequently have intractably large state spaces precluding exact computation of the cost-to-go function. Approximate dynamic programming (ADP) is an umbrella term for algorithms designed to produce good approximations to this function. Such approximations then imply a natural 'greedy' control policy.
ADP algorithms are often formulated in parametric settings. Here, the user specifies an 'approximation architecture' (i.e., a set of basis functions) and the algorithm then produces an approximation in the span of this basis. The strongest theoretical results available for such algorithms typically share the following two features:*

* The third author was partially supported by NSF Grant CMMI-1235023. A preliminary version of this work appeared as Bhat et al. (2012).
• The quality of the approximation produced is comparable with the best possible within the basis (i.e., the parameterization or 'approximation architecture') specified.
• The sample complexity or computational effort required for these algorithms scales, typically polynomially, with the dimension of the basis.
These results highlight the importance of selecting a 'good' approximation architecture, and remain somewhat dissatisfying in that additional sampling or computational effort cannot remedy a bad approximation architecture.
In contrast, an ideal non-parametric approach would, in principle, free the user from carefully specifying a suitable low-dimensional approximation architecture. Instead, the user would have the liberty of selecting a very rich architecture (whose specification is potentially implicit, such as via a kernel). The quality of the approximation produced by the algorithm would then improve gracefully with the extent of computational or sampling effort expended, ultimately becoming exact. While they may fall short of the ideal notion of a non-parametric ADP algorithm, there are several existing proposals for non-parametric ADP available:

Kernelizing Policy Evaluation. These are approaches that are motivated by approximate policy iteration. The key computational step in approximate policy iteration methods is 'policy evaluation', whose aim is to find the cost-to-go function for a given policy. This step involves solving the projected Bellman equation, a linear stochastic fixed point equation. Due to the curse of dimensionality, solving this exactly is infeasible, and a portion of the ADP literature focuses on approximately solving the policy evaluation step. This usually involves approximating the cost-to-go function for a particular policy with a parametric approximation architecture. Non-parametric approaches of this flavor attempt to make this step non-parametric. Bethke et al. (2008) study the problem of minimizing Bellman errors from the point of view of kernel support vector regression and Gaussian processes. Engel et al. (2003) consider a generative model for the cost-to-go function which is based on Gaussian processes. Xu et al. (2007) introduce a non-parametric version of the LSTD(λ) method, a numerically stable approach to policy evaluation. Farahmand et al. (2009a) study regularized versions of some popular policy evaluation methods and perform the sample complexity analysis for one policy evaluation step.
It is difficult to provide convergence guarantees for approximate policy iteration schemes in parametric settings, and these difficulties remain in non-parametric variations. It is consequently difficult to characterize the computational effort or sample complexity required to produce a good approximation with such approaches. In practice, these theoretical shortcomings are often bypassed with heuristics such as stopping after a fixed number of iterations. In contrast, the approach we study in this paper will admit sample complexity and approximation guarantees, but at the expense of assuming access to a certain idealized sampling distribution over states.
Local averaging. Another approach to non-parametric ADP has been to use kernel-based local averaging to approximate the solution of an MDP with that of a simpler variation on a sampled state space (e.g., Ormoneit and Sen, 2002; Ormoneit and Glynn, 2002; Barreto et al., 2011).
Convergence rates for these local averaging methods are exponential in the dimension of the problem state space. As in our setting, Dietterich and Wang (2002) construct kernel-based cost-to-go function approximations. These are subsequently 'plugged in' to various ad hoc optimization-based ADP formulations. This 'plug in' approach is inappropriate since it effectively translates to a non-parametric regression approach with no regularization whatsoever. One cannot expect generalization guarantees for the resulting function class, and indeed the authors are unable to provide guarantees. Practically, the absence of regularization is also likely to hurt performance, as is borne out in our experiments. Similarly, Ernst et al. (2005) replace the local averaging procedure used for regression by Ormoneit and Sen (2002) with non-parametric regression procedures such as tree-based learning methods. This is done, again, without any theoretical justification. Pazis and Parr (2011) discuss a non-parametric method that explicitly restricts the smoothness of the value function. However, sample complexity results for this method are not provided, and it appears unsuitable for high-dimensional problems (such as, for instance, the queueing problem we consider in our experiments).
Feature selection via ℓ1-penalty (parametric). Closely related to our work, Kolter and Ng (2009) and Petrik et al. (2010) consider modifying the approximate linear program with an ℓ1-regularization term to encourage sparse approximations in the span of a large, but necessarily tractable, set of features. In contrast to this line of work, our approach will allow for approximations in a potentially full-dimensional (i.e., of dimension equal to the cardinality of the state space) approximation architecture, with a constraint on an appropriate ℓ2-norm of the weight vector to provide regularization.
In addition, there are still other non-parametric proposals that do not quite fall into the three categories above. For example, Farahmand et al. (2009b) study what is effectively a kernelized version of a specific approximate value iteration algorithm. Again, it is not possible to provide performance guarantees of the type we present in this paper, since doing so in the parametric case for approximate value iteration is already hard; see De Farias and Van Roy (2000).
This paper presents what we believe is a practical, non-parametric ADP algorithm that enjoys approximation and sample complexity guarantees that serve as natural counterparts to those available for LP approaches to ADP. In greater detail, we make the following contributions:

• A new mathematical programming formulation. We rigorously develop a kernel-based variation of the 'smoothed' approximate LP (SALP) approach to ADP proposed by Desai et al. (2012). The resulting mathematical program, which we dub the regularized smoothed approximate LP (RSALP), is distinct from simply substituting the local averaging approximation above in the SALP formulation. We develop a companion active set method that is capable of solving this mathematical program rapidly and with limited memory requirements.
• Theoretical guarantees.¹ Our algorithm can be interpreted as solving an approximate linear program in a (potentially infinite dimensional) Hilbert space. We provide a probabilistic upper bound on the approximation error of the algorithm relative to the best possible approximation one may compute in this space subject to a regularization constraint. We show that the sample complexity of our algorithm grows polynomially as a function of a regularization parameter. As this regularization parameter is allowed to grow, so does the set of permissible approximations, eventually permitting the best approximation within the architecture, which, for some of the architectures we consider, is exact.

The sampling requirements for our method are independent of the dimension of the approximation architecture. Instead, they grow with the allowed complexity of the approximation. This result can be seen as the 'right' generalization of the prior parametric approximate LP approaches of de Farias and Van Roy (2003, 2004) and Desai et al. (2012), where, in contrast, sample complexity grows with the dimension of the approximating architecture.

• A computational study. To study the efficacy of our approach, we consider an MDP arising from a challenging queueing network scheduling problem. We demonstrate that our method yields significant improvements over tailored heuristics and parametric ADP methods, all while using a generic high-dimensional approximation architecture. In particular, these results suggest the possibility of solving a challenging high-dimensional MDP using an entirely generic approach.

Relative to so-called 'model free' approaches to approximate DP based on simulation (such as TD-learning), we note that the method we propose in this paper relies on knowledge of the transition probabilities of the MDP. Further, we require the number of possible next states for any state-action pair to be tractable. One can rely on sampling when this is not the case, but a rigorous analysis of that approach is beyond the scope of this paper; see Haskell et al. (2016) for a general analysis of so-called 'empirical' Bellman operators. These issues are germane to linear programming-based ADP methods broadly.

The organization of the paper is as follows: In Section 2, we formulate an infinite dimensional LP for our problem and present an effective approximate solution approach. Section 3 provides theoretical guarantees for the quality of the approximations computed via our non-parametric algorithm; Theorem 1 in that section provides our main guarantee. In Section 4, we provide an active set method that can be used to efficiently solve the quadratic optimization problem central to our approach while respecting practical memory constraints. We also establish the correctness of our active set approach. Section 5 describes a numerical study for a criss-cross queueing system, benchmarking our approach against approximate linear programming approaches and tailor-made heuristics. Section 6 concludes.

¹ These guarantees come under the assumption of being able to sample from a certain idealized distribution. This is a common requirement in the analysis of ADP algorithms that enjoy approximation guarantees for general MDPs (de Farias and Van Roy, 2004; Van Roy, 2006; Desai et al., 2012). It is possible to enhance these guarantees by showing how they degrade when the sampling distribution one has access to differs from this idealized distribution; we do not pursue this extension here.

Preliminaries
Consider a discrete time Markov decision process with finite state space S and finite action space A. We denote by x_t and a_t, respectively, the state and action at time t. For notational simplicity, and without loss of generality, we assume that all actions are permissible at any given state. We assume time-homogeneous Markovian dynamics: conditioned on being at state x and taking action a, the system incurs a cost g(x, a) and transitions to state x′ with probability p(x, x′, a), independent of the past. A policy is a map µ : S → A, so that

J_µ(x) ≜ E[ Σ_{t=0}^∞ α^t g(x_t, µ(x_t)) | x_0 = x ]

represents the expected (discounted, infinite horizon) cost-to-go under policy µ starting at state x, with the discount factor α ∈ (0, 1). Letting Π denote the set of all policies, our goal is to find an optimal policy µ* such that µ* ∈ argmin_{µ∈Π} J_µ(x) for all x ∈ S (it is well known that such a policy exists). We denote the optimal cost-to-go function by J* ≜ J_{µ*}. An optimal policy µ* can be recovered as a 'greedy' policy with respect to J*,

µ*(x) ∈ argmin_{a∈A} { g(x, a) + α E_{x,a}[J*(X′)] },

where we define the expectation E_{x,a}[f(X′)] ≜ Σ_{x′∈S} p(x, x′, a) f(x′). In principle, J* may be computed as the unique optimal solution (for any strictly positive ν) of the exact linear program

maximize_J  ν′J   subject to  J ≤ TJ.   (1)

Since this is intractable for large state spaces, LP-based ADP restricts attention to approximations of the form J(x) = z′Φ(x) for a prescribed basis Φ. In particular, Desai et al. (2012) propose solving the following optimization problem:

maximize_{z,s}  ν′Φz − κ π′s
subject to  Φz ≤ TΦz + s,  s ≥ 0.   (2)

Here κ > 0 is a penalty parameter and π ∈ R^S_+ is a strictly positive distribution on S. In considering the above program, notice that if one insisted that the slack variables s were precisely 0, the program (2) would be identical to (1), with the additional restriction to value function approximations of the form J(x) = z′Φ(x). This case is known as the approximate linear program (ALP), and was first proposed by Schweitzer and Seidmann (1985). de Farias and Van Roy (2003) provided a pioneering analysis that, stated loosely, showed that

‖J* − Φz*‖_{1,ν} ≤ (2/(1−α)) min_z ‖J* − Φz‖_∞

for an optimal solution z* to the ALP. Desai et al. (2012) showed that these bounds could be improved upon by 'smoothing' the constraints of the ALP, i.e., permitting positive slacks. The resulting program (2) is called the smoothed approximate linear program (SALP).
The ALP constraints impose the restriction that TJ ≥ J, where T is the Bellman operator defined by

(TJ)(x) ≜ min_{a∈A} { g(x, a) + α E_{x,a}[J(X′)] }   (3)

for all x ∈ S and J : S → R. This constraint ensures that the optimal solution lower bounds the optimal value function at each state. But at the cost of providing a lower bound, the ALP might do a poor job of approximating J* due to the presence of constraints associated with certain states that are rarely visited. The SALP prevents this by allowing violations of the Bellman constraints, but weighting the violations so that they are strongly penalized at highly visited states. For a more detailed interpretation of the SALP, see Section 3 of Desai et al. (2012). In both the original ALP and the SALP, one can solve a 'sampled' version of the above program for problems with large state spaces.

Now, consider allowing Φ to map from S to a general (potentially infinite dimensional) Hilbert space H. We use bold letters to denote elements in the Hilbert space H; e.g., the weight vector is denoted by z ∈ H. We further suppress the dependence on Φ and denote the elements of H corresponding to their counterparts in S by bold letters. Hence, for example, x ≜ Φ(x), and we consider approximations of the form

J̃_{z,b}(x) ≜ ⟨z, x⟩ + b,   (4)

where b is a scalar offset corresponding to a constant basis function. The following generalization of (2), which we dub the regularized SALP (RSALP), then essentially suggests itself:

maximize_{z,s,b}  Σ_{x∈S} ν_x J̃_{z,b}(x) − κ Σ_{x∈S} π_x s_x − (Γ/2) ⟨z, z⟩
subject to  J̃_{z,b}(x) ≤ (T J̃_{z,b})(x) + s_x,  x ∈ S;  s ≥ 0.   (5)

The only new ingredient in the program above is the fact that we regularize the weight vector z using the parameter Γ > 0. Penalizing the objective of (5) according to the squared norm ‖z‖²_H ≜ ⟨z, z⟩ anticipates that we will eventually resort to sampling in solving this program. In a sampled setting, regularization is necessary to avoid over-fitting and, in particular, to construct an approximation that generalizes well to unsampled states.
This regularization, which plays a crucial role both in theory and practice, is easily missed if one directly 'plugs in' a local averaging approximation in place of ⟨z, Φ(x)⟩, as is the case in the earlier work of Dietterich and Wang (2002), or a more general non-parametric approximation as in the work of Ernst et al. (2005).
Since the RSALP of (5) can be interpreted as a regularized stochastic optimization problem, one may hope to solve it via its sample average approximation. To this end, define the likelihood ratio w_x ≜ ν_x/π_x, and let Ŝ ⊂ S be a set of N states sampled independently according to the distribution π. The sample average approximation of (5) is then

maximize_{z,s,b}  (1/N) Σ_{x∈Ŝ} w_x J̃_{z,b}(x) − (κ/N) Σ_{x∈Ŝ} s_x − (Γ/2) ⟨z, z⟩
subject to  J̃_{z,b}(x) ≤ (T J̃_{z,b})(x) + s_x,  x ∈ Ŝ;  s ≥ 0.   (6)

We call this program the sampled RSALP. Even if |Ŝ| were small, it is still not clear that this program can be solved effectively: if H is infinite dimensional, the program is posed over an infinite dimensional variable. We will, in fact, solve the dual to this problem.

Dual Formulation
We begin by establishing some notation. Let N_{x,a} ≜ {x} ∪ {x′ ∈ S : p(x, x′, a) > 0} denote the set of states that can be reached starting at a state x ∈ S given an action a ∈ A. For any states x, x′ ∈ S and action a ∈ A, define q_{x,x′,a} ≜ 1{x = x′} − α p(x, x′, a). Now, define the symmetric positive semi-definite matrix Q ∈ R^{(Ŝ×A)×(Ŝ×A)} according to

Q_{(x,a),(x′,a′)} ≜ Σ_{y ∈ N_{x,a}} Σ_{y′ ∈ N_{x′,a′}} q_{x,y,a} q_{x′,y′,a′} ⟨y, y′⟩,   (7)

along with a vector R ∈ R^{Ŝ×A}, given by the analogous expression (8), and a scalar S. Notice that Q, R and S depend only on inner products in H (and other easily computable quantities). The dual to (6) is then a convex quadratic program over λ ∈ R^{Ŝ×A}_+ with data (Q, R, S), which we label (9). Assuming that Q, R and S can be easily computed, this quadratic program is tractable: its size is polynomial in the number of sampled states. We may recover a primal solution (i.e., the weight vector z*) from an optimal dual solution:

Proposition 1. Programs (6) and (9) have equal (finite) optimal values. The optimal solution to (9) is attained at some λ*. The optimal solution to (6) is attained at some (z*, s*, b*), where z* is given explicitly, via (10), as a linear combination (with coefficients determined by λ*) of the feature vectors of the sampled states and their one-step neighbors.

The proof of Proposition 1 is presented in Appendix A. Having solved this program, we may, using Proposition 1, recover our approximate cost-to-go function J̃(x) = ⟨z*, x⟩ + b*, as in (11). A policy greedy with respect to J̃ is not affected by constant translations; hence, in (11), the value of b* can be set to zero arbitrarily. Again note that, given λ*, evaluation of J̃ involves only inner products in H.
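Concretely, the matrix Q is assembled entirely from kernel evaluations and one-step transition data. The sketch below implements the formula for Q above, assuming callables p(x, y, a) (transition probabilities) and K(x, y) (the kernel, i.e., the inner product ⟨Φ(x), Φ(y)⟩); these interfaces are illustrative, not the paper's.

```python
import numpy as np

def neighbors(x, states, a, p):
    # N_{x,a} = {x} ∪ {y : p(x, y, a) > 0}
    return [y for y in states if y == x or p(x, y, a) > 0]

def build_Q(states, actions, p, K, alpha):
    """Dual QP matrix: Q[(x,a),(x',a')] = sum_{y,y'} q_{x,y,a} q_{x',y',a'} K(y, y'),
    where q_{x,y,a} = 1{x = y} - alpha * p(x, y, a)."""
    idx = [(x, a) for x in states for a in actions]
    Q = np.zeros((len(idx), len(idx)))
    for i, (x, a) in enumerate(idx):
        for j, (xp, ap) in enumerate(idx):
            total = 0.0
            for y in neighbors(x, states, a, p):
                qy = (1.0 if x == y else 0.0) - alpha * p(x, y, a)
                for yp in neighbors(xp, states, ap, p):
                    qyp = (1.0 if xp == yp else 0.0) - alpha * p(xp, yp, ap)
                    total += qy * qyp * K(y, yp)
            Q[i, j] = total
    return Q
```

Since each entry has the form v′Gv for a kernel Gram matrix G, the resulting Q is symmetric positive semi-definite, as the text asserts.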

Kernels
As pointed out earlier, the sampled RSALP is potentially difficult to work with. Proposition 1 establishes that solving this program (via its dual) is a computation that scales polynomially in N so that it can be solved efficiently provided inner products in H can be evaluated cheaply.
Alternatively, we may have arrived at a similar conclusion by observing that in any optimal solution to (6), we must have that z* ∈ span{x : x ∈ Ŝ ∪ N(Ŝ)}, where N(Ŝ) denotes the set of states that can be reached from the sampled states of Ŝ in a single transition. Then, one can restrict the feasible region of (6) to this subspace. In either approach, we observe that one need not necessarily have explicitly characterized the feature map Φ(·); knowing ⟨Φ(x), Φ(y)⟩ for all x, y ∈ S would suffice, and so the algorithm designer can focus on simply specifying these inner products. We will see that when S ⊂ R^n these inner products are effectively cheap to compute for practically interesting choices of Φ and H. This leads us to what is popularly referred to as the 'kernel trick', which we discuss next without the assumption that S is necessarily a finite set.
A kernel is a map K : S × S → R; we will call such a kernel positive definite if, for any finite collection of states, the corresponding Gram matrix [K(x_i, x_j)] is symmetric and positive semi-definite. Given such a kernel, we are assured of the existence of a Hilbert space H and a map Φ : S → H such that ⟨Φ(x), Φ(y)⟩ = K(x, y). The kernel trick then allows us to implicitly work with this space H by replacing inner products of the form (7), (8), and (11) by K(x, y). Of course, the quality of the approximation one may produce depends on H and consequently on the kernel employed.
One case of special interest is where S is finite and the Gram matrix corresponding to this set is positive definite. A kernel satisfying this condition is known as full-dimensional or full-rank. In this case, we may take H to be the |S|-dimensional Euclidean space. Working with such a kernel would imply working with an approximation architecture where any function J : S → R can be exactly expressed in the form J(x) = Φ(x), z , for some weight vector z ∈ H.
There are a number of kernels one can specify on the state space; we give two examples here for the case where S ⊂ R^n. The polynomial kernel is defined according to

K(x, y) ≜ (1 + x′y)^d.

A corresponding Hilbert space H is given by the span of all monomials of degree up to d on R^n.
A Gaussian radial basis kernel is defined according to

K(x, y) ≜ exp(−‖x − y‖²₂ / (2σ²)),

for a bandwidth parameter σ > 0. The Gaussian kernel is known to be full-dimensional (see, e.g., Theorem 2.18, Scholkopf and Smola, 2001), so that employing such a kernel in our setting would correspond to working with an infinite dimensional approximation architecture. A thorough exposition on this topic, along with many more examples, can be found in the text of Scholkopf and Smola (2001).
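Both kernels are a few lines of code. The sketch below uses the standard forms K(x, y) = (1 + x′y)^d and K(x, y) = exp(−‖x − y‖²/(2σ²)) (the parameterization via a bandwidth σ is our choice), together with a helper that assembles the Gram matrix whose positive semi-definiteness characterizes a valid kernel.

```python
import numpy as np

def poly_kernel(x, y, d=2):
    # Polynomial kernel: K(x, y) = (1 + x'y)^d
    return (1.0 + np.dot(x, y)) ** d

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian radial basis kernel: K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    diff = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def gram(K, xs):
    # Gram matrix [K(x_i, x_j)]; symmetric PSD for a positive definite kernel
    return np.array([[K(a, b) for b in xs] for a in xs])
```

For distinct points, the Gaussian Gram matrix is in fact strictly positive definite, which is the 'full-dimensional' property discussed above.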

Overall Procedure
Our development thus far suggests the following non-parametric algorithm, which requires as input a kernel K, a state sampling distribution π, and a distribution ν that allows us to weight approximation quality across states. Recall that we denote by w the likelihood ratio between ν and π. Lastly, recall that N_{x,a} denotes the set of states reachable with positive probability in one step from state x, having taken action a. Our procedure requires that we set the parameters κ (which, we recall, penalizes the one-sided Bellman error across states) and Γ (which regularizes the set of approximations allowed). Finally, we must specify the number of samples N.
1. Sample N states from a distribution π.
2. Solve (9), with Q and R computed according to (7) and (8). The value of S may be set to zero.
3. Let λ* be the dual optimal solution. Define the approximate value function J̃ via λ*, as in (11), with b* set to zero.

In the next section, we will develop theory to characterize the sample complexity of our procedure and the approximation quality it provides. This theory will highlight the roles of the kernel K and the sampling distributions used.
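Once J̃ is in hand, the system is controlled with the policy that is greedy with respect to J̃ (cf. Section 2). A minimal sketch of one-step greedy action selection, assuming callables g (stage cost), p (transition probabilities), and J (the value approximation); these interfaces are illustrative, not the paper's.

```python
def greedy_action(x, actions, g, p, states, J, alpha=0.9):
    # One-step lookahead: argmin_a  g(x, a) + alpha * sum_y p(x, y, a) * J(y)
    def q_value(a):
        return g(x, a) + alpha * sum(p(x, y, a) * J(y) for y in states)
    return min(actions, key=q_value)
```

Consistent with the remark following Proposition 1, adding a constant offset to J leaves the greedy choice unchanged.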

Overview
Recall that we are employing an approximation J̃_{z,b} of the form (4), parameterized by the weight vector z and the offset parameter b. For the purpose of establishing theoretical guarantees, in this section we will look at the following variation of the RSALP of (5):

maximize_{z,s,b}  Σ_{x∈S} ν_x J̃_{z,b}(x) − κ Σ_{x∈S} π_x s_x
subject to  J̃_{z,b}(x) ≤ (T J̃_{z,b})(x) + s_x,  x ∈ S;  s ≥ 0;
            ‖z‖_H ≤ C,  |b| ≤ B.   (15)

Here, rather than regularizing by penalizing according to the weight vector z in the objective as in (5), we regularize by restricting the size of the weight vector as a constraint. This regularization constraint is parameterized by the scalar C ≥ 0. It is easy to see that (15) is equivalent to the original problem (5), in the following sense: for any Γ > 0, there exists a C ≥ 0 so that for all B sufficiently large, (5) and (15) have the same optimal solutions and value. Let C(Γ) be any such value of C, corresponding to a particular Γ.

Now let

C(C, B) ≜ {(z, b) : ‖z‖_H ≤ C, |b| ≤ B}.

The best approximation to J* within this set has ℓ∞-approximation error

inf_{(z,b) ∈ C(C,B)} ‖J* − J̃_{z,b}‖_∞.   (16)

Now observe that if span{x : x ∈ S} ⊂ H has dimension |S| (i.e., if the kernel is full-dimensional), there exists a z̄ ∈ H satisfying J̃_{z̄,0} = J*. Consequently, for C(Γ) sufficiently large and any value of B ≥ 0, we have that z̄ ∈ C(C(Γ), B), and the approximation error in (16) reduces to zero. More generally, as C(Γ) and B grow large, we expect the approximation error in (16) to monotonically decrease; the precise rate of this decrease will depend, of course, on the feature map Φ, which in turn will depend on the kernel we employ.
Loosely speaking, in this section we will demonstrate that:
1. For any given C(Γ) and B, exact solution of the RSALP (15) will produce an approximation with ℓ∞-error comparable to the optimal ℓ∞-error possible for approximations J̃_{z,b} with (z, b) ∈ C(C(Γ), B).
2. Under a certain set of idealized assumptions, solving a sampled variation of the RSALP (15) with a number of samples N(C(Γ), B) that scales gracefully with C(Γ), B, and other natural parameters produces a near optimal solution to the RSALP with high probability.
These sample complexity bounds will not depend on the dimension of the approximation architecture H.
Since C(Γ) and B are parameters of the algorithm designer's choosing, these results will effectively show that with the ability to use a larger number of samples, the designer may choose larger values of C(Γ) and B and consequently produce approximations of improving quality. Under mild assumptions on the kernel, the approximation error can be made arbitrarily small in this fashion.

Preliminaries and an Idealized Program
Before proceeding with our analysis, we introduce some preliminary notation and define an idealized sampling distribution. For a vector x ∈ R^n and a 'weight' vector v ∈ R^n_+, we denote by ‖x‖_{∞,v} ≜ max_i v_i |x_i| the corresponding weighted max-norm. We next define an idealized sampling distribution. Let ν be an arbitrary distribution over S, and denote by µ* and P_{µ*} an optimal policy and the transition probability matrix corresponding to this policy, respectively. We define a distribution π_{µ*,ν} over S according to

π′_{µ*,ν} ≜ (1 − α) ν′ (I − α P_{µ*})^{−1} = (1 − α) Σ_{t=0}^∞ α^t ν′ P^t_{µ*}.

This idealized distribution will play the role of the sampling distribution π in the sequel.
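For a small MDP in which P_{µ*} is available explicitly, this distribution can be computed in closed form; the sketch below assumes the standard discounted state-occupancy form π′ = (1 − α) ν′ (I − α P_{µ*})^{−1} used in LP-based ADP.

```python
import numpy as np

def idealized_distribution(nu, P, alpha):
    """pi' = (1 - alpha) * nu' * (I - alpha * P)^{-1}.

    nu: initial state distribution, P: transition matrix of the (optimal)
    policy, alpha: discount factor. Solving the transposed linear system
    avoids forming the matrix inverse explicitly.
    """
    n = len(nu)
    return np.linalg.solve((np.eye(n) - alpha * np.asarray(P)).T,
                           (1.0 - alpha) * np.asarray(nu))
```

The result is a probability distribution: its entries are nonnegative and sum to one.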
As before, let Ŝ be a set of N states drawn independently at random from S; here we pick the specific sampling distribution π = π_{µ*,ν}. Given the definition of J̃_{z,b} in (4), the 'idealized' sampled program we consider is:

maximize_{z,s,b}  Σ_{x∈S} ν_x J̃_{z,b}(x) − (κ/N) Σ_{x∈Ŝ} s_x
subject to  J̃_{z,b}(x) ≤ (T J̃_{z,b})(x) + s_x,  x ∈ Ŝ;  s ≥ 0;
            ‖z‖_H ≤ C,  |b| ≤ B.   (18)

Here, T is the Bellman operator defined in (3). This program is a sampled variant of (15) and is closely related to the sampled RSALP (6) introduced in the last section. Before proceeding with an analysis of the quality of approximation afforded by solving this program, we discuss its connection with the sampled RSALP (6) of the previous section:
1. The distributions ν and π: We allow for an arbitrary distribution ν, but given this distribution require that π = π_{µ*,ν}. In particular, the distribution ν might be chosen as the empirical distribution corresponding to N independent draws from S under some measure; one would then draw N independent samples under π = π_{µ*,ν} to construct the second term in the objective. In a sense, this is the only 'serious' idealized assumption we make here. Given the broad nature of the class of problems considered, it is hard to expect meaningful results without an assumption of this sort, and indeed much of the antecedent literature considering parametric LP-based approaches makes such an assumption. When the sampling distribution π is a close approximation to π_{µ*,ν} (say, the likelihood ratio between the two distributions is bounded), it is possible to provide natural extensions of our results that account for how close the sampling distribution used is to the idealized sampling distribution.
2. Regularization: We regularize the weight vector z with an explicit constraint on ‖z‖_H; this permits a transparent analysis and is equivalent to placing the 'soft' regularization term (Γ/2)‖z‖²_H in the objective. In particular, the notation C(Γ) makes this equivalence explicit. In addition, we place a separate upper bound B on the offset term. This constraint is not restrictive.

The Approximation Guarantee
Let (ẑ, b̂) be an optimal solution to the idealized sampled program (18), and let K ≜ max_{x∈S} ‖x‖_H. Further, let Ξ(C, B, K, δ) denote the sampling-error term appearing in the guarantee below. Notice that Ξ(C, B, K, δ)² scales as the square of C, K, and B, and further is O(ln 1/δ). The following result will constitute our main approximation guarantee:

Theorem 1. If (ẑ, b̂) is an optimal solution to (18), then with probability at least 1 − δ − δ⁴, the error ‖J* − J̃_{ẑ,b̂}‖_{1,ν} is at most (3+α)/(1−α) times the optimal ℓ∞-error inf_{‖z‖_H ≤ C, |b| ≤ B} ‖J* − J̃_{z,b}‖_∞, plus a sampling-error term proportional to Ξ(C, B, K, δ)/√N.

The result is proved in Appendix B. There we prove a more refined result, replacing the max-norm in the bound above with an 'optimally weighted' variant, which can provide a substantially stronger guarantee (Desai et al., 2012). The remainder of this section is dedicated to parsing and interpreting this guarantee:
1. Optimal approximation error: Ignoring the N-dependent (sampling) error term in the guarantee, we see that the quality of approximation provided by (ẑ, b̂) is essentially within a constant multiple (at most (3+α)/(1−α)) of the optimal (in the sense of ℓ∞-error) approximation to J* possible using weight vectors z and offsets b permitted by the regularization constraints. This is a 'structural' error term that will persist even if one were permitted to draw an arbitrarily large number of samples.
3. A non-parametric interpretation: As we have argued earlier, in the event that span{x : x ∈ S} is full-dimensional, there exists a choice of C and B for which the optimal approximation error is, in fact, zero. A large number of kernels would guarantee such feature maps. More generally, as C and B grow large, inf_{‖z‖_H ≤ C, |b| ≤ B} ‖J* − J̃_{z,b}‖_∞ will decrease. In order to hold the (bound on the) sampling error constant, one would then need to increase N at a rate that is roughly Ω((CK + B)²). In summary, by increasing the number of samples in the sampled program, we can (by increasing C and B appropriately) hope to compute approximations of increasing quality, approaching an exact approximation.
While the rate at which the 'structural' error term goes to zero with B remains unclear, this is a substantially weaker requirement than choosing a low-dimensional approximation architecture that provides a small approximation error.

Numerical Scheme
This section outlines an efficient numerical scheme for solving the regularized SALP. In particular, we would like to solve the sampled dual problem (9), introduced in Section 2.3, in order to find an approximation to the optimal cost-to-go function. This approach requires solving a quadratic program with N A decision variables, whose data matrix Q requires a large number of kernel evaluations to assemble. These computationally expensive steps may prevent scaling up solution of the QP to a large number of samples. Also, an off-the-shelf QP solver will typically store the matrix Q in memory, so that the memory required to solve our QP with an off-the-shelf solver effectively scales like O(N²A²).
In this section, we develop an iterative scheme to solve the program (9) that, by exploiting problem structure, enjoys low computational complexity per iteration and attractive memory requirements. Our scheme is an active set method in the vein of the approaches used by Osuna et al. (1997) and Joachims (1999), among others, for solving large SVM problems. The broad steps of the scheme are as follows:
1. At the t-th iteration, a subset B ⊂ Ŝ × A of the decision variables of (9) (the 'active set') is chosen. Only variables in this set may be changed in a given iteration; these changes must preserve feasibility of the new solution that results. Our algorithm will limit the size of the active set to two variables, i.e., |B| = 2. The methodology for choosing this active set is crucial and will be described in the sequel.
2. Given the subset B, we solve (9) for λ^(t), where all variables except those in B must remain unchanged; in other words, λ^(t)_{x,a} = λ^(t−1)_{x,a} for all (x, a) ∉ B. This entails the solution of a QP with only |B| decision variables. In our case, we will be able to solve this problem in closed form.
3. Finally, if the prior step does not result in a decrease in objective value we conclude that we are at an optimal solution; Proposition 2 establishes that this is, in fact, a correct termination criterion.
In the following section, we will establish an approach for selecting the active set at each iteration and show how Step 2 above can be solved while maintaining feasibility at each iteration.
We will establish that steps one and two together need no more than O(N A log N A) arithmetic operations and comparisons, and moreover that the memory requirement for our procedure scales like O(N A). Finally, we will establish that our termination criterion is correct: in particular, if no descent direction of cardinality at most two exists at a given feasible point, we must be at an optimal solution.

Subset Selection
The first step in the active set method is to choose the subset B ⊂ Ŝ × A of decision variables to optimize over. Given the convex objective in (9), if the prior iteration of the algorithm is at a sub-optimal point λ ≜ λ^(t−1), then there exists a direction d ∈ R^{Ŝ×A} such that λ + εd is feasible with a lower objective value for ε > 0 sufficiently small. To select a subset to optimize over, we look for such a descent direction d of low cardinality ‖d‖₀ ≤ q, i.e., a vector d that is zero on all but at most q components. If such a direction can be found, then we can use the set of non-zero indices of d as our set B.
This problem of finding a 'sparse' descent direction d can be posed as the mathematical program (20). Here, h(λ) ≜ Qλ + R is the gradient of the objective of (9) at a feasible point λ, so the objective h(λ)^⊤ d seeks a direction d of steepest descent. The first three constraints ensure that d is a feasible direction. The constraint ‖d‖_0 ≤ q ensures that the direction is of cardinality at most q. Finally, the constraint ‖d‖_∞ ≤ 1 ensures that the program is bounded, and may be viewed as normalizing the scale of the direction d.
The program (20) is, in general, a challenging mixed-integer program because of the cardinality constraint. Joachims (1999) discusses an algorithm for a similar problem of finding a low-cardinality descent direction in an SVM classification setting. That problem can be solved easily provided the cardinality q is even; however, no such solution seems to exist for our case.
However, in our case, when q = 2, there is a tractable way to solve (20). We will restrict attention to this special case, i.e., consider only descent directions of cardinality two. In Appendix D, we will establish that this is, in fact, sufficient: if the prior iterate λ is sub-optimal, then there exists a descent direction of cardinality two.
To begin, define the sets P_1 ≜ {(x, a) ∈ Ŝ × A : λ_{x,a} = 0}, the index pairs whose components cannot be decreased, and P_2 ≜ {x ∈ Ŝ : Σ_{a∈A} λ_{x,a} = κ/N}, the states whose components cannot be increased without first decreasing another component of the same state. Consider the following procedure:
1. Sort the set of indices Ŝ × A according to their corresponding component values in the gradient vector h(λ). Call this sorted list L_1.
2. Denote by (x_1, a_1) the largest element of L_1 such that (x_1, a_1) ∉ P_1, and denote by (x_2, a_2) the smallest element of L_1 such that x_2 ∉ P_2. Add the tuple (x_1, a_1, x_2, a_2) to the list L_2.
3. Consider all x ∈ P_2. For each such x, denote by (x, a_1) the largest element of L_1 such that (x, a_1) ∉ P_1, and denote by (x, a_2) the smallest element of L_1. Add each tuple (x, a_1, x, a_2) to L_2.
4. Choose the element (x*_1, a*_1, x*_2, a*_2) of L_2 that optimizes the objective (21). Set d_{x*_1,a*_1} = −1, d_{x*_2,a*_2} = 1, and all other components of d to zero.
This procedure finds a direction of maximum descent of cardinality two by examining candidate index pairs (x_1, a_1, x_2, a_2) for which h(λ)_{x_2,a_2} − h(λ)_{x_1,a_1} is minimal. Instead of considering all N²A² such pairs, the routine selectively checks only pairs that describe feasible directions with respect to the constraints of (20).
Step 2 considers all feasible pairs involving two distinct states, while Step 3 considers all pairs within the same state. It is thus easy to see that the output of this procedure is an optimal solution to (20), i.e., a direction of steepest descent of cardinality two. Further, if the minimal objective value (21) determined by this procedure is non-negative, then no descent direction of cardinality two exists, and the algorithm terminates.
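A brute-force sketch of this pair search may help fix ideas. It assumes (as suggested by the feasibility conditions in the proof of Proposition 2) that an index can be decreased only if λ_{x,a} > 0 and increased only if its state's total mass is below κ/N, unless the paired decrease frees mass within the same state. Indices are flattened into a single array for simplicity; the brute-force loop is O(n²), whereas the sorted routine in the text achieves O(NA log NA).

```python
import numpy as np

def steepest_pair(h, lam, state, cap, tol=1e-12):
    """Find (i, j): decrease lam[i], increase lam[j], maximizing h[i] - h[j].

    Feasibility sketch: lam[i] must be positive, and the state of j must have
    slack mass under `cap`, unless j shares i's state, in which case the
    decrease at i frees up mass for the increase at j.
    """
    n = len(h)
    mass = np.zeros(int(state.max()) + 1)
    np.add.at(mass, state, lam)           # total mass per state
    best, pair = -tol, None
    for i in range(n):
        if lam[i] <= tol:                 # cannot decrease a zero component
            continue
        for j in range(n):
            if j == i:
                continue
            if mass[state[j]] >= cap - tol and state[j] != state[i]:
                continue                  # state of j is at its mass bound
            if h[i] - h[j] > best:        # descent amount for d_i=-1, d_j=+1
                best, pair = h[i] - h[j], (i, j)
    return pair                           # None => no descent pair exists
```

Returning `None` corresponds to the termination criterion: no feasible descent direction of cardinality two remains.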
In terms of computational complexity, this subset selection step requires us to first compute the gradient of the objective function, h(λ) ≜ Qλ + R. If the gradient is known at the first iteration, it can be maintained across subsequent iterations in O(NA) operations, since each iteration changes only two components of λ and thus requires only two columns of Q. The dominant per-iteration cost is then the sort in Step 1, which requires O(NA log NA) comparisons.

QP Sub-problem
Given a prior iterate λ^(t-1) and a subset B ≜ {(x_1, a_1), (x_2, a_2)} of decision variable components of cardinality two, as computed in Section 4.1, we obtain the restricted optimization problem (22). This sub-problem has small dimension. In fact, the equality constraint implies that λ_{x_1,a_1} + λ_{x_2,a_2} is constant; hence, the problem is in fact a one-dimensional QP that can be solved in closed form.
Further, constructing this QP requires generating two columns of Q, which entails computational effort of order O(NA).
Note that the subset B is chosen so that it is guaranteed to contain a descent direction, according to the procedure in Section 4.1. The solution of (22) will therefore produce an iterate λ^(t) that is feasible for the original problem (9) and has lower objective value than the prior iterate λ^(t-1).
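The closed-form solve can be sketched as follows. With B = {i, j} and direction d = e_j − e_i, substituting λ + t·d into the quadratic objective gives a scalar quadratic in t with minimizer t* = −dᵀ(Qλ + R)/dᵀQd, which is then clipped to keep λ feasible. The names and the `t_max` encoding of the per-state mass slack are ours, for illustration:

```python
import numpy as np

def solve_pair_qp(Q, R, lam, i, j, t_max):
    """Minimize 0.5 (lam+t d)'Q(lam+t d) + R'(lam+t d) over t, with d = e_j - e_i.

    t is clipped to [0, min(lam[i], t_max)]: t <= lam[i] keeps lam[i] - t >= 0,
    and t_max encodes any remaining slack in the per-state mass constraint.
    Only columns i and j of Q are touched, an O(NA) computation.
    """
    qd = Q[:, j] - Q[:, i]                # Q d, built from two columns of Q
    g = qd @ lam + (R[j] - R[i])          # d'(Q lam + R): directional derivative
    H = qd[j] - qd[i]                     # d'Q d (> 0 for positive definite Q)
    t = -g / H if H > 0 else np.inf       # unconstrained scalar minimizer
    t = float(np.clip(t, 0.0, min(lam[i], t_max)))
    lam_new = lam.copy()
    lam_new[i] -= t
    lam_new[j] += t
    return lam_new
```

Since the direction d is chosen to be a descent direction, the clipped step t is strictly positive away from optimality and the objective strictly decreases.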
Finally, we state a proposition establishing the correctness of our active set method: if the prior iterate λ ≜ λ^(t-1) is sub-optimal, then there must exist a descent direction of cardinality two.
Our iterative procedure will therefore improve the solution, and will only terminate when global optimality is achieved.
Proposition 2. If λ ∈ R^{Ŝ×A} is feasible but sub-optimal for (9), then there exists a descent direction of cardinality two.
This result is proved in Appendix D.

Case Study: A Queueing Network
This section considers the problem of controlling the so-called 'Rybko-Stolyar' queueing network illustrated in Figure 1, with the objective of minimizing long-run average delay. There are two 'flows' in this network: the first through server 1 followed by server 2 (with buffering at queues 1 and 2, respectively), and the second through server 2 followed by server 1 (with buffering at queues 4 and 3, respectively). All inter-arrival and service times are exponential, with rate parameters summarized in Figure 1.
This specific network has been studied by de Farias and Van Roy (2003), Chen and Meyn (1998), and Kumar and Seidman (1990), for example, and closely related networks have been studied by Harrison and Wein (1989), Kushner and Martins (1996), and Muthuraman (2004). It is widely considered a challenging control problem, and considerable effort has been invested in designing scheduling policies with networks of this type in mind.
Our goal in this section is twofold. First, we show that the RSALP, used 'out-of-the-box' with a generic kernel, can match or surpass the performance of tailor-made heuristics and state-of-the-art parametric ADP methods. Second, we show that the RSALP can be solved efficiently.

MDP Formulation
Although the control problem at hand is nominally a continuous-time problem, it is routinely converted into a discrete-time problem via a standard uniformization device; see Moallemi et al. (2008), for instance, for an explicit example. In the equivalent discrete-time problem, at most a single event can occur in a given epoch, corresponding either to the arrival of a job at queue 1 or 4, or to the arrival of a service token for one of the four queues, with probability proportional to the corresponding rates. The state of the system is described by the number of jobs in each of the four queues, so that S ≜ Z_+^4, whereas the action space A consists of four potential actions, each corresponding to a matching between servers and queues. We take the single-period cost to be the total number of jobs in the system, so that g_{x,a} = ‖x‖_1. Finally, we take α ≜ 0.9 as our discount factor.
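One uniformized transition step might look as follows. This is an illustrative sketch only: the rate values, the action encoding (a pair giving the queue each server works on), and the function name are ours, not taken from the paper; the routing follows the two flows described above (queue 1 feeds queue 2, queue 4 feeds queue 3, and jobs exit after queues 2 and 3).

```python
import random

def uniformized_step(x, action, arr, mu, rng=random):
    """One epoch of a uniformized Rybko-Stolyar-type network (sketch).

    x      : [q1, q2, q3, q4] queue lengths; the state space is Z_+^4.
    action : (q_srv1, q_srv2), the queue index each server attends to
             (server 1 serves queue 1 or 3; server 2 serves queue 2 or 4).
    arr    : arrival rates for queues 1 and 4; mu: service rates, per queue.
    At most one event occurs per epoch, with probability proportional to its rate.
    """
    total = sum(arr) + sum(mu)            # uniformization constant
    u = rng.random() * total
    x = list(x)
    if u < arr[0]:                        # exogenous arrival to queue 1
        x[0] += 1
    elif u < arr[0] + arr[1]:             # exogenous arrival to queue 4
        x[3] += 1
    else:                                 # a service token for some queue
        u -= arr[0] + arr[1]
        for q in range(4):
            if u < mu[q]:
                # the token is useful only if queue q is served and non-empty
                if q in action and x[q] > 0:
                    x[q] -= 1
                    if q == 0:            # flow 1: queue 1 -> queue 2
                        x[1] += 1
                    elif q == 3:          # flow 2: queue 4 -> queue 3
                        x[2] += 1
                break
            u -= mu[q]
    return x
```

Wasted service tokens (for idle or unserved queues) leave the state unchanged, which is what makes the discrete-time chain equivalent to the continuous-time dynamics.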

Approaches
The following scheduling approaches were considered for the queueing problem:
RSALP (this paper). We solve (9) using the active set method outlined in Section 4, taking as our kernel the standard Gaussian kernel K(x, y) ≜ exp(−‖x − y‖²₂/h), with the bandwidth parameter h ≜ 100. Note that this implicitly corresponds to a full-dimensional basis function architecture.
Since the idealized sampling distribution π_{µ*,ν} is unavailable to us, we use in its place the geometric distribution with the sampling parameter ζ set at 0.9. This choice mimics that of de Farias and Van Roy (2003).
The regularization parameter Γ was chosen via a line-search; we report results for Γ ≜ 10⁻⁶. In accordance with the theory, we set the constraint violation parameter κ ≜ 2/(1 − α), as suggested by the analysis of Section 3, as well as by Desai et al. (2012) for the SALP.
SALP (Desai et al., 2012). The SALP formulation (2) is, as discussed earlier, the parametric counterpart to the RSALP. It may be viewed as a generalization of the ALP approach proposed by de Farias and Van Roy (2003) and has been demonstrated to provide substantial performance benefits relative to the ALP approach. Our choice of parameters for the SALP mirrors that for the RSALP to the extent possible, so as to allow for an 'apples-to-apples' comparison. Thus, as earlier, we solve the sample average approximation of this program using the same sampling distribution π as in (23), and we set κ ≜ 2/(1 − α). Being a parametric approach, the SALP requires an appropriate approximation architecture to be specified. Architectures that span (at least) quadratic polynomials are known to work well for queueing problems. We use the basis functions used by de Farias and Van Roy (2003) for a similar problem modeled directly in discrete time; in particular, we use all monomials of degree at most 3, which we will call the cubic basis, as our approximation architecture.
Longest Queue First (generic). This is a simple heuristic approach: at any given time, a server chooses to work on the longest queue from among those it can service.

Max-Weight (Tassiulas and Ephremides, 1992). Max-Weight is a well-known scheduling heuristic for queueing networks. The policy is obtained as the greedy policy with respect to a value function approximation J̃ parameterized by an exponent ε > 0. This policy has been extensively studied and shown to have a number of good properties, for example, being throughput optimal (Dai and Lin, 2005) and offering good performance in critically loaded settings (Stolyar, 2004). Via a line-search, we chose the exponent 1.5 for our experiments.
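A sketch of one common Max-Weight variant is given below. The exact functional form of J̃ used in the experiments is not spelled out here, so the weight μ_q · x_q^ε, the idling convention, and the function name are assumptions made for illustration, with ε = 1.5 as in the text.

```python
def max_weight_action(x, mu, eps=1.5):
    """Max-Weight sketch: each server serves its non-empty queue with the
    largest weight mu_q * x_q**eps.

    Server 1 serves queues {1, 3} (indices 0, 2); server 2 serves queues
    {2, 4} (indices 1, 3).  The weight form is a common variant, assumed
    here for illustration rather than taken from the paper.
    """
    def pick(queues):
        nonempty = [q for q in queues if x[q] > 0]
        if not nonempty:
            return queues[0]              # idling convention: nominal assignment
        return max(nonempty, key=lambda q: mu[q] * x[q] ** eps)
    return (pick([0, 2]), pick([1, 3]))
```

Because the weight grows super-linearly in queue length when ε > 1, long queues attract service aggressively, which is the mechanism behind Max-Weight's throughput optimality.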

Results
Policies were evaluated using a common set of arrival process sample paths. The performance metric we report for each control policy is the long-run average number of jobs in the system under that policy over a horizon of T ≜ 10,000 epochs. We further average this random quantity over an ensemble of 300 sample paths. Note that the performance measure we report is thus the average cost associated with a particular policy, whereas the RSALP optimizes the discounted infinite-horizon cost.
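This evaluation protocol can be sketched generically (the function and parameter names are ours): simulate each path for T epochs under the policy, average the instantaneous job count over time, and then average over the ensemble of paths, reusing the same seeds across policies so that all policies see a common set of sample paths.

```python
import random

def average_jobs(step, policy, x0, T, seed_paths):
    """Estimate the long-run average number of jobs under a policy.

    Each sample path is simulated for T epochs; the total queue length is
    averaged over time, then over the ensemble of paths.  Passing the same
    `seed_paths` to every policy gives common random numbers, i.e. a shared
    set of arrival-process sample paths across the compared policies.
    """
    costs = []
    for seed in seed_paths:
        rng = random.Random(seed)
        x, total = list(x0), 0.0
        for _ in range(T):
            x = step(x, policy(x), rng)   # one uniformized transition
            total += sum(x)               # instantaneous cost: jobs in system
        costs.append(total / T)
    return sum(costs) / len(costs)        # ensemble average over sample paths
```

`step(x, a, rng)` advances the chain one epoch and `policy(x)` selects the action; any of the policies above can be plugged in unchanged.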
Further, in order to generate SALP and RSALP policies, state sampling is required. To understand the effect of the sample size on the resulting policy performance, the different sample sizes listed in Table 1 were used. Since the policies generated involve randomness due to the sampled states, we further average performance over 10 sets of sampled states. The results are reported in Table 1 and have the following salient features:
1. RSALP outperforms established policies: Approaches such as Max-Weight or 'parametric' ADP with a basis spanning polynomials have previously been shown to work well for the problem of interest. We see that the RSALP with more than 3,000 samples achieves performance superior to these extant schemes.
2. Sampling improves performance: This is expected from the theory in Section 3. Ideally, as the sample size is increased one should relax the regularization. However, for our experiments we noticed that the performance is quite insensitive to the parameter Γ. Nonetheless, it is clear that larger sample sets yield a significant performance improvement.
3. RSALP is less sensitive to state sampling: We notice from the standard deviation values in Table 1 that our approach gives policies whose performance varies significantly less across different sample sets of the same size.
In Figure 2, we see how performance varies for the RSALP method, as a function of the discount factor α and the regularization parameter Γ.
First, consider the choice of regularizer Γ. From Figure 2, we see that performance degrades for large choices of the parameter (presumably since this restricts the richness of the approximation architecture) as well as for very small choices of the parameter (presumably due to poor generalization). While there is degradation in performance for small values of Γ, this degradation is noticeable but limited in our experiments (average cost is roughly 20% worse for the smallest choice than for the optimal choice).
Now, consider the choice of discount factor α. Note that we are concerned with (and report) average cost performance, but solve a discounted problem formulation. While this may appear inconsistent at first glance, note that the average cost criterion can be seen as a weighted average, across states, of the discounted cost incurred starting at those states, with the weights determined by the distribution over initial states. Moreover, we view the choice of α as an algorithmic tuning parameter that implicitly regularizes a bias-variance tradeoff in the learning algorithm. Specifically, if α is too low, there is 'bias' introduced through the misalignment of the discounted and average cost objectives. On the other hand, if α is too large, the sample complexity degrades and many more samples are needed. This is consistent with the approximation guarantees we have developed, and also with the results in Figure 2: values of α closer to 1 result in degraded average cost performance. This is in keeping with the view of the discount factor as a regularizer in the broader reinforcement learning literature (e.g., Dewanto and Gallagher, 2021; Jiang et al., 2015; Amit et al., 2020).
In summary, we view these results as indicative of the possibility that the RSALP may serve as a practical and viable alternative to state-of-the-art parametric ADP techniques.

Conclusions
This paper set out to present a practical non-parametric algorithm for approximate dynamic programming, building upon the success of linear programming based methods that require the user to specify an approximation architecture. We believe that the RSALP, presented and studied here, is such an algorithm. In particular, the theoretical guarantees presented here establish the 'non-parametric' nature of the algorithm by showing that increased sampling effort leads to improved approximations. On the empirical front, we have shown that our essentially 'generic' procedure was able to match the performance of tailor-made heuristics as well as ADP approaches using pre-specified approximation architectures. Nevertheless, several interesting directions for future work are evident at this juncture:
• The choice of kernel: The choice of kernel matters insofar as the feature map it encodes allows for approximations with small Hilbert norm (i.e., small C). Thus, a good choice of kernel requires fewer samples to achieve a fixed approximation accuracy than a poor choice of kernel. That said, picking a useful kernel is an apparently easier task than picking a low-dimensional architecture: there are many full-dimensional kernels available, and with sufficient sampling, they will achieve arbitrarily good value function approximations. Nevertheless, it would be valuable to understand the interplay between the choice of kernel and the corresponding value of C for specific problem classes (asking for anything more general appears rather difficult).
• The active set method: In future work, we would like to fine-tune and build more extensible software implementing our active set method for wider use, given the generic nature of the quadratic programs it solves.

A. Duality of the Sampled RSALP
Proof of Proposition 1. We begin with a few observations about the primal program, (6):
1. Because the objective function is coercive (as ‖z‖_H → ∞, the objective value goes to −∞), the weight vector z can be restricted, without loss of generality, to some finite ball in H. The optimal value of the primal is consequently finite.
2. The primal has a feasible interior point, obtained by a suitable choice of the decision variables for some ε > 0.
3. The optimal value of the primal is achieved. To see this, note that it suffices to restrict z to the finite-dimensional subspace of H spanned by the vectors corresponding to the states in Ŝ ∪ N(Ŝ), where N(Ŝ) denotes the set of states that can be reached from the sampled states of Ŝ in a single transition. Then, the feasible region of the primal can be restricted, without loss of optimality, to a compact subset of H × R^Ŝ × R. Since the objective function of the primal is continuous, its optimal value must be achieved, by the Weierstrass theorem.
We next derive the dual to (6). As in Luenberger (1997, Chapter 8), we define the Lagrangian L(z, b, s, λ) and the dual function G(λ) ≜ inf_{(z,b,s)∈D} L(z, b, s, λ), where we denote by D the feasible region of the primal problem. Now, observe that for any given λ and any finite (b, s), L(z, b, s, λ) is (uniquely) minimized in z. This follows from the fact that z can be optimized separately from (b, s), since there are no cross terms, together with a standard completion-of-squares argument in H. It follows immediately that G(λ) takes the stated form on the set defining the feasible region of the program (9), and moreover that G(λ) = +∞ outside that set. This suffices to establish that the dual problem inf_{λ≥0} G(λ) is precisely the program (9).
The first conclusion of Luenberger (1997, Theorem 1, pp. 224-225) and the first and second observations we made at the outset of our proof then suffice to establish that programs (6) and (9) have equal optimal values (i.e. strong duality holds) and that the optimal value of the dual program is achieved at some λ * . Our third observation, (24), and the second conclusion of Luenberger (1997, Theorem 1, pp. 224-225) then suffice to establish our second claim.

B. Proof of Theorem 1
Here we state and prove a refined version of Theorem 1; specifically, Theorem 1 is implied by Theorem 2 below. First, however, we introduce a certain 'weighted' max norm and the notion of a Lyapunov function. Let Ψ ≜ {ψ ∈ R^S : ψ ≥ 1} be the set of all functions on the state space bounded from below by 1. For any ψ ∈ Ψ, define the weighted ∞-norm ‖·‖_{∞,1/ψ} by ‖J‖_{∞,1/ψ} ≜ max_{x∈S} |J(x)|/ψ(x). Our use of such weighted norms will allow us to emphasize approximation error differently across the state space. Further, we define the quantity β(ψ); for a given ψ, β(ψ) gives the worst-case expected gain of the Lyapunov function ψ over any state-action pair (x, a).
Recall that we let (ẑ, b̂) be an optimal solution to the idealized sampled program (18), and let K ≜ max_{x∈S} ‖x‖_H. The following result, Theorem 2, constitutes our main approximation guarantee: if (ẑ, b̂) is an optimal solution to (18), then, with probability at least 1 − δ − δ_4, the stated bound on the weighted approximation error holds. Note that this result implies Theorem 1 by choosing ψ to be the vector of all ones. The proof broadly proceeds in two steps. First, we prove a uniform concentration guarantee that allows us to show that the one-sided expected Bellman error minimized by the RSALP is well approximated by its sampled counterpart. The main part of the proof then leverages the structure of the feasible region of the RSALP and properties of the Bellman operator to establish the result.

B.1. Uniform Concentration Bounds
We begin by defining the empirical Rademacher complexity of a class of functions F from S to R as R̂_n(F) ≜ E[ sup_{f∈F} |(2/n) Σ_{i=1}^n σ_i f(X_i)| | X_1, …, X_n ], where the σ_i are i.i.d. random variables that take value 1 with probability 1/2 and −1 with probability 1/2, and the X_i are i.i.d. S-valued random variables drawn from the distribution π. We denote by R_n(F) ≜ E R̂_n(F) the Rademacher complexity of F.
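For intuition, the empirical Rademacher complexity of a finite function class can be estimated by Monte Carlo directly from its definition. The sketch below (names and setup ours) represents the class by the values each function takes on the fixed sample X_1, …, X_n:

```python
import random

def empirical_rademacher(fvals, n_draws=2000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_hat_n(F) = E_sigma sup_{f in F} |(2/n) sum_i sigma_i f(X_i)|,
    for a finite class F given by `fvals`: one row per function, each row
    holding that function's values on the fixed sample X_1 .. X_n.
    """
    rng = rng or random.Random(0)
    n = len(fvals[0])
    total = 0.0
    for _ in range(n_draws):
        # draw i.i.d. Rademacher signs, +1 or -1 with probability 1/2 each
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(abs(2.0 / n * sum(s * v for s, v in zip(sigma, row)))
                     for row in fvals)
    return total / n_draws
```

Richer classes correlate better with random sign patterns and thus score higher; a class of functions uniformly bounded by B can score at most 2B.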
We begin with the following abstract sample complexity result: let F be a class of functions mapping S to R that are uniformly bounded by some constant B, and denote for any function f ∈ F the empirical expectation Ê_n f ≜ (1/n) Σ_{i=1}^n f(X_i), where the X_i are i.i.d. random draws from S as above. We then have the sample complexity result of Lemma 1. This result is standard; for completeness, the proof may be found in Appendix C. (Indeed, the calculations in Lemmas 1 and 2 are routine; see Bartlett and Mendelson, 2002.) Next, we establish the Rademacher complexity of a specific class of functions: fixing a policy µ, consider the set F_{S,µ} of functions mapping S to R defined accordingly. Lemma 2 bounds its Rademacher complexity for any policy µ.
Proof. Observe that, by the triangle inequality, the relevant bound holds for all x ∈ S. Now, let the X_i be i.i.d. samples in S, and let the X̄_i be the corresponding elements of H. Consider also the class F_B of functions mapping S to R. It is easy to show that R_n(F_B) ≤ 2B/√n, so that together with the previous lemma, the results of Bartlett and Mendelson (2002, Theorem 12, parts 4 and 5) allow us to conclude Corollary 1.
We have:
Lemma 3. For every f ∈ F_S and every f ∈ F_{S,µ}, we have that ‖f‖_∞ ≤ C/2; moreover, a corresponding concentration bound holds.
Proof. The first claim follows by direct calculation. For the second claim, Hoeffding's inequality, applied to the sum of independent bounded random variables, yields the result.
Corollary 1, Lemma 1, and the first part of Lemma 3 yield the following sample complexity result (Theorem 3).
Proof. By the first part of Lemma 3, we have for any f ∈ F_{S,µ} that ‖f‖_∞ ≤ C/2. Recall also that, by Corollary 1, R_n(F_{S,µ}) ≤ C/√n. Consequently, the result follows from Lemma 1. This completes the proof.
Theorem 3 will constitute a crucial sample complexity bound for our main result; we now proceed with the proof of Theorem 1.

B.2. Proof of Theorem 1
Let (ẑ, b̂, ŝ) be the optimal solution to the sampled program (18). Observe that we may assume, without loss of generality, the stated normalization, where ∆* ≜ (I − αP_{µ*})^{-1}. Let π̂_{µ*,ν} be the empirical distribution obtained by sampling N states according to π_{µ*,ν}. Now let z, b satisfying ‖z‖_H ≤ C, |b| ≤ B be given, and define s_{z,b} ≜ (TJ_{z,b} − J_{z,b})^+; then (z, b, s_{z,b}) constitutes a feasible solution to (18). Finally, let ψ ∈ Ψ be arbitrary. We note that the corresponding quantity in the proof of Theorem 2 could instead be replaced by the expression
ν^⊤(J* − J_{z,b}) + (2/(1 − α)) π̂_approx^⊤ s_{z,b} + 2ε/(1 − α) + (2/(1 − α)) (π̂_{µ*,ν} − π̂_approx)^⊤ ŝ ≤ ν^⊤(J* − J_{z,b}) + (2/(1 − α)) π̂_approx^⊤ s_{z,b} + 2ε/(1 − α) + (1/(1 − α)) TV(π̂_{µ*,ν}, π̂_approx) ‖ŝ‖_∞,
where TV(·, ·) is the total variation distance between distributions. Noting that we may minimize this expression over (z, b), the first three terms yield an upper bound identical to that in the current proof, with π_{µ*,ν} replaced instead by π_approx. The last term represents the additional error introduced by non-idealized sampling. Whereas it is clear that this term goes to zero as π_approx approaches π_{µ*,ν}, we can in general only get crude control on this rate, since an upper bound on ‖ŝ‖_∞ will in general depend on the diameter of the feasible region.
Alternatively, if instead we had finer control on the approximation that π_approx provides to π_{µ*,ν} (say, π_{µ*,ν}(s)/π_approx(s) ≤ K + 1 for all s), then the additional error term permits a computable bound, since we would have (through an appropriate coupling of the samples under π_{µ*,ν} and π_approx) that (π̂_{µ*,ν} − π̂_approx)^⊤ ŝ ≤ K π̂_approx^⊤ ŝ, with high probability, where the latter quantity is directly computable from the optimal solution to the RSALP.
McDiarmid's inequality (or equivalently, Azuma's inequality) then implies the desired concentration bound.

D. Proof of Proposition 2

Define the set T. First, consider the case T = ∅. In this case, for all x such that Σ_{a∈A} λ_{x,a} = κ/N, we have d_{x,a} ≤ 0. Since d is a descent direction, we have g^⊤ d < 0. Lemma 4 can be applied to obtain a pair (x_1, a_1) and (x_2, a_2) such that d_{x_1,a_1} > 0 and d_{x_2,a_2} < 0, with g_{x_1,a_1} < g_{x_2,a_2}. Further, x_1 is such that Σ_{a∈A} λ_{x_1,a} < κ/N, and (x_2, a_2) is such that λ_{x_2,a_2} > 0. These conditions ensure that if λ_{x_1,a_1} is increased and λ_{x_2,a_2} is decreased by the same amount, then the objective decreases, and hence we obtain a descent direction of cardinality two. By assuming that T = ∅, we have avoided the corner case of not being able to increase d_{x_1,a_1} due to Σ_{a∈A} λ_{x_1,a} = κ/N.
For T ≠ ∅, assume without loss of generality that |P_x| = 2 for each x ∈ T, i.e., that d has exactly two non-zero components corresponding to the state x; this assumption is justified at the end of the proof.
Denote these indices by (x, a_+) and (x, a_−), so that d_{x,a_+} > 0 and d_{x,a_−} < 0. From the first constraint of (20), we must have that d_{x,a_+} ≤ (d_{x,a_−})^−.
There are two cases:
(i) Suppose g_{x,a_+} < g_{x,a_−} for some x ∈ T.
Then, we can define a descent direction d̃ ∈ R^{Ŝ×A} of cardinality two by setting d̃_{x,a_+} = 1, d̃_{x,a_−} = −1, and all other components of d̃ to zero.
(ii) Suppose that g_{x,a_+} ≥ g_{x,a_−} for all x ∈ T.
For all x ∈ T, define d̃_x ≜ (d_{x,a_−})^− − d_{x,a_+} ≥ 0. The fact that Σ_{x,a} d_{x,a} = 0, together with the corresponding relations holding for all x ∈ T and the fact that d is a descent direction, yields the inequalities (28) and (29). Applying Lemma 4 to (28) and (29), there must be a pair of indices (x_1, a_1) and (x_2, a_2) such that d̃_{x_1,a_1} > 0, d̃_{x_2,a_2} < 0, and g_{x_1,a_1} < g_{x_2,a_2}. For such (x_1, a_1) and (x_2, a_2), we obtain a descent direction in which we increase λ_{x_1,a_1} and decrease λ_{x_2,a_2} by the same amount, yielding a decrease in the objective. Note that since d̃_{x_1,a_1} > 0, we have that d_{x_1,a_1} > 0 and x_1 ∉ T, hence Σ_{a∈A} λ_{x_1,a} < κ/N. Also, by construction, (x_2, a_2) is chosen such that d_{x_2,a_2} < 0, implying that λ_{x_2,a_2} > 0. Thus the specified direction is also feasible, and we have a feasible descent direction of cardinality two.
Finally, to complete the proof, consider the case where there are some x ∈ T with |P_x| > 2, i.e., d has more than two non-zero components corresponding to the state x. For each x ∈ T, fix actions a_1 and a_2 and define
d̃_{x,a} ≜ Σ_{a'∈A_x^+} d_{x,a'} if x ∈ T and a = a_1, Σ_{a'∈A_x^−} d_{x,a'} if x ∈ T and a = a_2, and d_{x,a} otherwise,
where A_x^+ and A_x^− denote the sets of actions at which d_{x,·} is positive and negative, respectively.
It is easy to verify that d̃ is also a feasible descent direction. Furthermore, d̃ has exactly two non-zero components corresponding to each state x ∈ T.