Optimizing a variable-rate diffusion to hit an infinitesimal target at a set time

I consider a stochastic optimization problem for a time-changed Bessel process whose diffusion rate is constrained to be between two positive values $r_1 < r_2$. The problem is to find an optimal adapted strategy for the choice of diffusion rate in order to maximize the chance of hitting an infinitesimal region around the origin at a set time in the future. More precisely, the parameter associated with "the chance of hitting the origin" is the exponent for a singularity induced at the origin of the final time probability density. I show that the optimal exponent solves a transcendental equation depending on the ratio $\frac{r_2}{r_1}$ and the dimension of the Bessel process.


Introduction
Pick $y \in \mathbb{R}$ and positive numbers $r_1$, $r_2$, $T$ with $r_1 < r_2$. For a Borel measurable function $D : \mathbb{R} \times [0, T] \to [r_1, r_2]$, let $X_t \in \mathbb{R}$ be the weak solution to the one-dimensional stochastic differential equation
$$dX_t = D(X_t, t)\, dB_t, \qquad X_0 = y, \qquad t \in [0, T], \tag{1.1}$$
for a standard Brownian motion $B_t$. Weak solutions to diffusion equations of the form (1.1) exist and are unique by [4, Sect. 7.3]. In broad terms, the question I address in this article is the following: what choice of diffusion coefficient maximizes the probability that $X_t$ lands in an infinitesimal region around the origin at the final time $T$, given the constraint $r_1 \le D(x, t) \le r_2$? To form a more precise statement of the question, let us consider the probability density $P^{(D)}_{y,T}(x)$ of the variable $X_T$. Maximizing the chance of landing in "an infinitesimal region around the origin" at time $T$ does not merely mean maximizing the density $P^{(D)}_{y,T}(x)$ at $x = 0$; the relevant quantity is the exponent $I(D)$ of the singularity that the strategy induces in the density at the origin: if $P^{(D)}_{y,T}(x) \sim |x|^{-\eta}$ around $x = 0$ for $\eta > 0$, then $I(D) = \eta$. In particular, if $D(x, t)$ is constant, then $P^{(D)}_{y,T}(x)$ is bounded and $I(D) = 0$. If we think of $D(x, t)$ as the strategy of a random walker $X_t$ attempting to maximize his chance of arriving at the origin at time $T$, it is reasonable that he should rush with the maximum diffusion rate $r_2$ when he judges himself to be far away given the time remaining, and that he should bide his time with the minimum diffusion rate $r_1$ when he judges himself to be close. Thus it is natural to have $D(x, t) \nearrow r_2$ as $|x| \nearrow \infty$ for all $t \in [0, T)$. Since $X_t$ is a time-changed Brownian motion with diffusion rates restricted to $[r_1, r_2]$, our optimization problem inherits a Brownian scale invariance when viewed from the origin and the final time $T \in \mathbb{R}_+$: the random walker should make the same choice of diffusion rate at the space-time points $(x, t)$ and $\big(\lambda x,\, T - \lambda^2 (T - t)\big)$ for any $\lambda > 0$. Any strategy $D(x, t)$ consistent with the above scale invariance should depend on $(x, t)$ only through the self-similar variable $x/\sqrt{T - t}$. For the above reason, I will focus my analysis on diffusion coefficients of the form $D(x, t) = R\big(x/\sqrt{T - t}\big)$.
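To make the setup concrete, here is a minimal Monte Carlo sketch (not from the original text) of the SDE (1.1) under a parabolic bang-bang strategy. The bounds $r_1, r_2$, the cut-off $c$, the window width, and all numerical parameters are hypothetical choices for illustration only.

```python
import numpy as np

# Euler-Maruyama sketch of dX_t = D(X_t, t) dB_t with a parabolic
# bang-bang strategy D(x, t) = R(x / sqrt(T - t)).  All parameter
# values (r1, r2, cut-off c, step counts) are hypothetical.
r1, r2, c = 0.5, 2.0, 1.0
y, T = 1.0, 1.0

def simulate(strategy, n_steps=2000, n_paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, y)
    for k in range(n_steps):
        t = k * dt
        x = x + strategy(x, t) * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

def bang_bang(x, t):
    # rush at rate r2 when far on the Brownian scale, wait at r1 when close
    z = np.abs(x) / np.sqrt(T - t)
    return np.where(z >= c, r2, r1)

def constant_rate(x, t):
    return np.full_like(x, r1)

eps = 0.05  # width of the "infinitesimal" window around the origin
hits_bb = np.mean(np.abs(simulate(bang_bang)) < eps)
hits_const = np.mean(np.abs(simulate(constant_rate)) < eps)
print(hits_bb, hits_const)
```

In experiments of this kind one expects the bang-bang strategy to place more mass in the small window than the constant strategy, reflecting $I(D) > 0$ for a well-chosen parabolic strategy versus $I(D) = 0$ for any constant one.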
It is also natural to assume that $R$ is symmetric about zero. Theorem 1.1 is the main result of this article. To state the result, we need to define positive numbers $\kappa$ and $\eta$ solving the pair of transcendental equations (1.5), which depend on the diffusion bounds $r_1$, $r_2$ only through their ratio $V := \frac{r_2}{r_1}$. I will denote by $B(\mathbb{R}, [r_1, r_2])$ the collection of symmetric, Borel measurable functions from $\mathbb{R}$ to $[r_1, r_2]$ such that $\lim_{z \to \infty} R(z) = r_2$.
Theorem 1.1. Fix $y \in \mathbb{R}$ and positive numbers $T$, $r_1$, $r_2$ with $r_2 > r_1$. Let $P^{(R)}_{y,T}$ denote the final time density associated with a strategy $R \in B(\mathbb{R}, [r_1, r_2])$. The exponent attains its maximum value $\eta(V)$ over $B(\mathbb{R}, [r_1, r_2])$, and the maximum is attained uniquely for $R^* : \mathbb{R} \to [r_1, r_2]$ of the bang-bang form $R^*(z) = r_1 + (r_2 - r_1)\chi(|z| \ge c)$, with cut-off $c$ determined by $\kappa(V)$.

It is instructive to examine the limiting behavior of the exponent $\eta(V)$ and the cut-off parameter $\kappa(V)$ characterizing the optimal solution (1.5) in the respective limits $V \searrow 1$ and $V \nearrow \infty$. One surprise is that for large $V$ the optimal cut-off $\kappa(V)$ approaches a finite value $\kappa \in \mathbb{R}_+$ solving a limiting transcendental equation.
• The values $\eta(V)$, $\kappa(V)$ increase continuously with the parameter $V \in (1, \infty)$.
The optimization problem described above focuses on maximizing the probability of certain vanishingly low-chance events. In particular, there is no penalty for landing far from the target region. It is a very different problem, for instance, to minimize a quantity of the form $\int_{\mathbb{R}} \varphi(x)\, P^{(D)}_{y,T}(x)\, dx$, where $P^{(D)}_{y,T}$ is defined as in (1.2) and $\varphi : \mathbb{R} \to \mathbb{R}_+$ is a convex function quantifying the penalty for landing away from the target point at the final time $T$. When $D(x, t)$ is restricted to the range $[r_1, r_2]$, the optimal strategy for the penalty problem is simply to always use the lowest available diffusion rate $r_1$. Discussion of optimization problems in stochastic settings can be found in [1, 2].
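The contrast with the penalty problem can be seen in a worked special case (not in the original text): for the quadratic penalty $\varphi(x) = x^2$, the optimality of the constant strategy $r_1$ reduces to the Itô isometry.

```latex
% Worked example for the quadratic penalty \varphi(x) = x^2.
% Since X_t is a martingale started at X_0 = y, the It\^o isometry gives
\begin{align*}
\mathbb{E}\big[X_T^2\big]
  \;=\; y^2 + \mathbb{E}\!\int_0^T D(X_t, t)^2\, dt
  \;\ge\; y^2 + r_1^2\, T,
\end{align*}
% with equality exactly when D \equiv r_1: the slowest admissible
% diffusion keeps the process closest to the origin in mean square.
```

The general convex-penalty statement requires more care, but the quadratic case already shows why no bang-bang behavior appears in the penalty problem.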

The stationary dynamics
The restriction of the diffusion coefficient $D(x, t)$ to the parabolic form $R\big(x/\sqrt{T - t}\big)$ makes the process stationary in the rescaled variable
$$Z_s := \frac{X_t}{\sqrt{T - t}}, \qquad s := \log\frac{T}{T - t}. \tag{2.1}$$
Through the transformation (2.1), we can use the stationary dynamics of $Z_s$ to study the density $P^{(R)}_{y,T}$. The reparameterized process is thus a diffusion with a repulsive drift that grows proportionally to the distance from the origin, with drift term $\frac{1}{2} Z_s\, ds$ and noise driven by a copy $B'_s$ of standard Brownian motion. The trajectories of the process $Z_s$ undergo an essentially exponential divergence to infinity after wandering near the origin for a finite time period. The state of the original process $X_t$ at the final time $T$ is recovered by the limit $X_T = \lim_{s \to \infty} \sqrt{T}\, e^{-s/2} Z_s$. In the next two sections I will study the stationary dynamics more closely.
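The linear repulsive drift can be derived in two lines from Itô's formula; this sketch (not in the original) assumes the change of variables $Z_s = X_t/\sqrt{T - t}$, $s = \log\frac{T}{T - t}$ as the content of (2.1).

```latex
% Sketch, assuming (2.1) reads Z_s = X_t/\sqrt{T-t}, s = \log(T/(T-t)).
% X_t is a martingale, so dX_t carries no dt term, and
\begin{align*}
dZ_s \;=\; \frac{dX_t}{\sqrt{T-t}} + \frac{1}{2}\,\frac{X_t}{(T-t)^{3/2}}\,dt
     \;=\; \frac{dX_t}{\sqrt{T-t}} + \frac{1}{2}\,Z_s\,\frac{dt}{T-t}
     \;=\; \frac{dX_t}{\sqrt{T-t}} + \frac{1}{2}\,Z_s\,ds,
\end{align*}
% since ds = dt/(T-t).  The martingale part is the time-changed noise,
% written in terms of the copy B'_s of standard Brownian motion.
```

The drift coefficient $\frac{1}{2}$ comes entirely from the time change, independently of the choice of $R$.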

Analysis of the generators for the stationary dynamics
Let $B(\mathbb{R}, [r_1, r_2])$ denote the collection of Borel measurable functions from $\mathbb{R}$ to $[r_1, r_2]$. For a given element $R \in B(\mathbb{R}, [r_1, r_2])$, consider the backwards Kolmogorov generator $L^{(R)}$ of the stationary dynamics. The operator $L^{(R)}$ is self-adjoint when acting on the weighted $L^2$-space defined below. Let $L^2\big(\mathbb{R}, w(x)dx\big)$ be the Hilbert space with inner product $\langle f | g \rangle_R$ defined with respect to the weight $w(x)dx$. The corresponding norm is denoted by $\|f\|_{2,R} := \langle f | f \rangle_R^{1/2}$.
Proof. The first task is to verify that $L^{(R)}$ maps $\mathcal{D}$ into $L^2\big(\mathbb{R}, w(x)dx\big)$. Using integration by parts, I have the equality (3.2) for all smooth functions in $\mathcal{D}$. The inequality (3.3) simply uses that $R(x) \ge r_1$. The equality (3.2) extends to all elements in $\mathcal{D}$ and implies relative boundedness with respect to $(L^{(R)}, \mathcal{D})$. I now focus on showing that $\frac{d^2}{dx^2}$ is also relatively bounded with respect to $L^{(R)}$. With the lower bound (3.3), it will be enough to demonstrate that there is a $C > 0$ such that a bound of the form (3.4) holds. It is convenient to split the integration over $\mathbb{R}$ into the domains $|x| \le L$ and $|x| > L$ for some $L \gg 1$ to get the bound (3.5). For the first term on the right side of (3.5), using integration by parts, Cauchy–Schwarz, and the inequality $2uv \le u^2 + v^2$ yields the first inequality of (3.6) for any $c > 0$. The second inequality of (3.6) follows from the relation $w(x) \ge r_2^{-1}$. For the second term on the right side of (3.5), I have the inequalities (3.7). The first inequality of (3.7) is Chebyshev's, and the second inequality is discussed below. By writing out $\frac{d^2}{dx^2} f$ and expanding the left side of (3.2), I obtain an inequality whose second step is by Cauchy–Schwarz and $R(x) \le r_2$. Thus $\big\| x \frac{d}{dx} f \big\|_{2,R}$ is smaller than $2 r_2 \big\| \frac{d^2}{dx^2} f \big\|_{2,R}$, as required to get the second inequality of (3.7). Picking $L \in \mathbb{R}_+$ with $L^2$ large enough completes the bound.

In the statement of the proposition below, I denote the maximum element in the spectrum of $L^{(R)}$ by $\Sigma\big(L^{(R)}\big)$.
1. The operator $L^{(R)}$ has compact resolvent.
2. The eigenvalues for $L^{(R)}$ are negative. When $R$ is increasing over the interval $[0, \infty)$ and not strictly constant, then the largest eigenvalue $\Sigma\big(L^{(R)}\big)$ lies in the interval $\big({-\tfrac{1}{2}}, 0\big)$.
3. The eigenvalue $\Sigma\big(L^{(R)}\big)$ is non-degenerate, and the phase of the corresponding eigenfunction $\varphi$ can be chosen so that the following properties hold:
• The values $\varphi(x)$ are strictly positive for all $x \in \mathbb{R}$, and $\varphi(x) = \varphi(-x)$.
• The second derivative of $\varphi$ is contained in $L^2\big(\mathbb{R}, w(x)dx\big)$, and $R(x)\frac{d^2\varphi}{dx^2}(x)$ is continuous.
• The function $\varphi$ is strictly decreasing over the interval $(0, \infty)$.
• The function $\varphi$ has a unique inflection point $c > 0$ over the interval $(0, \infty)$.
4. The following equality holds for any $b \in \mathbb{R}$:

Proof. Part (1): Notice that $\ell_+$, $\ell_-$ are the fundamental solutions to the differential equation $L^{(R)} \ell = 0$. Also, define the functions $c_\pm : \mathbb{R} \to \mathbb{R}$. By the standard technique of pasting together the fundamental solutions, the Green function $G : \mathbb{R}^2 \to \mathbb{R}$ satisfying $-\big(L^{(R)}\big)^{-1} f(x) = \int_{\mathbb{R}} dz\, G(x, z) f(z)$ can be written in the form (3.8). There is a canonical isometry from $L^2\big(\mathbb{R}, w(x)dx\big)$ to $L^2(\mathbb{R})$ given by the map sending $f(x)$ to $w(x)^{1/2} f(x)$, which yields the Hilbert–Schmidt norm through the standard formula $\big\|(L^{(R)})^{-1}\big\|_{HS}^2 = \int_{\mathbb{R}^2} dx\, dz\, G(x, z)^2$. The quantity $\int_{\mathbb{R}^2} dx\, dz\, G(x, z)^2$ is finite given the form (3.8), and hence $L^{(R)}$ has compact resolvent.
Part (2): Note that $\varphi \in L^2\big(\mathbb{R}, w(x)dx\big)$ ensures that $\|\varphi\|_1$ is finite. However, I have the equality (3.9), whose right side happens to be non-negative for $R$ symmetric and increasing over $\mathbb{R}_+$. For $R$ symmetric, increasing over $\mathbb{R}_+$, and not constant, there must be some regions of $z \in \mathbb{R}$ such that (3.10) is strictly greater than $2$. Thus (3.9) must be $> 2$, and the largest eigenvalue for $L^{(R)}$ is in the interval $\big({-\tfrac{1}{2}}, 0\big)$.

Part (3): As remarked in Part (2), the eigenfunction $\varphi(x)$ with leading eigenvalue $E := \Sigma\big(L^{(R)}\big) < 0$ must be strictly positive for all $x \in \mathbb{R}_+$. The function $\varphi(x)$ is symmetric around the origin since $R(x)$ is symmetric; otherwise $\varphi(-x)$ would be a second eigenfunction for the non-degenerate eigenvalue $\Sigma\big(L^{(R)}\big)$. By Lem. 3.1, $\frac{d^2}{dx^2}$ is relatively bounded with respect to $L^{(R)}$, and thus the eigenfunctions of $L^{(R)}$ lie in the domain of $\frac{d^2}{dx^2}$. The continuity of $R(x)\frac{d^2\varphi}{dx^2}(x)$ follows from the eigenvalue equation (3.11).

Since $R(x)\frac{d^2\varphi}{dx^2}(x)$ is continuous and $R(x) \ge r_1$ is bounded away from zero, an inflection point $u$ for $\varphi$ must satisfy $\frac{d^2\varphi}{dx^2}(u) = \lim_{x \to u} \frac{d^2\varphi}{dx^2}(x) = 0$. From (3.11) we can see that the second derivative $\frac{d^2\varphi}{dx^2}(x)$ is negative in a region around the origin $|x| < c$, where $c > 0$ denotes the inflection point closest to the origin over the interval $(0, \infty)$. An inflection point must exist since $\varphi(x)$ is positive, continuously differentiable, and decaying at infinity. By the symmetry of $\varphi$, $\frac{d\varphi}{dx}(x)$ is zero for $x = 0$. Since the derivative of $\frac{d\varphi}{dx}(x)$ is negative over the interval $(0, c)$, we must have that $\frac{d\varphi}{dx}(x)$ is negative over the interval $(0, c]$. Suppose, to reach a contradiction, that there is some point $u \in (c, \infty)$ such that either $\frac{d\varphi}{dx}(u) = 0$ or $\frac{d^2\varphi}{dx^2}(u) = 0$, as in (3.12); I will let $u$ denote the smallest such value. Notice that I cannot have both $\frac{d\varphi}{dx}(u) = 0$ and $\frac{d^2\varphi}{dx^2}(u) = 0$, since the term $-E\varphi(x)$ in (3.11) is strictly positive. For the cases (3.12), the following reasoning applies:

(i). If $\frac{d\varphi}{dx}(u) = 0$, then the continuous function $\frac{1}{2} R(x) \frac{d^2\varphi}{dx^2}(x)$ must be positive over the interval $[c, u]$. This, however, contradicts equation (3.11) for $x = u$, since the terms on the right side of (3.11) are both positive.

(ii). If $\frac{d^2\varphi}{dx^2}(u) = 0$, then $\frac{d\varphi}{dx}(x)$ must be negative over the interval $[c, u]$. A linear approximation of equation (3.11) about the point $u$ yields (3.13). Since $\frac{d\varphi}{dx}(u)$ and $E$ are negative, it follows from (3.13) that $u$ must be an inflection point at which the concavity changes from down to up. However, by my definitions, $\varphi(x)$ is concave up over the interval $(c, u)$, which brings me to a contradiction.

It follows that $\varphi(x)$ is strictly decreasing over $(0, \infty)$ and has exactly one inflection point over that interval.
Part (4): Using the backward representation of the dynamics, I have the corresponding equality, where by assumption $f$ and $\frac{d^2}{dx^2} f$ belong to $L^2\big(\mathbb{R}, w(x)dx\big)$. The function $e^{s L^{(R)}} f$ can be written as
$$e^{s L^{(R)}} f \;=\; e^{sE} \langle \varphi | f \rangle_R\, \varphi \;+\; e^{s L^{(R)}} g, \qquad g := f - \langle \varphi | f \rangle_R\, \varphi,$$
where $\varphi$ is the normalized eigenfunction for $L^{(R)}$ corresponding to the leading eigenvalue $E := \Sigma\big(L^{(R)}\big)$, as before.
Let $E_1$ be the largest eigenvalue following $E$. I will show that $e^{s L^{(R)}} g$ decays uniformly with exponential rate $-E_1$ as $s \to \infty$ over any compact interval $[-L, L]$. I have the inequalities (3.14), beginning with $\big\| e^{s L^{(R)}} g \big\|_{2,R} \le e^{s E_1} \|g\|_{2,R}$, where the second inequality holds for some $C > 0$. The first inequality in (3.14) uses that $g$ lies in the space orthogonal to $\varphi$. For the second inequality in (3.14), recall from Lem. 3.1 that $\frac{d^2}{dx^2}$ and $L^{(R)}$ are mutually relatively bounded, so that I have the first and third inequalities of (3.16) for some constants $c, C > 0$. The second inequality of (3.16) follows since $L^{(R)}$ and $e^{s L^{(R)}}$ commute and $g$, $L^{(R)} g$ are orthogonal to $\varphi$. Next I use (3.14) and (3.15) to bound the supremum of $e^{s L^{(R)}} g$ over $[-L, L]$ in (3.18), whose second inequality is by Jensen's inequality and $R(x) \le r_2$. The last inequality in (3.18) follows from the relation $\big\| \frac{d}{dx} e^{s L^{(R)}} g \big\|_{2,R} \le \frac{r_2}{\sqrt{r_1}} \big\| \frac{d^2}{dx^2} e^{s L^{(R)}} g \big\|_{2,R}$, which can be seen from the equality (3.2). Finally, the last line of (3.18) decays on the order $e^{s E_1}$ by (3.14) and (3.15).
In the special case in which the function $R$ has the form $R(x) = r_1 + (r_2 - r_1)\chi(|x| > c)$ for some $c \in \mathbb{R}_+$, I denote the corresponding generator by $L_c$. Note that any $L^{(R)}$ with $R$ symmetric and increasing from $r_1$ to $r_2$ over the interval $[0, \infty)$ can be formally written as a convex combination of the operators $L_c$. Thus the operators $L_c$ are extremal in the class of operators $L^{(R)}$ corresponding to reasonable strategies $R \in B(\mathbb{R}, [r_1, r_2])$.

Proof. The argument will proceed by examining certain perturbations of the operator $L^{(R)}$. The perturbations that I consider will be of the form $L^{(R + hA)}$ for $h \ll 1$ and a well-chosen bounded measurable function $A : \mathbb{R} \to \mathbb{R}$. By Lem. 3.1 the operator $\frac{d^2}{dx^2}$ is relatively bounded with respect to $L^{(R)}$. It follows that operators of the form $A(x) \frac{d^2}{dx^2}$ are also relatively bounded with respect to $L^{(R)}$ when $A$ is bounded, and we can use standard perturbation theory [3] to characterize the leading eigenvalue of $L^{(R + hA)}$ for small $h$.
By Part (3) of Prop. 3.2, the eigenfunction $\varphi$ with eigenvalue $E := \Sigma\big(L^{(R)}\big)$ must satisfy the sign property (3.19) for some $c > 0$: its second derivative is negative for $|x| < c$ and positive for $|x| > c$. Define $A : \mathbb{R} \to \mathbb{R}$ to be of the form (3.20). Notice that $R(x) + h A(x)$ maps into the interval $[r_1, r_2]$ for every $h \in [0, 1]$. For $h \ll 1$, perturbation theory yields that the leading eigenvalue of $L^{(R + hA)}$ shifts at first order by a positive multiple of $\big\langle \varphi \big| A(x) \frac{d^2}{dx^2} \varphi \big\rangle_R$. Since the values $\varphi(x)$ are strictly positive by Part (3) of Prop. 3.2, the property (3.19) implies that the expression $\big\langle \varphi \big| A(x) \frac{d^2}{dx^2} \varphi \big\rangle_R$ must be strictly positive unless $A(x) = 0$. However, $A(x) = 0$ implies that $L^{(R)} = L_c$.
The above analysis shows that any operator $L^{(R)}$ admits a small perturbation $L^{(R + hA)}$, $h \ll 1$, leading to a higher maximum eigenvalue unless $R(x)$ is of the form $r_1 + (r_2 - r_1)\chi(|x| \ge c)$.

The extremal strategies
By Part (4) of Prop. 3.2, it is sufficient to focus attention on the extremal generators $L_c$.
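As a numerical illustration (not part of the original argument), one can approximate the leading eigenvalue $\Sigma(L_c)$ by finite differences and scan over the cut-off $c$. I assume here the generator form $L_c f = \frac{1}{2} R(x) f'' + \frac{x}{2} f'$ with $R(x) = r_1 + (r_2 - r_1)\chi(|x| > c)$, which is my reading of the elided displays (for constant $R$ it reproduces the eigenvalue $-\frac{1}{2}$ consistent with Part (2) of Prop. 3.2); the values $r_1 = 1$, $r_2 = 4$, the domain, and the grid are hypothetical.

```python
import numpy as np

def leading_eigenvalue(c, r1=1.0, r2=4.0, L=6.0, n=401):
    """Largest eigenvalue of a finite-difference approximation of the
    assumed generator L_c f = 0.5*R(x)*f'' + 0.5*x*f', where
    R(x) = r1 + (r2 - r1)*1{|x| > c}, with Dirichlet conditions at +-L
    (trajectories escape to infinity, so decay is imposed at the edges)."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    R = np.where(np.abs(x) > c, r2, r1)
    xi, Ri = x[1:-1], R[1:-1]          # interior grid nodes
    m = n - 2
    M = np.zeros((m, m))
    idx = np.arange(m)
    M[idx, idx] = -Ri / h**2
    M[idx[1:], idx[:-1]] = 0.5 * Ri[1:] / h**2 - 0.25 * xi[1:] / h
    M[idx[:-1], idx[1:]] = 0.5 * Ri[:-1] / h**2 + 0.25 * xi[:-1] / h
    # the operator is self-adjoint in a weighted space (tridiagonal with
    # positive off-diagonals), so the discrete spectrum is real
    return np.linalg.eigvals(M).real.max()

v_const = leading_eigenvalue(100.0)    # R is identically r1 on the grid
cs = [0.5, 1.0, 1.5, 2.0, 3.0]
vals = [leading_eigenvalue(c) for c in cs]
print(v_const, vals)
```

For a constant rate the computed value should sit near $-\frac{1}{2}$, while interior cut-offs lift the leading eigenvalue into $\big({-\tfrac{1}{2}}, 0\big)$, in line with Part (2) of Prop. 3.2; the location of the best $c$ in the scan plays the role of the cut-off determined by $\kappa(V)$.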
The maximizing value $c \in \mathbb{R}_+$ is unique and given in terms of $\kappa(V)$.

(i). By Part (3) of Prop. 3.3, the values $\varphi_c(x) \in \mathbb{C}$ have a single phase for all $x \in \mathbb{R}$, which can be chosen to be positive. The function $\varphi_c : \mathbb{R} \to \mathbb{R}_+$ satisfies the eigenvalue differential equation $L_c \varphi_c = E_c \varphi_c$. After a substitution in the integrals defining $L^{\pm}_{r_1, E_c}(c)$ and $L^{\pm}_{r_2, E_c}(c)$, the above equations are equivalent to a pair of matching conditions at the cut-off. Finally, these are equivalent to (1.5), since $-\frac{dY_\eta}{dx}(x) = Y_{\eta+1}(x) + x Y_\eta(x)$ for $x = \kappa$ and $x = \kappa V$, and by an identity whose second equality applies integration by parts.
2. In the limit $V \searrow 1$.
3. In the limit $V \nearrow \infty$, see equation (4.4).
Notice that $Z_\eta(x)$ is smooth, symmetric about zero, and decreases monotonically away from zero. From the form (4.5) it is clear that the inflection point $\kappa$ of $Z_\eta$ increases with $\eta$ and approaches $1$ in the limit $\eta \searrow 0$. Hence $\kappa(V)$ is an increasing function with image contained in $(1, \infty)$.