Unbiased estimation of the Hessian for partially observed diffusions

In this article, we consider the development of unbiased estimators of the Hessian of the log-likelihood function, with respect to parameters, for partially observed diffusion processes. These processes arise in numerous applications in which inference requires derivative information, through either the Jacobian or the Hessian matrix. As time-discretizations of diffusions induce a bias, we provide an unbiased estimator of the Hessian. This is based on Girsanov's theorem and the randomization schemes developed by McLeish (2011 Monte Carlo Methods Appl. 17, 301–315 (doi:10.1515/mcma.2011.013)) and Rhee & Glynn (2015 Op. Res. 63, 1026–1043). We demonstrate that our estimator of the Hessian is unbiased and of finite variance. We verify this numerically by comparing the methodology here to that of a particle filtering methodology. We test this on a range of diffusion models, which include different Ornstein–Uhlenbeck processes and the FitzHugh–Nagumo model, arising in neuroscience.


Introduction
In many scientific disciplines, diffusion processes [1] are used to model and describe important phenomena. Particular applications where such processes arise include the biological sciences, finance, signal processing and the atmospheric sciences [2–5]. Mathematically, diffusion processes take the general form
$$ dX_t = a_\theta(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x, \qquad (1.1) $$
where $X_t \in \mathbb{R}^d$, θ ∈ Θ is a parameter, X_0 = x is the initial condition with x given, a : Θ × R^d → R^d denotes the drift term, σ : R^d → R^{d×d} denotes the diffusion coefficient and {W_t}_{t≥0} is a standard d-dimensional Brownian motion. In practice, it is often difficult to have direct access to such continuous processes; instead, one has discrete-time partial observations of the process {X_t}_{t≥0}, denoted Y_{t_1}, …, Y_{t_n}, where 0 < t_1 < … < t_n = T, such that Y_{t_p} ∈ R^{d_y}. Such processes are referred to as partially observed diffusion processes (PODPs), where one is interested in performing inference on the hidden process (1.1) given the observations. In order to do such inference, one must time-discretize the process, which induces a discretization bias. For (1.1), this can arise through common discretization schemes such as the Euler or Milstein scheme [6]. Therefore, an important question, related to inference, is how one can reduce, or remove, the discretization bias. This motivates the development and implementation of unbiased estimators.
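As a concrete illustration of the discretization bias just mentioned, the following sketch simulates (1.1) with an Euler scheme at level l, step size 2^{-l}. The coefficient functions and parameter values are illustrative assumptions, not those used later in the paper; with the noise switched off, the gap between the Euler endpoint and the exact value is the pure time-discretization bias that unbiased estimators are designed to remove.

```python
import numpy as np

def euler_maruyama(a, sigma, theta, x0, T, l, rng):
    """One Euler path of dX_t = a_theta(X_t) dt + sigma(X_t) dW_t
    at discretization level l (step size 2**-l)."""
    dt = 2.0 ** (-l)
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + a(theta, x[k]) * dt + sigma(x[k]) * dW
    return x

# Toy check with an OU drift a_theta(x) = -theta * x and sigma = 0:
# the Euler endpoint should approach the exact value x0 * exp(-theta * T).
rng = np.random.default_rng(0)
a = lambda th, x: -th * x
coarse = euler_maruyama(a, lambda x: 0.0, 0.5, 1.0, 1.0, 2, rng)[-1]
fine = euler_maruyama(a, lambda x: 0.0, 0.5, 1.0, 1.0, 6, rng)[-1]
```

With σ ≡ 0 the recursion is deterministic, so the error at each level is exactly the weak discretization error; it shrinks as l grows but never vanishes at any finite level.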

(a) Motivating example
To help motivate unbiased estimation for PODPs, we provide an interesting application, which we will test in this work. Our model example is the FitzHugh–Nagumo (FHN) model for a neuron, a two-dimensional ODE model arising in neuroscience, describing the action potential generation within a neuronal axon. We consider a stochastic version of it, represented as
$$ dX_t = \begin{pmatrix} \theta_1\big(X^{(1)}_t - (X^{(1)}_t)^3 - X^{(2)}_t\big) \\ \theta_2 X^{(1)}_t - X^{(2)}_t + \theta_3 \end{pmatrix} dt + \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix} dW_t, \qquad X_0 = u_0, \qquad (1.2) $$
where $X^{(1)}_t$ is the membrane potential. There has been some interest in parameter estimation [7–9] related to various variants of the FHN model. This particular model is well known within the community, and commonly acts as a toy problem in the field of mathematical neuroscience. For this reason, we will use this example within our numerical experiments. In figure 1 we provide a simulation of (1.2), for arbitrary choices of θ = (θ1, θ2, θ3), which demonstrates the interesting behaviour and dynamics. In the plot, we have also plotted non-noisy observations of $X^{(1)}_t$.
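For readers who wish to reproduce figure 1 qualitatively, a minimal Euler simulation of (1.2) is sketched below. The parameter values here are arbitrary illustrative choices (not those of our experiments), and the drift follows the form of (1.2).

```python
import numpy as np

def fhn_drift(theta, x):
    """Drift of the stochastic FHN model (1.2)."""
    th1, th2, th3 = theta
    return np.array([th1 * (x[0] - x[0] ** 3 - x[1]),
                     th2 * x[0] - x[1] + th3])

def simulate_fhn(theta, sigma, x0, T, dt, rng):
    """Euler path of (1.2); sigma = (sigma_1, sigma_2) diagonal diffusion."""
    n = int(round(T / dt))
    path = np.empty((n + 1, 2))
    path[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        path[k + 1] = path[k] + fhn_drift(theta, path[k]) * dt + sigma * dW
    return path

rng = np.random.default_rng(1)
path = simulate_fhn((1.0, 1.0, 0.5), np.array([0.2, 0.4]),
                    np.array([0.0, 0.0]), 10.0, 0.01, rng)
```

Plotting `path[:, 0]` against time gives the membrane-potential trace; adding observation noise to it at integer times yields data of the type used in §4.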

(b) Methodology
The unbiased estimation of PODPs has been an important, yet challenging, topic. Some of the original seminal work on this is the idea of exact simulation, proposed in various works [10–12].
The underlying idea behind exact simulation is that, through a particular transformation, one can acquire an unbiased estimator, subject to certain restrictions on the form of the diffusion and its dimension. Since then, there have been a number of extensions aimed at going beyond this, w.r.t. more general multidimensional diffusions and continuous-time dynamics [13,14]. However, attention has recently been paid to unbiased estimation, for Bayesian computation, through the work of Rhee and Glynn [15,16], in which they provide unbiased and finite-variance estimators by introducing randomization. In particular, these methods allow us to unbiasedly estimate an expectation of a functional by randomizing on the level of the time-discretization, in a type of multilevel Monte Carlo (MLMC) approach [17], where there is a coupling between different levels. As a result, this methodology has been considered in the context of both filtering and Bayesian computation [18–21] and gradient estimation [22]. One advantage of this approach is that, with couplings, it is relatively simple to use and implement computationally, while exploiting such methodologies on a range of different model problems or set-ups.
In this work, we are interested in developing an unbiased estimator of the Hessian for PODPs. Typically, the Hessian is not required for PODPs, but it is currently of interest as state-of-the-art stochastic gradient methodologies exploit Hessian information for an improved speed of convergence. Methods using this information include Newton-type methods [23,24], which have improved rates of convergence over first-order stochastic gradient methods. It is also well known that such methods are typically biased, as one does not fully compute the whole Hessian, due to the computational burden. This provides our motivation: firstly for developing an unbiased estimator, and secondly for doing so for the Hessian. In order to develop an unbiased estimator, our methodology will largely follow that described in [22], extending it from the score function to the Hessian. Other works that consider unbiased estimation of the gradient include [25,26]. In particular, we will exploit the use of the conditional particle filter (CPF), first considered by Andrieu et al. [27,28]. We provide an expression for the Hessian of the likelihood, while introducing an Euler time-discretization of the diffusion process in order to implement our unbiased estimator. We then describe how one can attain unbiased estimators, which is based on various couplings of the CPF. From this, we compare this methodology, for the Hessian computation, with that based on the methods of [18,29], where we demonstrate the unbiasedness of the estimator through both the variance and the bias. This will be conducted on both a single and a multidimensional Ornstein–Uhlenbeck (OU) process, as well as a more complicated form of the FHN model. We remark that our estimator of the Hessian is unbiased; if the inverse Hessian is required, it is possible to adapt the forthcoming methodology to that context as well.

(c) Outline
In §2, we present our setting for our diffusion process. We also present a derived expression for the Hessian, with an appropriate time-discretization. Then, in §3, we describe in detail our algorithm for the unbiased estimator of the Hessian. This will be primarily based on a coupling of a coupled CPF. This leads to §4, where we present our numerical experiments, which provide variance and bias plots. We compare the methodology of this work with that of the particle filter. This comparison will be tested on a range of diffusion processes, which include OU processes and the FHN model. We summarize our findings in §5.

Model
In this section, we introduce our setting and notation regarding our partially observed diffusions. This will include a number of assumptions. We will then provide an expression for the Hessian of the likelihood function, with a time-discretization based on the Euler scheme. This will include a discussion of the stochastic model, where we define the marginal likelihood. Finally, we present a result showing that the discretized Hessian converges to its continuous counterpart in the limit of the discretization level.

(a) Notation
Let (X, X) be a measurable space. For ϕ : X → R, we write B_b(X) for the collection of bounded measurable functions, C^j(X) for the collection of j-times (j ∈ N) continuously differentiable functions, and we omit the index j if the functions are simply continuous. If ϕ : X → R satisfies |ϕ(x) − ϕ(y)| ≤ C‖x − y‖₂ for some C < ∞ and all x, y ∈ X, we write ϕ ∈ Lip_{‖·‖₂}(X), and we write ‖ϕ‖_Lip for the Lipschitz constant of a function ϕ ∈ Lip_{‖·‖₂}(R^d). For ϕ ∈ B_b(X), we write the supremum norm ‖ϕ‖ = sup_{x∈X} |ϕ(x)|. P(X) denotes the collection of probability measures on (X, X). For a measure μ on (X, X) and ϕ ∈ B_b(X), the notation μ(ϕ) = ∫_X ϕ(x) μ(dx) is used. B(R^d) denotes the Borel sets on R^d. dx is used to denote the Lebesgue measure. Let K : X × X → [0, ∞) be a non-negative operator and μ be a measure; then we use the notations μK(dy) = ∫_X μ(dx) K(x, dy) and, for ϕ ∈ B_b(X), K(ϕ)(x) = ∫_X ϕ(y) K(x, dy). For A ∈ X, the indicator is written I_A(x). U_A denotes the uniform distribution on the set A. N_s(μ, Σ) (resp. ψ_s(x; μ, Σ)) denotes an s-dimensional Gaussian distribution (density evaluated at x ∈ R^s) of mean μ and covariance Σ. If s = 1, we omit the subscript s. For a vector/matrix X, X* is used to denote the transpose of X. For A ∈ X, δ_A(du) denotes the Dirac measure of A, and if A = {x} with x ∈ X, we write δ_x(du). For a vector-valued function in d dimensions (resp. d-dimensional vector), ϕ(x) (resp. x) say, we write the i-th component (i ∈ {1, …, d}) as ϕ(x)^{(i)} (resp. x^{(i)}). For a d × q matrix x, we write the (i, j)-th entry as x^{(ij)}. For μ ∈ P(X) and X a random variable on X with distribution associated with μ, we use the notation X ∼ μ(·).

(b) Diffusion process
Let θ ∈ Θ ⊆ R^{d_θ} be fixed and consider a diffusion process, of the form (1.1), on a probability space supporting the standard d-dimensional Brownian motion {W_t}_{t≥0}. Furthermore, we make the following additional assumption, termed (D1).
(ii) Globally Lipschitz: for any θ ∈ Θ, there exists a positive constant C < ∞ such that ‖a_θ(x) − a_θ(x′)‖ + ‖σ(x) − σ(x′)‖ ≤ C‖x − x′‖ for all (x, x′) ∈ R^d × R^d.

(d) Stochastic model

The observations Y_{t_1}, …, Y_{t_n} are assumed to have a joint conditional Lebesgue density given the path of (1.1). Note that the framework to be investigated in this article is not restricted to this special case, but we shall focus on it for the rest of the paper. To clarify, ϕ_θ(x_{t_1}, …, x_{t_n}) = ∏_{p=1}^{n} g_θ(y_{t_p} | x_{t_p}) from herein.
In reference to (2.3) and (2.4), these provide, respectively, the score and Hessian identities that will be used in the sequel.

(e) Time-discretization
From herein, we take the simplification that t_p = p, p ∈ {1, …, n}, T = n. Let l ∈ N₀ be a given level of discretization, and consider the Euler discretization of (1.1) with step size Δ_l = 2^{−l}; write G^l_θ for the resulting discretized approximation of the quantity of interest G_θ, so that lim_{l→∞} G^l_θ = G_θ. If one can construct a sequence of independent random variables (Ξ_l)_{l∈N₀} with E[Ξ_0] = G^0_θ and E[Ξ_l] = G^l_θ − G^{l−1}_θ for l ≥ 1, and if one samples L from a positive probability mass function P_L independently of the sequence (Ξ_l)_{l∈N₀}, then by e.g. ([17], theorem 5) the single-term estimate Ξ_L / P_L(L) is, under suitable summability conditions, an unbiased and finite-variance estimator of G_θ. Before describing our approach in fuller detail, which requires numerous techniques and methodologies, we first present our main result, which is an unbiasedness theorem for our estimator of the Hessian. This is given in the following proposition.
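The single-term construction just described can be sketched generically. In the snippet below, `xi_sampler` is a user-supplied simulator of the increments Ξ_l; to check unbiasedness, we use a deterministic toy sequence G^l = 1 − 2^{−l} (an assumption for illustration only, not a diffusion quantity) and a truncated P_L.

```python
import numpy as np

def single_term(xi_sampler, P_L, rng):
    """One draw of Xi_L / P_L(L), with L ~ P_L (Rhee & Glynn single-term)."""
    L = rng.choice(len(P_L), p=P_L)
    return xi_sampler(L, rng) / P_L[L]

# Toy increments: Xi_0 = G^0 = 0 and Xi_l = G^l - G^{l-1} = 2**-l, so with
# P_L truncated at K the estimator targets G^K = 1 - 2**-K exactly.
K = 6
w = 2.0 ** (-np.arange(K + 1))
P_L = w / w.sum()
xi = lambda l, rng: 0.0 if l == 0 else 2.0 ** (-l)
rng = np.random.default_rng(1)
est = np.mean([single_term(xi, P_L, rng) for _ in range(20000)])
```

Averaging many independent draws recovers the target; in the paper's setting, the increments themselves are random (built from coupled particle filters) and the estimator targets the untruncated limit.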
Proof. This is the same as ([22], theorem 2), except that one must repeat the arguments of that paper, given lemma A.3 in the appendix and the rate in the proof of proposition 2.1. Since the arguments and calculations are almost identical, they are omitted in their entirety.
The main point is that the choice of P_L is as in [22], which is: in the case that σ is constant, P_L(l) ∝ Δ_l (l + 1) log₂(2 + l)², and in the non-constant case, P_L(l) ∝ Δ_l^{1/2} (l + 1) log₂(2 + l)². Both choices achieve finite variance, with costs to achieve an error of O(ε) with high probability as in ([16], propositions 4 and 5).
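In code, a truncated version of these two choices might look as follows; the truncation level `L_max` is an implementation convenience for the sketch, not part of the theory.

```python
import numpy as np

def p_l_weights(L_max, sigma_constant):
    """Normalized P_L(l) on {0, ..., L_max}, with Delta_l = 2**-l."""
    l = np.arange(L_max + 1)
    delta = 2.0 ** (-l)
    scale = delta if sigma_constant else np.sqrt(delta)
    w = scale * (l + 1) * np.log2(2 + l) ** 2
    return w / w.sum()

P = p_l_weights(8, sigma_constant=True)
# Simulating at level l costs O(Delta_l^{-1}) = O(2**l) per sample.
expected_cost = np.sum(P * 2.0 ** np.arange(9))
```

Note that the per-sample cost term P_L(l) 2^l does not decay in l for the constant-σ choice, so the expected cost diverges as the truncation is removed; this is consistent with the "with high probability" (rather than in-expectation) cost statements of [16] quoted above.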
The main issue is to construct the sequence of independent random variables (Ξ^l_θ)_{l∈N₀} such that (3.1) and (3.2) hold, and such that the expected computational cost of doing so is not unreasonable as a function of l; a method for achieving this is given in [22], as we now describe.
The computation of Ξ^0_θ is performed by using exactly the coupled conditional particle filter (CCPF) that was introduced in [30]. This is an algorithm which allows one to construct a random variable π̂^0_θ(G^0_θ) such that E_θ[π̂^0_θ(G^0_θ)] = π^0_θ(G^0_θ), and we will set Ξ^0_θ = π̂^0_θ(G^0_θ). We begin by introducing the Markov kernel C^l : Θ × X_l → P(X_l) in algorithm 1. To that end, we will use the notation x^{i,l}_{Δ_l:k} = (x^{i,l}_{Δ_l}, x^{i,l}_{2Δ_l}, …, x^{i,l}_k) ∈ (R^d)^{kΔ_l^{−1}}, where l ∈ N₀ is the level of discretization, i ∈ {1, …, N} is a particle (sample) indicator and k ∈ {1, …, n} is a time parameter. The kernel described in algorithm 1 is called the CPF, as developed in [27], and allows one to generate, under minor conditions, an ergodic Markov chain with invariant measure π^l_θ. By itself, it does not provide unbiased estimates of expectations w.r.t. π^l_θ, unless π^l_θ is the initial distribution of the Markov chain. However, the kernel will be of use in our subsequent discussion.
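To make the structure of the CPF concrete, here is a stripped-down sweep for a scalar state-space model. The functions `propagate` and `logw` are hypothetical stand-ins for the Euler dynamics kernel and the observation density (the intermediate Euler time grid is omitted for brevity); this is a sketch of the general mechanism, not a transcription of algorithm 1.

```python
import numpy as np

def cpf_kernel(x_ref, propagate, logw, y, N, rng):
    """One CPF sweep: the reference path x_ref occupies slot N-1,
    survives every resampling step, and a freshly sampled path is returned."""
    n = len(y)
    X = np.empty((n, N))
    A = np.zeros((n, N), dtype=int)             # ancestor indices
    X[0, :N - 1] = propagate(None, N - 1, rng)  # initial draws
    X[0, N - 1] = x_ref[0]                      # frozen reference
    for k in range(1, n):
        w = np.exp(logw(y[k - 1], X[k - 1]))
        w /= w.sum()
        A[k, :N - 1] = rng.choice(N, size=N - 1, p=w)
        A[k, N - 1] = N - 1                     # reference keeps its own ancestor
        X[k, :N - 1] = propagate(X[k - 1, A[k, :N - 1]], N - 1, rng)
        X[k, N - 1] = x_ref[k]
    w = np.exp(logw(y[n - 1], X[n - 1]))
    w /= w.sum()
    b = rng.choice(N, p=w)                      # sample a terminal index
    traj = np.empty(n)
    for k in range(n - 1, -1, -1):              # trace the ancestral line back
        traj[k] = X[k, b]
        b = A[k, b]
    return traj

# Hypothetical AR(1)-type dynamics standing in for the Euler kernel:
prop = lambda xp, m, rng: (rng.normal(0.0, 1.0, m) if xp is None
                           else 0.7 * xp + rng.normal(0.0, 0.5, m))
lw = lambda yk, x: -0.5 * (yk - x) ** 2
rng = np.random.default_rng(2)
y = np.sin(np.arange(20) / 3.0)
new_path = cpf_kernel(np.zeros(20), prop, lw, y, 100, rng)
```

Iterating `cpf_kernel`, feeding each output back in as the next reference path, produces the ergodic Markov chain referred to above.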
Our approach generates a Markov chain {Z_m}_{m∈N₀} on the space Z₀ := X₀ × X₀, Z_m ∈ Z₀. In order to describe how one can simulate this Markov chain, we introduce several objects which will be needed. The first of these is the kernel p̄^l : Θ × R^{2d} → P(R^{2d}), which we need in the case l = 0 and whose simulation is described in algorithm 2. This Markov kernel is used to simulate intermediate points from x_{k−1} to the next observation at time k, with time step Δ_l. We will also need to simulate the maximal coupling of two probability mass functions on {1, …, N}, for some N ∈ N, and this is described in algorithm 3.

Remark 3.2.
Step 4 of algorithm 3 can be modified to the case where one generates the pair (i, j) ∈ {1, . . . , N} 2 from any coupling of the two probability mass functions (r 4 , r 5 ). In our simulations in §4, we will do this by sampling by inversion from (r 4 , r 5 ), using the same uniform random variable. However, to simplify the mathematical analysis that we will give in the appendix, we consider exactly algorithm 3 in our calculations.
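A sketch of the maximal-coupling sampler via the standard common-component decomposition is given below; it illustrates the construction underlying algorithm 3 rather than transcribing the algorithm box itself.

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Return (i, j) with i ~ p, j ~ q and P(i = j) = sum(min(p, q))."""
    m = np.minimum(p, q)
    alpha = m.sum()                       # overlap mass
    if rng.uniform() < alpha:             # draw from the common component
        i = rng.choice(len(p), p=m / alpha)
        return i, i
    # otherwise draw independently from the normalized residuals
    i = rng.choice(len(p), p=(p - m) / (1.0 - alpha))
    j = rng.choice(len(q), p=(q - m) / (1.0 - alpha))
    return i, j

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
draws = [maximal_coupling(p, q, rng) for _ in range(20000)]
meet_freq = np.mean([i == j for i, j in draws])
```

By contrast, sampling both indices by inversion from a single shared uniform, as done in the simulations of §4, also gives the correct marginals but generally a smaller meeting probability than the maximal value Σ_i min(p_i, q_i).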
To describe the CCPF kernel, we must first introduce a driving CCPF, which is presented in algorithm 4. The driving CCPF is nothing more than an ordinary coupled particle filter, except that the final pair of trajectories is 'frozen' as given in the algorithm (that is, (x_{1:n}, x̄_{1:n}) as in step 1 of algorithm 4) and allowed to interact with the rest of the particle system. Given the ingredients in algorithms 2–4, we are now in a position to describe the CCPF kernel, which is a Markov kernel K⁰ : Θ × Z₀ → P(Z₀), whose simulation is presented in algorithm 5. We will consider the Markov chain {Z_m}_{m∈N₀}, Z_m = (X_{1:n}(m), X̄_{1:n}(m)), generated by the CCPF kernel in algorithm 5 with a suitable initial distribution. We remark that in algorithm 5, marginally, x^i_{1:n} (resp. x̄^j_{1:n}) has been generated according to C⁰_θ(x_{1:n}, ·) (resp. C⁰_θ(x̄_{1:n}, ·)). A rather important point is that if the two input trajectories in step 1 of algorithm 5 are equal, i.e. x_{1:n} = x̄_{1:n}, then the output trajectories will also be equal. To that end, let τ denote the first iteration at which the two trajectories of the chain coincide. Then, setting m* ∈ {2, 3, …}, one has the following estimator:
$$ \hat{\pi}^0_\theta(G^0_\theta) = G^0_\theta(X_{1:n}(m^*)) + \sum_{m=m^*+1}^{\tau-1} \{G^0_\theta(X_{1:n}(m)) - G^0_\theta(\bar{X}_{1:n}(m))\}, \qquad (3.5) $$
and one sets Ξ^0_θ = π̂^0_θ(G^0_θ). The procedure for computing Ξ^0_θ is summarized in algorithm 6.
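Coupled-chain estimators of the type (3.5) can be illustrated on a toy discrete-state Markov chain whose stationary distribution π is known exactly. Everything below is illustrative: the coupling is a maximal coupling of transition rows standing in for the CCPF coupling, and the chains follow the standard lag-one construction (the second chain trails the first by one step).

```python
import numpy as np

def max_couple(p, q, rng):
    """Maximal coupling of two pmfs (keeps equal inputs equal)."""
    m = np.minimum(p, q)
    a = m.sum()
    if rng.uniform() < a:
        i = rng.choice(len(p), p=m / a)
        return i, i
    i = rng.choice(len(p), p=(p - m) / (1.0 - a))
    j = rng.choice(len(q), p=(q - m) / (1.0 - a))
    return i, j

def unbiased_estimate(P, h, pi0, k, rng):
    """One replicate of a coupled-chain estimator of pi(h), cf. (3.5)."""
    x = rng.choice(len(pi0), p=pi0)           # X_0
    xbar = rng.choice(len(pi0), p=pi0)        # Xbar_0, independent start
    X = [x, rng.choice(len(pi0), p=P[x])]     # advance X one extra step
    Xbar = [xbar]
    m = 1
    while not (X[m] == Xbar[m - 1] and m > k):
        xi, xj = max_couple(P[X[m]], P[Xbar[m - 1]], rng)
        X.append(xi)
        Xbar.append(xj)
        m += 1
    H = h(X[k])                               # burn-in point m* = k
    for l in range(k + 1, m):                 # bias-correction terms
        H += h(X[l]) - h(Xbar[l - 1])
    return H

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # stationary pi = (2/3, 1/3)
pi0 = np.array([0.5, 0.5])
rng = np.random.default_rng(4)
vals = [unbiased_estimate(P, float, pi0, 3, rng) for _ in range(5000)]
est = np.mean(vals)
```

After the chains meet, the maximal coupling keeps them together, so the correction terms vanish and the estimator is computable in finite time; its average targets π(h) = 1/3 here without any burn-in bias.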
(c) Computing (Ξ^l_θ)_{l∈N}

We are now concerned with the task of computing (Ξ^l_θ)_{l∈N} such that (3.1)–(3.2) are satisfied. Throughout this section, l ∈ N is fixed. We will generate a Markov chain {Ž^l_m}_{m∈N₀} on the space Z_l × Z_{l−1}. In order to construct our Markov chain kernel, as in the previous section, we will need to provide some algorithms. We begin with the Markov kernel p̌^l, which will be needed and whose simulation is described in algorithm 7. We will also need to sample a coupling of four probability mass functions on {1, …, N}, and this is presented in algorithm 8. To continue onwards, we will consider a generalization of the driving CCPF of algorithm 4; the driving CCPF at level l is described in algorithm 10. Now, given algorithms 7–10, we are in a position to give our Markov kernel Ǩ^l : Θ × Z_l × Z_{l−1} → P(Z_l × Z_{l−1}), which we shall call the coupled-CCPF (C-CCPF); it is given in algorithm 11. To assist the subsequent discussion, we will also introduce the marginal Markov kernel Č^l : Θ × X_l × X_{l−1} → P(X_l × X_{l−1}), described in algorithm 9, which simulates the CCPF at the two different levels l, l − 1 and which we will use in the initialization of our Markov chain, to be described below.
We will consider the Markov chain {Ž^l_m}_{m∈N₀}, with Ž^l_m = ((X^l_{Δ_l:n}(m), X̄^l_{Δ_l:n}(m)), (X^{l−1}_{Δ_{l−1}:n}(m), X̄^{l−1}_{Δ_{l−1}:n}(m))), generated by the C-CCPF kernel in algorithm 11 with a suitable initial distribution. An important point, as in the case of algorithm 5, is that if the two input trajectories in step 1 of algorithm 11 are equal, i.e. x^l_{Δ_l:n} = x̄^l_{Δ_l:n} or x^{l−1}_{Δ_{l−1}:n} = x̄^{l−1}_{Δ_{l−1}:n}, then the associated output trajectories will also be equal. As before, we define the stopping times associated with the given Markov chain, and our estimator Ξ^l_θ for each l ∈ N is then constructed, analogously to (3.5), from the coupled-chain estimators at levels l and l − 1; this is (3.9). The algorithm and the various settings, as well as enhanced estimators, are described and investigated in detail in [22]. We do not discuss the methodology further in this work.

Remark 3.3. As we will see in the succeeding section, we will compare our methodology, which is based on the C-CCPF, to another methodology, namely the PF within particle Markov chain Monte Carlo; specifically, a particle marginal Metropolis–Hastings algorithm. We omit a description of the latter, as we only use it as a comparison, but we refer the reader to [18] for a more concrete description. However, we emphasize that it is only asymptotically unbiased, in relation to the Hessian identity (2.4).

Remark 3.4.
It is important to emphasize that when the inverse Hessian is required, as it is for Newton methodologies, we can debias both the C-CCPF and the PF. This can be achieved by using the same techniques which are presented in the work of Jasra et al. [21].

Numerical experiments
In this section, we demonstrate, through various experiments, that our estimate of the Hessian is unbiased. We consider testing this through the study of the variance, the bias and the mean square error, while also providing plots related to Newton-type learning. Our results will be demonstrated on three underlying diffusion processes: a univariate OU process, a multivariate OU process and the FHN model. We compare our methodology to that using the PF, instead of the C-CCPF, within our unbiased estimator.

(a) Ornstein-Uhlenbeck process
Our first set of numerical experiments will be conducted on a univariate OU process, which takes the form dX_t = −θ₁ X_t dt + σ dW_t with X_0 = x_0, where x_0 ∈ R⁺ is our initial condition, θ₁ ∈ R is a parameter of interest and σ ∈ R is the diffusion coefficient. For our discrete observations, we assume that we have Gaussian measurement errors, Y_t | X_t ∼ g_θ(· | X_t) = N(X_t, θ₂) for t ∈ {1, …, T} and for some θ₂ ∈ R⁺. Our observations will be generated with parameter choices θ = (θ₁, θ₂) = (0.46, 0.38), x_0 = 1 and T = 500. Throughout the simulation, one observation sequence {Y_1, Y_2, …, Y_T} is used. The true distribution of the observations can be computed analytically; therefore, the Hessian is known. In figure 2, we present the surface plots comparing the true Hessian with the estimated Hessian, obtained by the Rhee & Glynn estimator (3.9) truncated at discretization level L = 8; this is done by letting P_L(l) ∝ Δ_l I(l ≤ L). We use M = 10⁴ to obtain the estimated Hessian surface plot. Both surface plots are evaluated at θ₁, θ₂ ∈ {0.2, 0.3, 0.4, …, 1.0}. In figure 3, we test the convergence of the bias of the Hessian estimate (3.9) with respect to its truncated discretization level. This essentially tests the result in lemma A.2. We use L ∈ {2, 3, 4, 5, 6, 7} and plot the bias against Δ_L. The bias is obtained by using M = 10⁴ i.i.d. samples and taking the entry-wise difference with the true Hessian. Note that the Hessian estimate here is evaluated at the true parameter choice. As the parameter θ is two-dimensional, we present four log-log plots, where the rate represents the fitted slope of log-scaled bias against log-scaled Δ_L. We observe that the Hessian estimate bias is of order Δ_L^α, where α ∈ {0.9629, 0.7536, 0.7361, 0.8949} respectively for the four entries, which verifies our result in lemma A.2.
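The data-generating step for this experiment can be sketched exactly (with no Euler error), using the known OU transition. The diffusion coefficient value σ = 1 below is an assumption for illustration only, as the text does not fix it; θ₁, θ₂, x₀ and T follow the values above.

```python
import numpy as np

def simulate_ou_data(theta1, theta2, sigma, x0, T, rng):
    """Exact unit-time transitions of dX_t = -theta1 X_t dt + sigma dW_t,
    plus Gaussian observations Y_t ~ N(X_t, theta2)."""
    phi = np.exp(-theta1)                                   # AR(1) coefficient
    var = sigma ** 2 * (1.0 - np.exp(-2.0 * theta1)) / (2.0 * theta1)
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(1, T + 1):
        x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(var))
    y = x[1:] + rng.normal(0.0, np.sqrt(theta2), size=T)
    return x, y

rng = np.random.default_rng(5)
x, y = simulate_ou_data(0.46, 0.38, 1.0, 1.0, 500, rng)
```

Since the model is linear-Gaussian, the exact likelihood (and hence the true Hessian used as the reference in figures 2 and 3) is available via the Kalman filter.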
We also compare the wall-clock time cost of obtaining one realization of the Hessian estimate (3.9) with that of obtaining one realization of the score estimate (see [22]), both truncated at the same discretization levels L ∈ {2, 3, 4, 5, 6, 7}; here M = 100. The comparison is provided in figure 4a.
We observe that the cost of obtaining the Hessian estimate is on average three times more expensive than obtaining a score estimate. The reason for this is that we need to simulate three CCPF paths in order to obtain one summand in the Hessian estimate, while to estimate the score function we need only one path. We also record the fitted slope of log-scaled cost against log-scaled Δ_L for both estimates. To verify the rate obtained in lemma A.3, we compare the variance of the Hessian incremental estimate with respect to the discretization level L ∈ {1, 2, 3, 4, 5, 6}. The incremental variance is approximated with the sample variance over 10³ repetitions; we sum over all 2 × 2 entries and present the log-log plot of the summed variance against Δ_L on the right of figure 4. We observe that the incremental variance is of order Δ_L^{1.15} for the OU process model. This verifies the result obtained in lemma A.3. It is known that, when truncated, the Rhee & Glynn method essentially serves as a variance reduction method. As a result, compared with the discretized Hessian estimate (2.6), the truncated Hessian estimate (3.9) will require less cost to achieve the same MSE target.
We present in figure 5a the log-log plot of cost against MSE for the discretized Hessian estimate (2.6) and the Rhee & Glynn (R&G) Hessian estimate (3.9). We observe that (3.9) requires a much lower cost for a given MSE target compared with (2.6). For (2.6), the cost is proportional to O(ε^{−2.974}) for an MSE target of order O(ε²), while for (3.9) the cost is proportional to O(ε^{−2.428}). The average cost ratio between the two estimators under the same MSE target is 5.605. In figure 5b, we present the log-log plot of cost against MSE for (3.9) and the Hessian estimate obtained by the PF method. We observe that, under a similar MSE target, the latter method on average has a cost 5.054 times less than (3.9). In figure 6, we present the convergence plots for the stochastic gradient descent (SGD) method with the score estimate and the Newton method with the score and Hessian estimates. For both methods, the parameter is initialized at (0.1, 0.1), and the learning rate for the SGD method is set to 0.002. Our conclusion from figure 5a is that the methodology using R&G has a better rate relating the MSE to the computational cost, which favours our methodology. For figure 5b, we note that the methodology exploiting the PF and our methodology, i.e. 'R&G Hessian', result in the same rate. As the former is known to be unbiased, this comparison indicates that our methodology is also unbiased, despite being more expensive by a factor of 5.054. For figure 6, we observe, as expected, that the Newton method requires fewer iterations (five, compared with 132 for SGD) to reach the true parameters of interest, i.e. θ₁ and θ₂.
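The iteration gap seen in figure 6 is the classical second-order effect. A deterministic toy comparison on a quadratic objective (purely illustrative, with no Monte Carlo noise) makes the point:

```python
import numpy as np

# Quadratic objective f(t) = 0.5 t'At - b't, minimized at A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda t: A @ t - b
opt = np.linalg.solve(A, b)

# Newton: one step from the origin lands exactly on the optimum,
# since the Hessian of a quadratic is constant.
t_newton = np.zeros(2) - np.linalg.solve(A, grad(np.zeros(2)))

# Gradient descent with a fixed learning rate needs many steps,
# contracting only by the worst-case factor per iteration.
t_gd = np.zeros(2)
gd_iters = 0
while np.linalg.norm(t_gd - opt) > 1e-8:
    t_gd = t_gd - 0.1 * grad(t_gd)
    gd_iters += 1
```

In the stochastic setting of figure 6, the same contrast holds on average, with the added caveat that noisy Hessian estimates must be regularized (see the FHN experiment below).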

(b) Multivariate Ornstein-Uhlenbeck process model
Our second model of interest is a two-dimensional OU process, whose drift depends on parameters (θ₁, θ₂, θ₃), where x_0 ∈ R² is the initial condition and (σ₁, σ₂) ∈ R⁺ × R⁺ are the diffusion coefficients. We assume Gaussian measurement errors, Y_t | X_t ∼ g_θ(· | X_t) = N₂(X_t, θ₄ I₂), where I₂ is the two-dimensional identity matrix. We generate one sequence of observations up to time T = 500 with parameter choices θ = (θ₁, θ₂, θ₃, θ₄) = (0.48, 0.78, 0.37, 0.32), σ₁ = 0.8, σ₂ = 0.6, x_0 = (1, 1)*. As before, we study various properties of (3.9) at the true parameter choice. In figure 7, we present the log-log plot of bias against Δ_L for (3.9), where the five points are evaluated with L ∈ {2, 3, 4, 5, 6}. The bias is approximated by the difference between (3.9) and the true Hessian with M = 10⁴; we sum over all entry-wise biases and present the sum on the plot. We observe that the summed bias is of order Δ_L^{0.972}. This verifies the result in lemma A.2. In figure 8a, we present a log-log plot of cost against Δ_L for (3.9) and the R&G score estimate, both with M = 10. We observe that the cost of (3.9) is proportional to Δ_L^{−0.866}. This rate is similar to that of the score estimate; on average, the cost ratio between (3.9) and the score estimate is 3.495.
In figure 8b, we present the log-log plot of the summed incremental variance of the Hessian estimate against Δ_L. We compute the entry-wise sample variance of the incremental Hessian estimate over 10³ repetitions, and plot the summed variance against Δ_L. We observe that the Hessian incremental variance is approximately proportional to Δ_L, and the comparison of cost against MSE in figure 9a verifies the variance reduction effect of the truncated R&G scheme. In figure 9b, we present the log-log plot of the cost against MSE for (3.9) and the Hessian estimate using the PF. We observe that, under a similar MSE target, the latter method on average costs 3.165 times less than (3.9). In figure 10, we present the convergence plots for the SGD and Newton methods. Both the score estimate and the Hessian estimate (3.9) are obtained with M = 2 × 10³, truncated at level L = 8. The learning rate for the SGD is set to 0.005. The training reaches convergence when the relative Euclidean distance between the trained and true θ is no bigger than 0.02. We initialize the training parameter at (0.1, …) and observe that the SGD method requires substantially more iterations, compared with four iterations of the modified Newton method. The actual training time until convergence for the Newton method is roughly 7.6 times faster than for the SGD method.

(c) FitzHugh-Nagumo model
Our next model is based on a two-dimensional ordinary differential equation, arising in neuroscience, known as the FHN model [7,31]. It is concerned with the membrane potential of a neuron and a (latent) recovery variable modelling the ion channel kinetics. We consider a stochastically perturbed extended version, given as
$$ \begin{pmatrix} dX^{(1)}_t \\ dX^{(2)}_t \end{pmatrix} = \begin{pmatrix} \theta_1\big(X^{(1)}_t - (X^{(1)}_t)^3 - X^{(2)}_t\big) \\ \theta_2 X^{(1)}_t - X^{(2)}_t + \theta_3 \end{pmatrix} dt + \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix} dW_t, $$
where (θ₁, θ₂, θ₃, θ₄) ∈ R⁺ × R × R × R⁺, (σ₁, σ₂) ∈ R⁺ × R⁺ are the diffusion coefficients and, as before, {W_t}_{t≥0} is a Brownian motion. For the discrete observations, we assume Gaussian measurement errors, Y_t | X_t ∼ N₂(X_t, θ₄ I₂). We generate one observation sequence with parameter choices θ = (θ₁, θ₂, θ₃, θ₄) = (0.89, 0.98, 0.5, 0.79), σ = (0.2, 0.4). As the true distribution of the observations is not available analytically, we use L = 10 to approximate the ground truth. In figure 11, we compare the bias of (3.9), truncated at discretization level L ∈ {2, 3, 4, 5, 6, 7}, and plot it against Δ_L (log-log plot). The summed bias is obtained by taking the element-wise difference between an average of 10³ i.i.d. realizations of the Hessian estimate and the true Hessian, and then summing over all element-wise differences, with the true Hessian approximated at L = 10. The average cost ratio between (3.9) and the score estimate is 3.495. In figure 12b, we present the log-log plot of the summed incremental variance against Δ_L. In figure 13a, we again present a log-log plot of the cost against the summed MSE of (3.9) over all entries, for both (3.9) and (2.6). We observe that, under an MSE target of ε², (3.9) requires a cost of order O(ε^{−2.482}), while (2.6) requires a cost of order O(ε^{−2.97}). The average cost ratio between the two estimators under the same MSE target is 3.987. This verifies the variance reduction effect of the truncated R&G scheme. Figure 13b shows the log-log plot of cost against summed MSE for (3.9) and the Hessian estimate using the PF. We observe that, under a similar MSE target, the latter method on average costs 4.627 times less than (3.9).
In figure 14, we present the convergence plots of SGD and the modified Newton method. For the modified Newton method, we set all the off-diagonal entries of the Hessian estimate to zero, and add 0.0001 to the diagonal entries to avoid singularity. When the L² norm of the score is smaller than 0.1, we scale the search step by a learning rate of 0.002. Both the score estimate and the Hessian estimate (3.9) are obtained with M = 2 × 10³, truncated at level L = 8. The learning rate for the SGD is set to 0.001. The training reaches convergence when the relative Euclidean distance between the trained and true θ is no bigger than 0.02. We initialize the training parameter at (0.8, 0.8, 0.8, 0.8), and observe that the SGD method reaches convergence with fewer iterations than the stochastic Newton method.
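The safeguards described above (diagonalized Hessian, a small ridge term and a damped step once near the optimum) can be sketched as a single update. The numerical values mirror the text, but the function itself is an illustrative sketch, framed as minimization of the negative log-likelihood with `hess` a positive curvature estimate.

```python
import numpy as np

def modified_newton_step(theta, grad, hess, ridge=1e-4, damp=0.002, tol=0.1):
    """One modified Newton step: diagonalize the Hessian estimate, add a
    ridge to keep it invertible, and damp the step near a stationary point."""
    H = np.diag(np.diag(hess)) + ridge * np.eye(len(theta))
    step = np.linalg.solve(H, grad)
    if np.linalg.norm(grad) < tol:     # damp once the gradient is small
        step = damp * step
    return theta - step

# Sanity check on a diagonal quadratic with minimizer theta_star:
D = np.diag([2.0, 5.0])
theta_star = np.array([1.0, 2.0])
theta0 = np.zeros(2)
theta1 = modified_newton_step(theta0, D @ (theta0 - theta_star), D)
```

Zeroing the off-diagonal entries trades some curvature information for robustness: the step remains a descent direction whenever the diagonal entries are positive, which noisy full-Hessian estimates do not guarantee.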

Summary
In this work, we were interested in developing an unbiased estimator of the Hessian for PODPs. This task is of interest as estimators of the Hessian are typically biased, owing to the computational cost of computing it fully, while Hessian-based methods have improved convergence over those using only the score function. We presented a general expression for the Hessian and proved that, in the limit of the discretization level, it is consistent with the continuous form. We demonstrated that we were able to remove the bias arising from the discretization. This was shown through various numerical experiments, tested on a range of diffusion processes. These highlighted not only the removal of the bias, but also that convergence is improved compared with computing and using only the score function. In terms of research directions beyond what we have done, a first extension would be to more complicated diffusion models, such as those arising in mathematical finance [32,33]; rough volatility models are one such class. Another potential direction would be to consider diffusion bridges, and to analyse how one could adapt the tools developed here to that setting; this has been of interest in recent works such as [34,35]. One could also aim to derive similar results for unbiased estimation using alternative discretization schemes, such as the Milstein scheme. This should result in different rates of convergence; however, the analysis would differ, and one would also require a corresponding newly developed analysis for the score function [22]. Finally, one could aim to apply this methodology in other applications in which Monte Carlo methods are exploited, such as phylogenetics. In particular, as we have tested our methodology on various OU processes, one could test it further on an SDE arising in phylogenetics, as presented and discussed in [36,37].

Appendix

We require the following additional assumption, termed (D2); all derivatives are assumed to be well defined.
- For any (θ, y) … The following result is proved in [22]; it is lemma 1 of that article.