Delayed Acceptance ABC-SMC

Abstract

Approximate Bayesian computation (ABC) is now an established technique for statistical inference used in cases where the likelihood function is computationally expensive or not available. It relies on the use of a model that is specified in the form of a simulator, and approximates the likelihood at a parameter value θ by simulating auxiliary data sets x and evaluating the distance of x from the true data y. However, ABC is not computationally feasible in cases where using the simulator for each θ is very expensive. This article investigates this situation in cases where a cheap, but approximate, simulator is available. The approach is to employ delayed acceptance Markov chain Monte Carlo within an ABC sequential Monte Carlo sampler to, in a first stage of the kernel, use the cheap simulator to rule out parts of the parameter space that are not worth exploring, so that the "true" simulator is only run (in the second stage of the kernel) where there is a reasonable chance of accepting proposed values of θ. We show that this approach can be used quite automatically, with few tuning parameters. Applications to stochastic differential equation models and latent doubly intractable distributions are presented. Supplementary materials for this article are available online.


Motivation
Approximate Bayesian computation (ABC) is a technique for approximate Bayesian inference originally introduced in the population genetics literature (Pritchard et al., 1999; Beaumont et al., 2002), but which is now used for a wide range of applications. It is suitable for situations in which a model l(· | θ) with parameters θ for data y is readily available in the form of a simulator, but where l as a function of θ (known as the likelihood) is intractable in that it cannot be evaluated pointwise. If θ is assigned prior distribution p, the object of inference is the posterior distribution π(θ | y) ∝ p(θ) l(y | θ). ABC yields approximations to the posterior distribution through approximating the likelihood, in the simplest case using

$$\hat{l}(y \mid \theta) = \frac{1}{M} \sum_{m=1}^{M} P\left(y \mid x^{(m)}\right), \qquad (1)$$

where P is a kernel centred around x^{(m)} and the M points {x^{(m)}}_{m=1}^{M} are sampled from l(· | θ). We may see Monte Carlo ABC algorithms as sampling from a joint distribution on θ and x proportional to p(θ) l(x | θ) P(y | x), where l is used as a proposal distribution for x.
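As a concrete illustration of this uniform-kernel likelihood estimate, here is a minimal Python sketch (the paper's experiments use R; the function `abc_likelihood` and the toy Gaussian model below are our own, purely illustrative):

```python
import numpy as np

def abc_likelihood(theta, y, simulate, distance, eps, M=100, rng=None):
    """Uniform-kernel ABC likelihood estimate: the proportion of M
    simulated data sets falling within distance eps of the observed y."""
    rng = np.random.default_rng() if rng is None else rng
    xs = [simulate(theta, rng) for _ in range(M)]
    return float(np.mean([distance(y, x) < eps for x in xs]))

# Toy illustration (not the paper's model): data summarised by a sample mean.
rng = np.random.default_rng(0)
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=50).mean()
distance = lambda y, x: abs(y - x)
y = 0.0
l_hat = abc_likelihood(0.0, y, simulate, distance, eps=0.3, rng=rng)
```

For a θ far from the truth the estimate collapses to zero, which is what makes expensive simulators so costly in practice: many simulations are spent discovering that a point has negligible ABC likelihood.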
In this paper we describe methods that are designed to be applicable in cases where the likelihood estimation is computationally costly because the model l is expensive to simulate from, for example when studying an ordinary differential equation (ODE) or stochastic differential equation (SDE) model (Picchini, 2014) that requires a solver with a small step size, or when using an individual based model (van der Vaart et al., 2015) with a large number of individuals. This situation is not uncommon: even if the model takes only a matter of seconds to simulate, this cost is prohibitive when it must be simulated many times in the course of estimating the likelihood at a large number of Monte Carlo points in θ-space. We note that this situation is exacerbated when θ is of moderate to high dimension, since in these cases (as in any Monte Carlo method) it is difficult to design a proposal that can efficiently make moves on all dimensions of θ simultaneously. Most commonly, each dimension of θ is updated using a single component move, and in the ABC context this means using M simulations from the likelihood when updating each dimension of θ.

Previous work and contributions
This paper describes an ABC-SMC algorithm that uses a delayed-acceptance ABC-MCMC move that aims to limit the number of θ points at which we need to perform an expensive simulation of the likelihood, whilst also attempting to maintain a good exploration of the target distribution. We begin in section 2 with a review of the literature on ABC-SMC, focussing particularly on the case where a uniform ABC kernel is used. Then in section 3.1.1 we describe delayed-acceptance ABC-MCMC for exploring θ-space, and use it as a method for discarding θ points that are unlikely to be in regions of high posterior density. Delayed acceptance (DA) decomposes an MCMC move into two stages. At the first stage a point is proposed and is accepted or rejected according to a posterior that is usually chosen to approximate the desired posterior. If accepted at the first stage, at the second stage an acceptance probability is used that "corrects" for the discrepancy between the approximate and the desired target. Thus we may think of the first stage as screening for points to take through to the second stage. Compared to the previous work, this approach is most similar to the "early rejection" of Picchini and Forman (2016), who use the prior for this screening step. Delayed-acceptance ABC-MCMC at the first stage makes use of an approximate, but computationally cheap, alternative to the full simulator. We will see that our approach is applicable in cases in which a cheap, approximate simulator is used independently of the full simulator, and also where the full simulator is a continuation of the initial cheap simulation. This latter situation is the same as that considered in "Lazy" ABC (Prangle, 2016), in which it is possible to monitor a simulation as it is progressing, and to assess before the full simulation whether the current θ point is likely to be in a region of high posterior density.
We will see that using delayed-acceptance ABC-MCMC introduces an additional tuning parameter: the tolerance used for the cheap simulator. When using delayed-acceptance ABC-MCMC directly, it is not always straightforward to set this parameter. In section 3.2.1 we embed delayed-acceptance ABC-MCMC in the ABC-SMC framework, since this is established as an effective method for guiding Monte Carlo points in θ-space to regions of high posterior density, and also since we see that it allows for automating the choice of the additional tuning parameter. The resultant algorithm is particularly useful for distributions for which it is difficult to design a useful proposal distribution. When our proposal has a poor acceptance rate, we may need to run the expensive simulator many times for a single acceptance. However, with delayed acceptance we may propose a very large number of points without the need to run the expensive simulator for most of them. The procedure is to evaluate whether the proposed points are in a high density region according to the cheap simulator; only for points that lie in this region (i.e. that pass the first stage of DA) do we run the expensive simulator, and we expect to accept a reasonable proportion of these (as long as the cheap simulator is close to the expensive simulator). Essentially, the first stage of the DA acts to give us a well-targeted proposal. Section 4 considers an application to SDEs, where we examine using cheap simulators with a large step size in the numerical solver. Section 5 considers applications to doubly intractable distributions: applying ABC to these models (and other Markov random fields) is complicated because no computationally feasible exact simulator is available. Instead an MCMC algorithm is commonly used as an approximate approach (Grelaud et al., 2009; Everitt, 2012).
In such approaches the burn-in of this MCMC method determines the accuracy of the approach: for a small burn-in the method is biased, with this bias being eliminated by a large burn-in, which may be computationally expensive. In the main paper we examine the case of latent Ising models previously studied in Everitt (2012), noting the potential advantage of ABC in these cases compared to methods such as the exchange algorithm. In the appendix, we also examine an application to a latent exponential random graph model (ERGM). We then conclude with a discussion in section 6.

SMC samplers
An SMC sampler (Del Moral et al., 2006) is an importance sampling based Monte Carlo method for generating weighted points from a target distribution π through making use of a sequence of T target distributions that bridge between some initial proposal distribution π_0, from which we can simulate, and the final target π_T = π. Here we focus on the variant of the algorithm that uses an MCMC kernel, and provide a sketch of the main steps of the algorithm (further details may be found in Del Moral et al. (2006)). The algorithm begins by simulating N weighted "particles" from π_0 (initially with equal weights). Then the particles are moved from target π_t to π_{t+1} through an importance sampling reweighting step, followed by a "move" step in which each particle is simulated from an MCMC kernel starting at its current value, and with target π_{t+1}. Let θ_t^{(i)} be the value taken by the i-th particle after iteration t and w_t^{(i)} be its (normalised) weight. The reweighting step then computes the (unnormalised) weight of the i-th particle,

$$\tilde{w}_{t+1}^{(i)} = w_t^{(i)} \frac{\pi_{t+1}\left(\theta_t^{(i)}\right)}{\pi_t\left(\theta_t^{(i)}\right)},$$

followed by a normalisation step to find w_{t+1}^{(i)}. This algorithm yields an empirical approximation of π_t,

$$\hat{\pi}_t^N = \sum_{i=1}^{N} w_t^{(i)} \delta_{\theta_t^{(i)}},$$

where δ_θ is a Dirac mass at θ, and an estimate of its normalising constant,

$$\hat{Z}_t = \prod_{s=1}^{t} \sum_{i=1}^{N} \tilde{w}_s^{(i)},$$

where the targets are evaluated up to proportionality. Usually the variance of the weights is monitored during the algorithm, and if this becomes too large (which would lead to high variance estimators if no intervention was made) a resampling step is performed after reweighting (in this paper stratified resampling is used). The resampling step at iteration t+1 simulates N particles from the empirical distribution $\hat{\pi}_{t+1}^N$. SMC outperforms importance sampling in cases where one needs many points to obtain low variance importance sampling estimates when directly using π_0 as a proposal for π, i.e. when the distance between π_0 and π is too large. In order to ensure that SMC estimates have low variance, we require the distance between π_t and π_{t+1} to be small for all t.
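The reweight, normalise and stratified-resampling steps just described can be sketched as follows (an illustrative Python sketch; all names are our own, and the incremental weights are passed on the log scale for numerical stability):

```python
import numpy as np

def reweight(weights, log_incr):
    """One SMC reweighting step: multiply the normalised weights by the
    incremental weight pi_{t+1}/pi_t (passed on the log scale), then
    renormalise."""
    w = weights * np.exp(log_incr)
    return w / w.sum()

def ess(weights):
    """Effective sample size of normalised weights: 1 / sum_i (w_i)^2."""
    return 1.0 / np.sum(weights ** 2)

def stratified_resample(weights, rng):
    """Stratified resampling: one uniform draw per stratum [i/N, (i+1)/N),
    mapped to particle indices through the cumulative weights."""
    N = len(weights)
    positions = (np.arange(N) + rng.uniform(size=N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, N - 1)   # guard against floating-point overshoot

# Toy usage with 100 particles and random incremental weights.
rng = np.random.default_rng(0)
w = reweight(np.full(100, 0.01), rng.normal(0.0, 0.5, size=100))
idx = stratified_resample(w, rng)
```

In the algorithm above, resampling would be triggered only when `ess(w)` falls below a threshold; the later sections of the paper instead resample at every iteration.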
A proxy for the distance between subsequent targets that may be estimated as the algorithm runs is given by the effective sample size (ESS) (Kong et al., 1994),

$$\mathrm{ESS} = \left( \sum_{i=1}^{N} \left( w^{(i)} \right)^2 \right)^{-1}.$$

A large ESS corresponds to a small distance between targets although, as referred to in the following section, a high ESS is necessary but not sufficient for achieving a low Monte Carlo variance. Sisson et al. (2007, 2009) remark that the ABC posterior π_ε is a natural candidate for simulating from using SMC, in that a sequence of posteriors with a decreasing sequence of tolerances ε_1 to ε_T = ε is a natural and useful choice as a sequence of distributions to use in SMC. We follow Del Moral et al. (2012a) in choosing the sequence of distributions

ABC-SMC
$$\pi_t(\theta_t, x_t) \propto p(\theta_t)\, l(x_t \mid \theta_t)\, P_t(y \mid x_t),$$

such that θ_t ∼ π_t, with x_t being the corresponding auxiliary variable (as in equation (1)) at iteration t. When an ABC-MCMC move is used for the "move" step, this leads to the following weight update when moving from target π_t to π_{t+1}:

$$\tilde{w}_{t+1}^{(i)} = w_t^{(i)} \frac{P_{t+1}\left(y \mid x_t^{(i)}\right)}{P_t\left(y \mid x_t^{(i)}\right)}$$

(a derivation may be found in the appendix). This weight update is computationally cheap and requires no simulation to produce x_t^{(i)}, which has been generated either in the initial step of the algorithm or as part of a previous MCMC move. Additionally, it is cheap to compute for any choice of ε_{t+1}. This fact is used by Del Moral et al. (2012a) in order to devise an adaptive ABC-SMC algorithm that automatically determines the sequence (ε_t). At every SMC iteration a bisection method is used to determine the ε_{t+1} that will result in an ESS that is some proportion (e.g. 90%) of N.
It is common practice in SMC algorithms to use the current set of particles to adaptively set the proposal for the MCMC kernels used in the "move" step (note that this introduces a small bias into estimates based on the SMC for the methods used in this paper). The simplest scheme, from Robert et al. (2011), sets the proposal covariance to be some multiple of the empirical covariance of the particles (this choice being rooted in results on optimal proposals for some MCMC algorithms).
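A sketch of this adaptive tuning of the proposal covariance (using the 2.38²/d factor that is the commonly cited random-walk scaling; the particle set here is a stand-in, not the paper's):

```python
import numpy as np

# Stand-in particle set: 500 particles in d = 3 dimensions.
rng = np.random.default_rng(0)
particles = rng.normal(size=(500, 3))

# Proposal covariance: a multiple of the empirical covariance of the particles.
d = particles.shape[1]
scale = 2.38 ** 2 / d                               # common random-walk scaling
proposal_cov = scale * np.cov(particles, rowvar=False)
```

The resulting `proposal_cov` would then parameterise the Gaussian random-walk proposal of the MCMC kernel at the next SMC iteration.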

ABC-SMC with indicator potentials
In ABC a common choice for the kernel P_t is P_t(y | x_t) ∝ I(d(y, x_t) < ε_t), where d is a distance metric. This kernel is known to be sub-optimal (Li and Fearnhead, 2018), but we restrict our attention to this case for the remainder of the paper since it simplifies the presentation and interpretation of the algorithms we introduce. The first example of this simplification is that when used in the ABC-SMC method of Del Moral et al. (2012a), the particle weights are either zero ("dead") or non-zero ("alive") (this follows from equation (7)). This type of SMC algorithm is also studied in Cérou et al. (2012); Del Moral et al. (2015); Prangle et al. (2017). A further consequence is that the ESS is equal to the number of "alive" particles with non-zero weight after the update, simplifying the interpretation of the adaptive approach to choosing ε_{t+1} described above.
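With indicator potentials the adaptive choice of ε_{t+1} has a particularly simple form, since the ESS equals the number of alive particles: choosing the tolerance amounts to taking an order statistic of the stored distances. A Python sketch of this (our own illustrative code, assuming the distances are distinct):

```python
import numpy as np

def next_tolerance(distances, n_alive):
    """Choose eps_{t+1} such that exactly n_alive particles satisfy
    d(y, x_t^{(i)}) < eps_{t+1}. With indicator weights the bisection of
    Del Moral et al. (2012a) reduces to an order statistic of the stored
    distances (assuming they are distinct)."""
    d = np.sort(np.asarray(distances, dtype=float))
    return float(d[n_alive])   # eps sits just above the n_alive smallest distances

# Toy usage: five stored distances, keep two particles "alive".
stored = [0.1, 0.4, 0.2, 0.9, 0.5]
eps_next = next_tolerance(stored, 2)
```

Because the stored distances never change within an iteration, this step costs one sort regardless of how expensive the simulator is.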
In this paper we use the revised scheme of Bernton et al. (2017). The revision addresses the issue that the acceptance rate of ABC-MCMC moves decreases as the ABC-SMC progresses (since the ABC tolerance decreases). When the MCMC moves have a poor acceptance rate, the ESS is not a good criterion for deciding on a new tolerance, since it is unaffected by the values taken by each particle: even if all particles take the same value, the ESS may be high. Therefore Bernton et al. (2017) suggest instead to choose ε_{t+1} such that some number U (with 0 < U ≤ N) of the N particles will be unique after resampling has been performed. Both this new scheme and the original method that uses the ESS introduce a small bias into estimates from the SMC (Cérou et al., 2012; Prangle et al., 2017).
To simplify the presentation of our method, we perform resampling at every step of the algorithm. As a consequence, the denominator in equation (7) is positive and the same for every particle, thus in practice the reweighting step becomes

$$\tilde{w}_{t+1}^{(i)} = I\left(d\left(y, x_t^{(i)}\right) < \epsilon_{t+1}\right),$$

where for simplicity we have omitted the additional normalising term for I(d(y, x_t) < ε_t), required in order to provide a correct marginal likelihood estimator; see Didelot et al. (2011) for details. Resampling at every step avoids propagating "dead" particles with zero weight, but is not an optimal strategy since it is possible that not all alive particles will be part of the resample (unless systematic resampling is used). A standard approach to deciding when to resample is to only do so when the ESS falls below a pre-specified threshold (Del Moral et al., 2012b), but we do not consider such approaches in this paper.
The ABC-SMC algorithm with the configuration described in this section is given in algorithm 1. Note that although a final tolerance ε_end is specified in this algorithm, in practice the method is run for as long as computational resources will allow; a standard approach would be to monitor the acceptance probability of the MCMC moves and to terminate when this is zero for a number of iterations. The early rejection method of Picchini and Forman (2016) may be employed for the ABC-MCMC moves when using an indicator kernel: this approach is precisely the same as standard ABC-MCMC, with the calculation reorganised in order to avoid unnecessary simulations from the likelihood (see appendix for details).
Algorithm 1 ABC-SMC using early rejection.
Inputs: number of particles N, desired number of unique particles U, prior p, simulator l, final tolerance ε_end.
Outputs: particles θ_t^{(i)} and weights w_t^{(i)} for all t.
Initialise: simulate θ_0^{(i)} ∼ p and x_0^{(i)} ∼ l(· | θ_0^{(i)}) for i = 1, ..., N, each with weight 1/N.
While ε_t > ε_end:
    Simulate v ∼ U[0, 1]^N, to be used in resampling.
    Use bisection to choose ε_{t+1} s.t. there will be U unique particles after reweighting and resampling (using random numbers v).
    Reweight, then perform resampling using random draws v.
    For i = 1, ..., N: move (θ^{(i)}, x^{(i)}) using an ABC-MCMC kernel with early rejection, targeting π_{t+1}.
End while
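A minimal sketch of one ABC-MCMC move with early rejection, in the spirit of Picchini and Forman (2016) (illustrative Python, symmetric proposal assumed; the function names and the toy model in the usage example are our own):

```python
import numpy as np

def early_rejection_move(theta, x, prior_pdf, propose, simulate, distance, y, eps, rng):
    """One ABC-MCMC move with early rejection: draw the uniform first and
    test the prior ratio alone, so the simulator only runs when acceptance
    is still possible. Assumes a symmetric proposal."""
    theta_star = propose(theta, rng)
    u = rng.uniform()
    if u >= prior_pdf(theta_star) / prior_pdf(theta):
        return theta, x                      # rejected without simulating
    x_star = simulate(theta_star, rng)       # the (expensive) simulation
    if distance(y, x_star) < eps:
        return theta_star, x_star            # accepted
    return theta, x                          # rejected after simulating

# Toy usage: uniform prior on [0, 10], noisy identity simulator.
rng = np.random.default_rng(3)
prior_pdf = lambda t: 1.0 if 0.0 <= t <= 10.0 else 0.0
propose = lambda t, rng: t + rng.normal(0.0, 1.0)
simulate = lambda t, rng: t + rng.normal(0.0, 0.1)
distance = lambda y, x: abs(y - x)
y_obs, eps = 5.0, 0.5
theta_cur, x_cur = 5.0, 5.0
for _ in range(200):
    theta_cur, x_cur = early_rejection_move(
        theta_cur, x_cur, prior_pdf, propose, simulate, distance, y_obs, eps, rng)
```

Reordering the calculation in this way changes nothing about the chain's invariant distribution: the same uniform draw is used, just compared against the prior ratio before the simulation rather than after.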

Delayed acceptance ABC-SMC
This section describes the algorithm that is introduced by this paper: an ABC-SMC sampler that uses a DA ABC-MCMC kernel as its "move" step (instead of standard ABC-MCMC), with the DA move being tuned automatically using the population of particles. We begin by describing delayed acceptance, then outline how it may be used in the ABC context, before describing an ABC-SMC sampler that makes use of it. Where they are omitted from the main paper, full derivations are given in the appendix.

Delayed acceptance
This section introduces delayed acceptance (Christen and Fox, 2005) as the algorithm that results when one Metropolis-Hastings kernel is used as a proposal within another. We note here the link with pseudo-marginal type algorithms that, as a special case, show how to use importance sampling or SMC within a Metropolis-Hastings algorithm. Let Metropolis-Hastings transition kernel K_1 have invariant distribution π_1 and suppose our aim is to use the distribution π_1 as the proposal distribution in another Metropolis-Hastings kernel K_2 with an invariant distribution π_2. It turns out that we can use a draw from K_1 as if it was a point drawn from π_1. That is, we apply K_1 to θ to obtain θ*, then accept θ* with probability

$$\alpha_2(\theta, \theta^*) = \min\left\{1, \frac{\pi_2(\theta^*)\,\pi_1(\theta)}{\pi_2(\theta)\,\pi_1(\theta^*)}\right\}. \qquad (8)$$
A computational saving arises when a rejection occurs under K_1, since in this case θ* = θ, giving α_2 = 1 without needing to evaluate π_2. Verifying this acceptance probability is simple. Since θ* is simulated from K_1, we have an acceptance probability for K_2 of

$$\alpha_2(\theta, \theta^*) = \min\left\{1, \frac{\pi_2(\theta^*)\,K_1(\theta^*, \theta)}{\pi_2(\theta)\,K_1(\theta, \theta^*)}\right\}. \qquad (9)$$

Now, K_1 satisfies the detailed balance equation with respect to π_1,

$$\pi_1(\theta)\,K_1(\theta, \theta^*) = \pi_1(\theta^*)\,K_1(\theta^*, \theta),$$

and substituting this into (9) we obtain the desired result of equation (8).
In the literature on delayed acceptance, π_2 is chosen to be a desired target distribution that may be computationally expensive to evaluate, and π_1 is chosen to be a cheaper, approximate target distribution. Used in this way, the first stage of delayed acceptance (i.e. the result of applying K_1) provides a proposal that should be well suited for an MH algorithm with target π_2; e.g. Banterle et al. (2014) use approximate likelihoods based on subsets of the data as the approximate targets. Strens (2004) uses a very similar idea, i.e. a hierarchical decomposition of the likelihood. Banterle et al. (2014) note that whilst a standard MH algorithm dominates delayed acceptance in terms of the Peskun ordering, it is possible that the reduced computational cost of the first stage of delayed acceptance can lead to more efficient algorithms in terms of the Monte Carlo error per unit of computational time.
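A minimal sketch of one delayed-acceptance step for generic targets (illustrative Python; the log densities are assumed known up to constants, and the proposal is assumed symmetric so that its density cancels):

```python
import numpy as np

def da_mh_step(theta, log_pi1, log_pi2, propose, rng):
    """One delayed-acceptance MH step. Stage 1 is a standard MH
    accept/reject under the cheap target pi_1; stage 2 corrects with
    alpha_2 = min{1, pi_2(theta*) pi_1(theta) / (pi_2(theta) pi_1(theta*))}."""
    theta_star = propose(theta, rng)
    # Stage 1: screen under pi_1 (a stage-1 rejection costs no pi_2 evaluation).
    if np.log(rng.uniform()) >= log_pi1(theta_star) - log_pi1(theta):
        return theta
    # Stage 2: correct for the discrepancy between pi_1 and pi_2.
    log_a2 = (log_pi2(theta_star) - log_pi2(theta)) \
             - (log_pi1(theta_star) - log_pi1(theta))
    return theta_star if np.log(rng.uniform()) < log_a2 else theta

# Toy usage: expensive target N(0, 1), cheap wider approximation N(0, 1.5^2).
rng = np.random.default_rng(4)
log_pi2 = lambda t: -0.5 * t * t
log_pi1 = lambda t: -0.5 * (t / 1.5) ** 2
propose = lambda t, rng: t + rng.normal(0.0, 1.0)
theta, samples = 0.0, []
for _ in range(4000):
    theta = da_mh_step(theta, log_pi1, log_pi2, propose, rng)
    samples.append(theta)
```

The chain targets π_2 exactly; the quality of π_1 only affects efficiency, not correctness.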

Delayed acceptance with ABC-MCMC
This paper makes use of delayed acceptance in the ABC setting. Suppose that there exists a computationally cheap alternative l_1 to our "true" simulator l_2 = l. We wish to perform the first stage of delayed acceptance using points x_1* simulated from l_1, which are compared to data y_1 (which need not necessarily be the same as y) to determine the parameters at which we simulate points x_2* from l_2, from which we estimate the standard ABC likelihood. At the first stage of delayed acceptance, we use an ABC-MCMC move with the simulator l_1, using a tolerance ε_1, giving an acceptance probability of

$$\alpha_1 = \min\left\{1, \frac{p(\theta^*)\,P_1(y_1 \mid x_1^*)\,q(\theta \mid \theta^*)}{p(\theta)\,P_1(y_1 \mid x_1)\,q(\theta^* \mid \theta)}\right\}.$$

The acceptance probability at the second stage is

$$\alpha_2 = \min\left\{1, \frac{P_2(y \mid x_2^*, x_1^*)\,P_1(y_1 \mid x_1)}{P_2(y \mid x_2, x_1)\,P_1(y_1 \mid x_1^*)}\right\},$$

where the marginal distribution of the target we have used is the ABC posterior with l_2, ε_2 and y. Note that we have included the conditioning on x_1 in P_2 and l_2, which allows for the possibility that the "true" simulator is a composition of the simulators l_1 and l_2, with x_1 ∼ l_1 being a partial simulation. If the true simulator is simply l_2, we may drop this conditioning on x_1 from P_2 and l_2.

Use of DA-ABC-MCMC in ABC-SMC
We now examine the use of the DA-ABC-MCMC approach from the preceding section for the "move" step in the ABC-SMC algorithm of Del Moral et al. (2012a). The new SMC sampler operates on a sequence of target distributions where ε_2 decreases at each iteration, thus we use ε_{2,t} to denote its value at the t-th iteration. ε_1 and y_1 may also change between SMC iterations, and we denote their values at the t-th iteration by ε_{1,t} and y_{1,t} respectively. In this case the weight update for each particle is given by

$$\tilde{w}_{t+1}^{(i)} = w_t^{(i)} \frac{P_{2,t+1}\left(y \mid x_{2,t}^{(i)}, x_{1,t}^{(i)}\right)}{P_{2,t}\left(y \mid x_{2,t}^{(i)}, x_{1,t}^{(i)}\right)}.$$

Adaptive DA within ABC-SMC when using the indicator kernel

In this section we consider the implementation of the DA-ABC-MCMC move when P_{1,t} is chosen to be an indicator function; i.e. P_{1,t}(y_{1,t} | x*_{1,t}) ∝ I(d(y_{1,t}, x*_{1,t}) < ε_{1,t}), where d is a distance metric. Following section 3.1.1 we obtain, at the t-th iteration of the SMC, acceptance probabilities of

$$\alpha_1 = \min\left\{1, \frac{p(\theta^*)\,I\left(d(y_{1,t}, x_{1,t}^*) < \epsilon_{1,t}\right)\,q(\theta \mid \theta^*)}{p(\theta)\,I\left(d(y_{1,t}, x_{1,t}) < \epsilon_{1,t}\right)\,q(\theta^* \mid \theta)}\right\}$$

and

$$\alpha_2 = \min\left\{1, \frac{I\left(d(y, x_{2,t}^*) < \epsilon_{2,t}\right)\,I\left(d(y_{1,t}, x_{1,t}) < \epsilon_{1,t}\right)}{I\left(d(y, x_{2,t}) < \epsilon_{2,t}\right)\,I\left(d(y_{1,t}, x_{1,t}^*) < \epsilon_{1,t}\right)}\right\}$$

at the first and second stages of DA respectively. The dependence in equation (23) on the condition d(y_{1,t}, x_{1,t}) < ε_{1,t} is perhaps surprising, but is required since if it is not satisfied the denominator in the acceptance ratio is zero, and in this case the proposal should be rejected (Tierney, 1998). The presence of this condition is due to the possibility that the sequence ε_{1,t} is not always decreasing in t. Full details may be found in the appendix, along with details of how early rejection may be used in the first stage of DA.
Del Moral et al. (2012a) make use of a useful property of ABC to create an adaptive algorithm: once simulation of x has been performed for any θ, it is computationally cheap to estimate the ABC likelihood for any tolerance ε. Here we use this property, together with the fact that we have a population of particles available in the SMC algorithm, to automatically determine an appropriate value for ε_{1,t} at every iteration. We choose ε_{1,t} using the criterion that we desire to perform the second stage of the delayed acceptance for a fixed proportion of the particles; we choose ε_{1,t} such that A particles are accepted to move forward to the second stage, where 0 < A ≤ N is chosen prior to running the SMC. This is achieved by choosing ε_{1,t} such that A particles satisfy both d(y_{1,t}, x_{1,t}) < ε_{1,t} and d(y_{1,t}, x*_{1,t}) < ε_{1,t}: a bisection method may be used to find such an ε_{1,t}. As with the bisection used in the adaptive approach of Del Moral et al. (2012a), in practice the bisection will not always give precisely A particles at the second stage of DA; if fewer than A particles pass early rejection due to the prior, ε_{1,t} is chosen to let all these particles through to the second stage.
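In the indicator-kernel case this choice of ε_{1,t} again reduces to an order statistic: a particle reaches the second stage only if both its current and proposed first-stage distances fall below ε_{1,t}, so we can sort the pairwise maxima. An illustrative sketch (our own code, assuming distinct values):

```python
import numpy as np

def choose_eps1(d_current, d_proposed, A):
    """Choose eps_1 so that exactly A particles satisfy both
    d(y_1, x_1) < eps_1 and d(y_1, x_1*) < eps_1; with indicator kernels
    the bisection reduces to an order statistic of the pairwise maxima
    (assuming the values are distinct)."""
    m = np.maximum(np.asarray(d_current, dtype=float),
                   np.asarray(d_proposed, dtype=float))
    return float(np.sort(m)[A])

# Toy usage: four particles, let A = 2 through to the second stage.
d_cur = [0.1, 0.3, 0.2, 0.8]     # first-stage distances of current particles
d_prop = [0.2, 0.1, 0.5, 0.4]    # distances of their proposed moves
eps1 = choose_eps1(d_cur, d_prop, 2)
```

The cost of this tuning step is a single sort over quantities already computed from the cheap simulator.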

Discussion
This ABC-SMC algorithm, which we call DA-ABC-SMC, is described in full in algorithm 2. In brief, within this algorithm we: adaptively choose the sequence (ε_{2,t}) such that there are U unique particles after reweighting and resampling (Bernton et al., 2017); and adaptively choose the variance of the (Gaussian) MCMC proposals to be the sample variance of the current particle set. The method requires us to specify: the number of particles N, the number of particles 0 < A ≤ N that are allowed past the first stage of the DA and the number of unique particles 0 < U ≤ N. We now discuss how to choose each of these parameters.
• Number of unique particles. On terminating the algorithm, we effectively have a sample of size U from the final ABC posterior, thus U should be chosen to be the number of Monte Carlo points we wish to generate.
• Number of particles allowed past the first stage. A dictates the computational cost of each iteration, since it is the number of expensive simulations we will perform per SMC iteration.
• Total number of particles. To choose N we need to consider the factor F by which the cheap simulator is faster than the expensive simulator. For DA to be most effective, we require that the cost of running the cheap simulator N times is small compared to that of running the expensive simulator A times, i.e. N/F ≪ A.
If N is chosen to be much larger than A, we require a slightly non-standard initial step in our SMC algorithm so that the computational expense of this step does not dominate the subsequent iterations. In a standard SMC sampler, the initial step would require initial values of x_1 and x_2 to be simulated for each of the N particles, which requires running the expensive simulator N times. Instead we simulate (θ, x_1, x_2) A times from the initial distribution, and repeat these points N/A times (i.e. the vector of particles consists of stacked copies of these values) so that we have a sample of size N from the initial distribution. This sample contains only A unique points but, importantly, will result in N proposed points at every MCMC move in the SMC. The parameter N plays a slightly different role to that in a standard SMC sampler, in which we might informally think of it as roughly the size of the Monte Carlo sample (in DA-ABC-SMC, U plays this role instead). In DA-ABC-SMC, N is one of four factors determining the "DA proposal" (i.e. the distribution of points that arrive at the second stage of DA), the remaining factors being: the choices of A and l_1, and the distribution of the particles from the previous iteration of the SMC. A useful DA proposal has similar characteristics to a good independent MCMC or importance sampling proposal: we wish it to be close to the target distribution, with wider tails. Since the DA proposal depends on the previous distribution of the particles, we observe empirically that it can to some extent automatically "track" the target distribution over SMC iterations, and provide a useful proposal distribution at each SMC iteration. The extent to which this is the case is investigated in sections 4 and 5, where we illustrate the performance of our algorithm for different choices of the tuning parameters, which result in different sequences of DA proposals.
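The non-standard initialisation described above (A expensive simulations, stacked N/A times) can be sketched as follows (illustrative Python; the initial distribution here is a stand-in for p, and only the θ component is shown):

```python
import numpy as np

# Draw only A parameter values from the initial distribution (each of which
# would require one run of the expensive simulator), then stack N/A copies.
rng = np.random.default_rng(1)
A, N = 4, 12
theta_init = rng.normal(size=A)           # A expensive initialisations
particles = np.tile(theta_init, N // A)   # N particles, only A unique values
```

In the full algorithm the corresponding x_1 and x_2 values would be tiled in the same way, so the initial step costs A rather than N expensive simulations.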
Algorithm 2 DA-ABC-SMC using early rejection.
Inputs: As algorithm 1, and the number of particles A to pass the first stage of DA.
Outputs: particles θ_t^{(i)} and weights w_t^{(i)} for all t.
Initialise: simulate (θ, x_1, x_2) A times from the initial distribution and stack N/A copies, giving N particles with equal weights.
While ε_{2,t} > ε_end:
    Simulate v ∼ U[0, 1]^N, to be used in resampling, and use bisection to choose ε_{2,t+1} s.t. there will be U unique particles after reweighting and resampling (using v).
    Reweight, perform resampling using random numbers v, and let I = ∅.
    For each particle, propose a move and run the cheap simulator l_1 (using early rejection due to the prior where possible).
    Use bisection to choose ε_{1,t+1} s.t. A particles satisfy d(y_{1,t}, x_{1,t}), d(y_{1,t}, x*_{1,t}) < ε_{1,t+1}. Let I be the set of indices of the particles that pass the first stage of DA.
    For each i ∈ I, run the expensive simulator l_2 and perform the second stage of DA.
End while

Lotka-Volterra model
The Lotka-Volterra model is a stochastic Markov jump process that models the number of individuals in two populations of animals: predator and prey. It is a commonly used example in ABC since it is possible to simulate exactly from the model using the Gillespie algorithm (Gillespie, 1977), but the likelihood is not available pointwise. We follow the model described in Wilkinson (2011), in which X represents the number of predators and Y the number of prey. The following reactions may take place: a prey may be born, with rate θ 1 Y , increasing Y by one; predator and prey may interact, with rate θ 2 XY , increasing X by one and decreasing Y by one; a predator may die, with rate θ 3 X, decreasing X by one. Papamakarios and Murray (2016) note that for most parameters the size of one population quickly decreases to zero (in the case of the predators dying out, this results in the prey population growing exponentially). The relatively small region of parameter space that contains parameter values resulting in oscillating population sizes makes this a relatively challenging inference problem.
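For reference, exact simulation of this Markov jump process via the Gillespie algorithm can be sketched as follows (illustrative Python rather than the R/smfsb code used in the paper's experiments; the function name and defaults are our own):

```python
import numpy as np

def gillespie_lv(theta, x0, y0, t_end, rng, max_events=100000):
    """Exact simulation of the Lotka-Volterra Markov jump process
    (Gillespie, 1977). X: predators, Y: prey; rates theta1*Y (prey birth),
    theta2*X*Y (predation), theta3*X (predator death), as in Wilkinson (2011)."""
    t, X, Y = 0.0, x0, y0
    path = [(t, X, Y)]
    for _ in range(max_events):
        rates = np.array([theta[0] * Y, theta[1] * X * Y, theta[2] * X])
        total = rates.sum()
        if total == 0:
            break                             # both populations extinct
        t += rng.exponential(1.0 / total)     # time to the next reaction
        if t > t_end:
            break
        r = rng.choice(3, p=rates / total)    # which reaction fires
        if r == 0:
            Y += 1                            # prey birth
        elif r == 1:
            X += 1; Y -= 1                    # predation
        else:
            X -= 1                            # predator death
        path.append((t, X, Y))
    return path

# Usage with the data-generating parameters discussed in the results section.
rng = np.random.default_rng(2)
path = gillespie_lv((1.0, 0.05, 0.6), 50, 100, 5.0, rng)
```

Runs like this make the cited pathology easy to observe: for most parameter draws one of the populations collapses quickly rather than oscillating.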
In this section we use this example to demonstrate the use of DA-ABC-SMC for SDE models that need to be simulated numerically. To do this, we use the chemical Langevin equation approximation to the Markov jump process, as detailed in Golightly et al. (2015). This results in two coupled non-linear SDEs, which we simulate numerically using the Euler-Maruyama method.
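An Euler-Maruyama sketch of the chemical Langevin approximation, with one Brownian increment per reaction channel (illustrative Python; the zero-clipping of hazards and states is our crude fix for the CLE's behaviour near the boundary, not a detail taken from Golightly et al. (2015)):

```python
import numpy as np

def euler_maruyama_cle(theta, x0, y0, t_end, dt, rng):
    """Euler-Maruyama integration of the chemical Langevin approximation
    to the Lotka-Volterra jump process. Each reaction channel contributes
    drift S h dt and noise S sqrt(h) dW for its hazard h."""
    S = np.array([[0.0, 1.0, -1.0],    # effect of each reaction on X (predators)
                  [1.0, -1.0, 0.0]])   # effect of each reaction on Y (prey)
    u = np.array([x0, y0], dtype=float)
    n_steps = int(round(t_end / dt))
    path = [u.copy()]
    for _ in range(n_steps):
        X, Y = u
        h = np.maximum([theta[0] * Y, theta[1] * X * Y, theta[2] * X], 0.0)
        dW = rng.normal(0.0, np.sqrt(dt), size=3)
        u = u + S @ (h * dt) + S @ (np.sqrt(h) * dW)
        u = np.maximum(u, 0.0)          # clip at zero (crude boundary fix)
        path.append(u.copy())
    return np.array(path)

# A coarse step size like the paper's s = 0.1 gives a rough, cheap trajectory.
rng = np.random.default_rng(5)
path = euler_maruyama_cle((1.0, 0.05, 0.6), 50.0, 100.0, 2.0, 0.1, rng)
```

Halving or doubling `dt` trades simulation cost against fidelity, which is exactly the lever the cheap simulator exploits in DA-ABC-SMC.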

Results
All of our empirical results were generated using R (R Core Team, 2019), and the R packages ggplot2 (Wickham, 2016), matlab (Roebuck, 2014) and mvtnorm (Genz and Bretz, 2009) were used. We study the data "LVPerfect" in the R package smfsb (Wilkinson, 2018) (the numerical methods for simulating from the likelihood are also taken from this package). This data, which was generated using parameter values (θ_1 = 1, θ_2 = 0.05, θ_3 = 0.6), has been previously studied in Wilkinson (2011), and we use the same priors and ABC approach as in this reference. We used DA-ABC-SMC with a variety of choices of U, A and N, and two different choices of the Euler-Maruyama step size s in the cheap simulator, s = 0.5 and s = 0.1, both of which result in very rough approximations of the dynamics. We compared these approaches with standard ABC-SMC, with N = 200 particles and a sequence of tolerances selected such that U = 100 unique particles are retained at each iteration, an SMC²-style approach (Duan and Fulop, 2015) (that uses M particles when using a particle filter to estimate the likelihood) and also, as black horizontal lines, "ground truth" for the posterior expectation and standard deviation of the parameters found using a long run (10^5 iterations) of ABC-MCMC. All algorithms were run 30 times and, to measure computational cost, we counted the total number of steps S simulated using Euler-Maruyama (taking the median, S̄, over the 30 runs). The appendix contains the full details. Figures 1 and 2 show, for every run of three methods, the evolution of the sample mean and standard deviation of the particles against the number of time steps of the numerical solver, with the final red dot indicating the estimate from the final distribution (where ε_2 = 0.15 was reached). The black line on each plot shows ground truth values estimated using 20 runs of ABC-MCMC of length 5,000 each.
We only show the results for θ_1 here, since the results for the other parameters are comparable in terms of how they illustrate the properties of the algorithms. The method on the left is DA-ABC-SMC with N = 500 and U = A = 100; in the middle is DA-ABC-SMC with N = 5000 and U = A = 100; and on the right is ABC-SMC with N = 200 and U = 100. We observe that whilst ABC-SMC takes significantly more steps to converge to the target distribution, the estimates of the mean and standard deviation have a relatively low variance. The DA-ABC-SMC approaches converge much faster, but result in estimates that are usually further from the ground truth. In addition, both DA-ABC-SMC approaches appear to underestimate the posterior standard deviation, with this effect becoming more pronounced when N is larger. This result suggests that choosing N too large (resulting in A/N being small) can result in a DA proposal that is too concentrated compared to the target. To further elaborate on this important point, as the ratio A/N decreases, the first stage of the DA leads to a very poor proposal: the particles that make it through to the second stage of the DA step are those whose simulations result in the very smallest distances to the observed data. The effect is to make the DA proposal too concentrated around the mode of the posterior yielded by the fast approximation.
Tables 1 and 2 summarise the properties of the estimates of the posterior mean and standard deviation from each method. The bias, standard deviation and root mean square error (RMSE) are reported, along with RMSE × √S̄, to help compare the errors of methods with different computational costs. The results for some configurations are heavily influenced by poor results from a single run, for example the case of DA-ABC-SMC with N = 1000, U = 100, A = 100 and s = 0.1. However, based on these tables and the plots of the estimated posterior mean and standard deviation in the appendix, there is evidence to make the following conclusions.
• Choosing N too large (resulting in A/N being small) can result in a DA proposal that is too concentrated compared to the target, and leads to underestimating the posterior standard deviation.
• Even when N is not large, in this example DA-ABC-SMC often results in a sample that underestimates the standard deviation of the posterior distribution. The likely cause of this is that the DA proposal is too concentrated. This proposal easily generates unique points, thus the tolerance ε_2 is reduced too quickly, resulting in a poor quality sample. The results from DA-ABC-SMC when N = 1000, U = 200, A = 100 and s = 0.1, where 200 instead of 100 unique points are required when reducing the tolerance, suggest that when the SMC does not reduce the tolerance as fast (and hence more MCMC moves are made), the estimates of the posterior standard deviation are more accurate.
• DA-ABC-SMC can offer improved performance over ABC-SMC when measured by RMSE scaled by computational cost. This is largely due to its ability to quickly locate the region of highest posterior mass. However, it may not provide a representative sample from the posterior if the DA proposal is not tuned well. In contrast, standard ABC-SMC may be extremely slow to converge at all.
• The SMC² approaches tried here do not lead to an improvement over the ABC approaches, due to the low acceptance rate of the particle MCMC moves used in the move step of the SMC sampler (a consequence of the high variance likelihood estimates). In many cases no proposals were accepted at an SMC iteration, leading to a poor quality sample.

Background
This section concerns the class of likelihoods that may not be evaluated pointwise due to the presence of an intractable normalising constant, i.e. l (y | θ) = γ (y | θ) / Z (θ),
where γ (y | θ) is tractable, but it is not computationally feasible to evaluate the normalising constant Z (θ) (also known as the partition function). Such a distribution is known as doubly intractable (Murray et al., 2006) since it is not possible to directly use the Metropolis-Hastings algorithm to simulate from the posterior π (θ | y), due to the presence of Z (·) in both the numerator and denominator of the acceptance ratio. These distributions occur most commonly when l factorises as a Markov random field. The most well-established approaches to inference for these models are the single and multiple auxiliary variable approaches (Møller et al., 2006; Murray et al., 2006) and the exchange algorithm (Murray et al., 2006), which respectively use importance sampling estimates of the likelihood and acceptance ratio to avoid the calculation of the normalising constant. From here on we refer to these as "auxiliary variable methods". ABC (Grelaud et al., 2009) and synthetic likelihood have also been used for inference in these models. See Stoehr (2017) for a recent, thorough review of the literature. Previous work, e.g. Friel (2013), suggests that auxiliary variable methods are more effective than ABC for simulating from the posterior π (θ | y) when l has an intractable normalising constant. In the full data ABC case, it has been suggested that this is because the multiple auxiliary variable approach may be seen as a carefully designed non-standard ABC method. However, in this paper we consider the situation in Everitt (2012), where our data y are indirect observations of a latent (hidden) field x^h, modelled with a joint distribution p (θ) l (x^h | θ) g (y | x^h, θ), with l (x^h | θ) having an intractable normalising constant. In this case we might expect ABC to become more competitive, or even to outperform other approaches.
The most obvious approach is to use data augmentation: using MCMC to simulate from the joint posterior π (θ, x^h | y) by alternating simulation from the full conditionals of θ and x^h, with the exchange algorithm in the θ update. Everitt (2012) compares this approach with "exchange marginal particle MCMC", in which an SMC algorithm is used to integrate out the x^h variables at every iteration of an MCMC run, and finds the particle MCMC approach to be preferable. However, in order to perform well, both of these approaches require efficient simulation from the conditional distribution π (x^h | θ, y). Now compare these "exact" approaches with ABC. In this case, for every θ we simulate x^h ∼ l (· | θ) and x ∼ g (· | x^h, θ), then use an ABC likelihood that compares S (x) with S (y). The x^h variables corresponding to a particular θ variable that are retained in the ABC sample are distributed according to an ABC approximation to π (x^h | θ, S (y)); indeed, in order that the ABC is at all efficient, there must be a reasonable probability that a simulated x^h lies in a region of high probability mass under this distribution. Compared to particle MCMC:
• Standard ABC has the disadvantage that the simulation of x^h is, in contrast to particle MCMC, not conditioned on y (although we note recent work in Prangle et al. (2017) in which for some models we may also refine the ABC likelihood by conditioning on y).
• ABC has the advantage that a posteriori x^h is only conditioned on some statistic S (y), rather than on y as in particle MCMC. This condition is often considerably less stringent. For example, consider the Ising model example described below. When conditioning on y, the posterior π (x^h | θ, y) is often restricted to relatively small regions of x^h-space: for each individual pixel of x^h the posterior mass may be concentrated in a small region in order to match each individual data point. However, when conditioning on S (y), the posterior π (x^h | θ, S (y)) may have non-negligible mass in many regions of x^h-space: there are many different configurations of pixels that give rise to similar summary statistics.
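The latent-variable ABC scheme described above can be sketched as plain rejection ABC. The following is an illustrative Python sketch (the paper's implementations were in R); the function names, the toy interfaces and the example below are ours, not part of the paper.

```python
import random

def abc_rejection_latent(prior_sample, simulate_latent, simulate_obs,
                         summary, y, eps, n_draws, rng=random.Random(1)):
    """Plain rejection ABC for a latent-variable model.

    For each draw: theta ~ prior, x_h ~ l(. | theta), x ~ g(. | x_h, theta);
    keep (theta, x_h) if |S(x) - S(y)| <= eps.  The retained x_h are
    approximate draws from pi(x_h | theta, S(y)), not pi(x_h | theta, y).
    """
    s_y = summary(y)
    kept = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        x_h = simulate_latent(theta, rng)
        x = simulate_obs(x_h, theta, rng)
        if abs(summary(x) - s_y) <= eps:
            kept.append((theta, x_h))
    return kept
```

A toy usage would pass a latent binary-vector simulator for `simulate_latent`, a noisy flipping channel for `simulate_obs` and the sample mean for `summary`.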
In this paper we revisit using ABC on these models, examining the data previously studied in Everitt (2012), and show that it is possible to make computational savings using DA-ABC-SMC. We now introduce the cheap simulator that makes this improvement possible. In Markov random field models, exact simulation from l (· | θ) is either very expensive or intractable. Grelaud et al. (2009) propose to replace the exact simulator with taking the last point of a long MCMC run, and Everitt (2012) shows that, under certain ergodicity conditions, as the length of this "internal" MCMC run goes to infinity, the bias in our ABC posterior sample due to this particular approximation also goes to zero (following ideas in Andrieu and Roberts (2009)). However, empirically one often finds that using only a single MCMC iteration is sufficient to yield a posterior close to the desired posterior. In this paper we propose to use an internal MCMC run of a single iteration as the cheap simulator, and to run the internal MCMC for many iterations (we use 1,000) as the expensive simulator, where the final iteration of the cheap simulator is used as the first iteration of the expensive simulator. The next section presents an application to latent Ising models, with an application to ERGMs in the appendix.

Latent Ising model
An Ising model is a pairwise Markov random field model on binary variables, each taking values in {−1, 1}. Its distribution is given by l (x^h | θ_x) ∝ exp(θ_x Σ_{(i,j)∈N} x^h_i x^h_j), where θ_x ∈ R, x^h_i denotes the i-th random variable in x^h and N is a set defining pairs of nodes that are "neighbours". We consider the case where the neighbourhood structure is given by a regular 2-dimensional grid. Our data y are noisy observations of this x^h field: the i-th variable in y has the distribution used in Everitt (2012) (with the normalising constant of l being intractable). We used independent Gaussian priors on θ_x and θ_y, each with mean 0 and standard deviation 5. Our DA-ABC-SMC algorithm used U = A = 100, and we examined different values of N. A single site Gibbs sampler was used to simulate from the likelihood, with the expensive simulator using the final point of 1,000 sweeps of the Gibbs sampler. The MCMC proposal was taken to be a Gaussian distribution centred at the current particle, with covariance given by the sample covariance of the particles from the previous iteration. We used a single statistic in the ABC: the number of equivalently valued neighbours, S (y) = Σ_{(i,j)∈N} y_i y_j (noting that this is not sufficient, but the particle MCMC results in Everitt (2012) indicate that this does not have a large impact on the posterior). The distance metric used in the ABC on the summary statistics was the absolute value of the difference. We used the same 10 × 10 grid of data as was studied in Everitt (2012), shown in figure 3a, which was generated from the model using θ_x = θ_y = 0.1. These parameter values represent a relatively weak interaction between neighbouring pixels, and also quite noisy observations. On a grid of this size, there is ambiguity as to whether this data may have been generated with either one or both of θ_x and θ_y being small, giving a posterior distribution shaped like a cross, similar to that shown in figure 3c. Figure 3c gives an example of a DA proposal, i.e. points that pass the first stage of the DA-MCMC, from the early stages of a run of DA-ABC-SMC. We observe that this distribution would be a suitable independent MCMC proposal for the posterior.
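The single-site Gibbs simulator, the cheap/expensive pairing (one sweep versus many, with the expensive run continuing from the cheap run's final state) and the statistic S can be sketched as follows. This is an illustrative Python sketch (the paper used R); function names and the 4-neighbour grid helper are ours.

```python
import math, random

def neighbours(i, j, n):
    # 4-neighbour grid (no wrap-around).
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def gibbs_sweep(x, theta_x, rng):
    # One systematic-scan single-site Gibbs sweep over an n x n Ising grid:
    # P(x_ij = +1 | rest) = 1 / (1 + exp(-2 * theta_x * sum_of_neighbours)).
    n = len(x)
    for i in range(n):
        for j in range(n):
            s = sum(x[a][b] for a, b in neighbours(i, j, n))
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * theta_x * s))
            x[i][j] = 1 if rng.random() < p_plus else -1
    return x

def cheap_sim(theta_x, n, rng, sweeps=1):
    # Cheap simulator: a short internal MCMC run from a random start.
    x = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        gibbs_sweep(x, theta_x, rng)
    return x

def expensive_sim(x_cheap, theta_x, rng, sweeps=1000):
    # Expensive simulator: continue the SAME chain, starting from the
    # final state of the cheap simulator.
    for _ in range(sweeps):
        gibbs_sweep(x_cheap, theta_x, rng)
    return x_cheap

def S(x):
    # Number-of-equivalently-valued-neighbours statistic: sum over grid
    # edges of x_i * x_j.
    n = len(x)
    total = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                total += x[i][j] * x[i + 1][j]
            if j + 1 < n:
                total += x[i][j] * x[i][j + 1]
    return total
```

Seeding the expensive run at the cheap run's final state means the 1,000-sweep simulation is only a continuation, so no cheap-simulator work is wasted when the first DA stage accepts.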
All of our empirical results were generated using R (R Core Team, 2019), and the R packages ggplot2 (Wickham, 2016), matlab (Roebuck, 2014) and mvtnorm (Genz and Bretz, 2009) were used. We ran DA-ABC-SMC for several values of N, and compared the results with standard ABC-SMC using all of the same algorithmic choices, with N = 200 and U = 100. The standard ABC-SMC algorithm used the expensive simulator, i.e. 1,000 iterations of the internal Gibbs sampler. In all algorithms we terminated the method when ε2 = 0, or after 500 SMC iterations, whichever was sooner. We focus on studying the total computational effort expended in each algorithm, measured by the total number of sweeps S of the internal Gibbs sampler, and on the mean and standard deviation of the Euclidean distance ρ of the parameter vector (θ_x, θ_y) from the origin.
We ran the algorithm with several different values of N, and two different cheap simulators: the first where only a single sweep of the Gibbs sampler is used (B = 1); the second using the final point of 5 sweeps of the Gibbs sampler (B = 5). Each configuration was run 30 times. We chose N such that in most cases the computational cost was dominated by the expensive simulations in the second stage of the DA, although when N = 50,000 and B = 5 the cost of the first stage dominated. Figure 3b gives an example showing the evolution of ε2 against the total number of sweeps of the Gibbs sampler for several different configurations. We see that all configurations of the DA approach appear to have a significantly lower cost than standard ABC-SMC: in all cases the DA approach reaches ε2 = 0, whereas ABC-SMC only reaches ε2 = 2. Table 3 shows properties of the estimates of the posterior mean and standard deviation of ρ. The estimates from ABC-SMC are based on the output of the 500th iteration of the SMC, at which point in all runs ε2 = 2. We observe that we obtain lower variance estimates from ABC-SMC than from DA-ABC-SMC, but at a higher cost. This cost may be significantly higher if we wish to reduce ε2 to 0, as is achieved in all cases by DA-ABC-SMC. As in the previous section, we note that using a large value of N appears to lead to poor results; it appears likely that the standard deviation of the posterior is being underestimated.

Table 3: The mean and standard deviation of the estimated posterior mean (columns 2-4) and standard deviation (columns 5-7) of ρ, and the median number S̄ of sweeps of the Gibbs sampler. All rows refer to DA approaches, except the last, which is standard ABC-SMC.

Conclusion

In this paper we introduced DA-ABC-SMC as a means of reducing the computational cost of ABC-SMC in the case of an expensive simulator, through using a DA-ABC-MCMC move in which a cheap simulator is used to "screen" candidate values of proposed parameters, so that less effort needs to be spent running the expensive simulator. This cheap simulator may be completely independent of the expensive simulator, but preferably the expensive simulator is conditioned on the output of the cheap simulator. The tolerance for the cheap simulator is chosen adaptively within DA-ABC-SMC, giving an algorithm that has only three relatively easy to choose tuning parameters. The key factor in the performance of the algorithm is the DA proposal, i.e. the distribution of the particles that pass the first stage of DA. For this proposal to be useful, the cheap simulator needs to produce a posterior centred around the same values as the expensive simulator, and the total number of particles N in the DA-ABC-SMC needs to be chosen appropriately. The empirical results suggest that there is a trade-off in choosing the size of N: large values result in the largest computational saving, but can produce a DA proposal that is too concentrated, which can result in a poor sample from the posterior. For smaller values of N the DA proposal is usually more suitable, and can still result in computational savings compared to ABC-SMC. Section 4 illustrates that DA-ABC-SMC shows promise for using ABC for inference in ODE or SDE models, where a numerical method is used to simulate from the model.
Section 5 revisits the idea of using ABC for inference in latent Markov random field models, and suggests an approach that in some cases reduces the computational cost of ABC-SMC by using short runs of MCMC to simulate from the model. Future work might extend the approach to ABC methods that do not use an indicator kernel, use the adaptive tuning of the first stage of DA outside of the ABC context (along with an investigation of any bias this adaptation may introduce), or aim to improve the automation of the design of the DA proposal, which is often too concentrated.

A ABC-MCMC and ABC-SMC
ABC-MCMC targets the joint distribution proportional to p (θ) l (x | θ) P (y | x) (omitting the dependence of the target on ε to keep the notation consistent throughout the paper). It uses the proposal q (θ* | θ) l (x* | θ*) on the pair (θ, x), giving the acceptance probability

α = min{1, [p (θ*) P (y | x*) q (θ | θ*)] / [p (θ) P (y | x) q (θ* | θ)]}.

The cancellation of the likelihood terms l (x* | θ*) and l (x | θ) allows this algorithm to be implemented without evaluating l.

A.1.1 Indicator kernels and early rejection
In ABC a common choice for the kernel P is P (y | x) ∝ I (d (y, x) ≤ ε), where d is a distance metric. We now consider the implications of using this kernel on the ABC-MCMC acceptance probability in the preceding section. Firstly, we must consider the possibility that the denominator in the ratio is equal to 0, due to having d (y, x) > ε.
The general MH framework of Tierney (1998) (see the main paper) dictates that the acceptance probability in this case should be 0. The practical implication is that one must ensure that the initial value of x satisfies d (y, x) ≤ ε, or the chain will never move from its starting point (in practice, often different values of (θ, x) are explored until d (y, x) ≤ ε is satisfied, with this initial exploration being discarded). Therefore, we may always assume that d (y, x) ≤ ε after the chain is initialised, and hence the acceptance probability may be written as

α = min{1, [p (θ*) q (θ | θ*)] / [p (θ) q (θ* | θ)]} I (d (y, x*) ≤ ε).

Let u be the uniformly distributed random number on [0, 1] generated in order to implement the accept-reject step. Picchini and Forman (2016) note that a rejection will always occur if

u > [p (θ*) q (θ | θ*)] / [p (θ) q (θ* | θ)],

thus this condition may be checked before x* is simulated from the likelihood. If the proposed point is not rejected after this first step, x* is simulated, and the proposal is accepted if d (y, x*) ≤ ε. The consequence of this idea is that for θ* that have a small probability under the prior, we have an "early rejection" that avoids the (potentially expensive) cost of simulation. The computational savings of this approach will only be significant if the posterior is not too different from the prior. For clarity, pseudo-code for the early rejection method is given in algorithm 3.

Algorithm 3 A single iteration of early rejection ABC-MCMC.
Inputs: Current value of θ.
Outputs: Proposed value θ * and accept/reject decision for this proposed value.
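A single early-rejection iteration can be sketched in Python as follows (illustrative only; the interfaces for the proposal, prior and simulator are our own, and the paper's implementations were in R). The uniform variate is drawn before the simulator is called, so that proposals falling in low prior-probability regions never trigger a simulation.

```python
import random

def early_rejection_abc_mcmc_step(theta, prior_pdf, propose, proposal_pdf,
                                  simulate, dist, y, eps, rng):
    """One iteration of early-rejection ABC-MCMC (indicator kernel).

    propose(theta, rng) draws theta* ~ q(. | theta); proposal_pdf(a, b)
    evaluates q(a | b).  The simulator is only called if the proposal
    survives the prior/proposal ratio check.
    """
    theta_star = propose(theta, rng)
    u = rng.random()
    num = prior_pdf(theta_star) * proposal_pdf(theta, theta_star)
    den = prior_pdf(theta) * proposal_pdf(theta_star, theta)
    if den <= 0 or u > num / den:
        return theta, False          # early rejection: no simulation needed
    x_star = simulate(theta_star, rng)
    if dist(y, x_star) <= eps:
        return theta_star, True      # accepted move
    return theta, False              # rejected after simulation
```

With a symmetric proposal the `proposal_pdf` terms cancel, so a constant function may be passed in their place.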

A.2 ABC-SMC
Recall from the main paper the weight update used in an SMC sampler (Del Moral et al., 2006) when using a sequence of targets π_t, an MCMC move for the "move" step, and where the SMC sampler backwards kernel is the reverse of this MCMC kernel: w̃_t ∝ w_{t−1} π_t (θ_{t−1}, x_{t−1}) / π_{t−1} (θ_{t−1}, x_{t−1}). The sequence of targets used in the ABC-SMC sampler of Del Moral et al. (2012a) is given by π_t (θ_t, x_t | y) ∝ p (θ_t) l (x_t | θ_t) P_t (y | x_t) for t = 1 : T, where the kernels P_t use a decreasing sequence of tolerances ε_t. We saw in the previous section that the ABC-MCMC kernel is a valid MCMC kernel targeting π_t (θ_t, x_t | y). Choosing the SMC sampler backwards kernel to be the reverse of this MCMC kernel, we obtain w̃_t ∝ w_{t−1} P_t (y | x_{t−1}) / P_{t−1} (y | x_{t−1}). In the case of indicator kernels, this weight update becomes w̃_t ∝ w_{t−1} I (d (y, x_{t−1}) ≤ ε_t), and early rejection may be used in the ABC-MCMC move. Early rejection may provide a significant computational saving in the early stages of the SMC, since the target is likely to be close to the prior.
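Throughout the paper the next tolerance is chosen adaptively so that a given number U of unique particles remains alive. A hedged Python sketch of one plausible implementation of this rule is below (the function name and the exact tie-breaking are ours; the paper's precise adaptive scheme may differ in detail).

```python
def next_tolerance(particles, distances, U):
    """Pick the smallest tolerance eps such that at least U *unique*
    parameter values still satisfy d(y, x) <= eps.

    particles: current parameter values; distances: matched d(y, x) values.
    """
    # Scan candidate tolerances from the smallest distance upwards.
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    seen = set()
    for idx in order:
        seen.add(particles[idx])
        if len(seen) >= U:
            return distances[idx]   # smallest eps keeping U unique particles
    return max(distances)           # cannot reach U unique: keep all alive
```

Because duplicated particles (e.g. copies created by resampling) count only once, a DA proposal that easily generates unique points drives this tolerance down quickly, which is the mechanism discussed in the results sections.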

B.1 DA-ABC-MCMC
We present a derivation of DA-ABC-MCMC, using the notation from the main paper. As in the previous sections, the extended state space view of ABC-MCMC is used, where the move is seen to be a Metropolis-Hastings move on the space (θ, x_1), with θ* proposed via q (· | θ) and x*_1 via l_1 (· | θ*). We may view the move as being on the space (θ, x_1, x_2), with target proportional to p (θ) l_1 (x_1 | θ) P_1 (y_1 | x_1) l_2 (x_2 | θ), where x*_2 is proposed via l_2 (· | θ*); we will see that in practice this simulation does not need to be performed at the first stage (this construction is essentially the same as in Sherlock et al. (2017)). The acceptance probability at the first stage is

α_1 = min{1, [p (θ*) P_1 (y_1 | x*_1) q (θ | θ*)] / [p (θ) P_1 (y_1 | x_1) q (θ* | θ)]},

in which the l_2 terms cancel, so that x*_2 need not be simulated at this stage. We observe that the marginal distribution of the target we have used is the ABC posterior with l_1, ε_1 and y_1. Using delayed acceptance, as described in the main paper, the acceptance probability at the second stage is

α_2 = min{1, P_2 (y | x*_2) / P_2 (y | x_2)}.

B.2 DA-ABC-SMC
We now justify the weight update for the DA-ABC-SMC method described in the main paper. The target distribution used at iteration t is proportional to p (θ_t) l_1 (x_{1,t} | θ_t) P_{1,t} (y_{1,t} | x_{1,t}) l_2 (x_{2,t} | θ_t) P_{2,t} (y | x_{2,t}). Using the same approach as in section A.2, we see that the weight update for each particle is given by

w̃_t ∝ w_{t−1} [P_{1,t} (y_{1,t} | x_{1,t−1}) P_{2,t} (y | x_{2,t−1})] / [P_{1,t−1} (y_{1,t} | x_{1,t−1}) P_{2,t−1} (y | x_{2,t−1})].

B.2.1 Using indicator kernels
In this section we consider the situation when P_{1,t} is chosen to be an indicator function, i.e. P_{1,t} (y_{1,t} | x*_{1,t}) ∝ I (d (y_{1,t}, x*_{1,t}) ≤ ε_{1,t}), where d is a distance metric. In this case, when specifying our acceptance probabilities we need to account for our target distributions having zero density in some parts of the space. We follow the framework of Tierney (1998), in which for a target π (θ) and proposal q, the MH acceptance probability is written as

α (θ, θ*) = min{1, [π (θ*) q (θ | θ*)] / [π (θ) q (θ* | θ)]} if π (θ) q (θ* | θ) > 0, and α (θ, θ*) = 0 otherwise.

Thus, at the t-th iteration of the SMC, the acceptance probability at the first stage of the delayed acceptance is

α_{1,t} = min{1, [p (θ*_t) I (d (y_{1,t}, x*_{1,t}) ≤ ε_{1,t}) q (θ_t | θ*_t)] / [p (θ_t) I (d (y_{1,t}, x_{1,t}) ≤ ε_{1,t}) q (θ*_t | θ_t)]}

when the denominator is positive, and 0 otherwise.
As in Picchini and Forman (2016), we may perform the first stage of delayed acceptance in two steps, which we will refer to as steps 1a and 1b, such that some simulations from l_1 may be avoided. At step 1a, θ*_t is simulated from q (· | θ_t) and an accept-reject step is performed using the acceptance probability

min{1, [p (θ*_t) q (θ_t | θ*_t)] / [p (θ_t) q (θ*_t | θ_t)]}.

At step 1b, x*_{1,t} is simulated from l_1 (· | θ*_t) and the entire move (θ*_t, x*_{1,t}) is accepted (to be used in stage 2) with probability I (d (y_{1,t}, x*_{1,t}) ≤ ε_{1,t}). Splitting the first stage into two substeps could itself be seen as a form of delayed acceptance, but its acceptance rate is the same as the single step implementation, since it simply uses the fact that I (d (y_{1,t}, x*_{1,t}) ≤ ε_{1,t}) is either 1 or 0 in order to reorganise the single step calculation in a computationally efficient way.
The acceptance probability at the second stage is

α_{2,t} = min{1, P_{2,t} (y | x*_{2,t}) / P_{2,t} (y | x_{2,t})},

which may be seen directly from the description of DA in the main paper with the appropriate choices of π_2 and K_1. Note that d (y, x_{2,t}) ≤ ε_{2,t} must be true for the particle to have non-zero weight, and d (y_{1,t}, x_{1,t}), d (y_{1,t}, x*_{1,t}) ≤ ε_{1,t} must be satisfied to reach the second stage of DA, thus in practice we use α_{2,t} = 1 if d (y, x*_{2,t}) ≤ ε_{2,t}, and 0 otherwise.
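The full two-stage move (step 1a, step 1b, then the second stage with indicator kernels) can be sketched in Python as follows. This is an illustrative sketch under our own interfaces (the paper used R); in particular, the expensive simulator is passed the cheap simulator's output so that it can continue the same internal chain.

```python
import random

def da_abc_mcmc_step(state, prior_pdf, propose, proposal_pdf,
                     cheap_sim, expensive_sim, dist,
                     y1, eps1, y, eps2, rng):
    """One delayed-acceptance ABC-MCMC move with indicator kernels.

    state = (theta, x1, x2).  Step 1a: prior/proposal ratio check before
    any simulation.  Step 1b: cheap simulator + eps1 screen.  Stage 2:
    expensive simulator (conditioned on the cheap output) + eps2 check.
    """
    theta, x1, x2 = state
    theta_star = propose(theta, rng)
    # Step 1a: early accept-reject using only the prior and proposal.
    num = prior_pdf(theta_star) * proposal_pdf(theta, theta_star)
    den = prior_pdf(theta) * proposal_pdf(theta_star, theta)
    if den <= 0 or rng.random() > num / den:
        return state, False
    # Step 1b: run the cheap simulator; screen with the first tolerance.
    x1_star = cheap_sim(theta_star, rng)
    if dist(y1, x1_star) > eps1:
        return state, False
    # Stage 2: only now run the expensive simulator.
    x2_star = expensive_sim(theta_star, x1_star, rng)
    if dist(y, x2_star) > eps2:
        return state, False
    return (theta_star, x1_star, x2_star), True
```

Because particles reaching stage 2 already satisfy the ε1 screen, the second-stage acceptance reduces to the single indicator on d (y, x*_2), as noted above.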

C.1 Full details of methods
All of our empirical results were generated using R (R Core Team, 2019). We study the data "LVPerfect" in the R package smfsb (Wilkinson, 2018) (the numerical methods for simulating from the likelihood are also taken from this package), previously studied in Wilkinson (2011). In this data the simulation starts with initial populations X = 50 and Y = 100, and runs for 30 time units, with the values of X and Y being recorded every 2 time units, resulting in 16 data points in each of the two time series. Our prior follows that in Wilkinson (2011), being uniform in the log domain of the parameters. Our ABC approach follows that in Wilkinson (2011) and Papamakarios and Murray (2016): as summary statistics we use a 9-dimensional vector composed of the mean, log variance and first two autocorrelations of each time series, together with the cross-correlation between them. These statistics were normalised by the standard deviation of the statistics determined by a pilot run, precisely as in Wilkinson (2011). The distance between the summary statistics used in ABC was taken to be the Euclidean norm between the normalised statistic vectors. In all our ABC algorithms we used a final tolerance log(ε2) = log(0.15) ≈ −1.89712. Reducing the tolerance below this level does not appear to have a large impact on the posterior distribution.
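The 9-dimensional summary vector can be sketched as follows (an illustrative Python sketch; the paper's code was in R, and the pilot-run normalisation is applied separately, as described above).

```python
import math

def _acf(z, lag):
    # Sample autocorrelation at a given lag.
    n = len(z)
    m = sum(z) / n
    var = sum((v - m) ** 2 for v in z)
    cov = sum((z[t] - m) * (z[t + lag] - m) for t in range(n - lag))
    return cov / var

def lv_summaries(x, y):
    """9-dimensional summary vector for two time series: mean, log
    variance and lag-1/lag-2 autocorrelations of each series, plus the
    cross-correlation between them."""
    out = []
    for z in (x, y):
        n = len(z)
        m = sum(z) / n
        var = sum((v - m) ** 2 for v in z) / (n - 1)
        out += [m, math.log(var), _acf(z, 1), _acf(z, 2)]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    cross = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    out.append(cross)
    return out
```

Each simulated data set is reduced to this vector, normalised by the pilot-run standard deviations, and compared with the observed vector under the Euclidean norm.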
We used DA-ABC-SMC with a variety of choices of U, A and N, and two different choices of the Euler-Maruyama step size s in the cheap simulator: s = 0.5 and s = 0.1, both of which result in very rough approximations of the dynamics. We compared these approaches with standard ABC-SMC, with N = 200 particles and a sequence of tolerances selected such that U = 100 unique particles are retained at each iteration, and also with "ground truth" for the posterior expectation and standard deviation of the parameters found using a long run (10^5 iterations) of ABC-MCMC. We also compared our approach with a method based on the SMC²-style approach of Duan and Fulop (2015). This approach uses the same SMC-based likelihood estimate (with M particles) as particle MCMC (Wilkinson, 2011), but embeds this within an SMC sampler rather than a pseudo-marginal MCMC chain. The sequence of distributions in the "external" SMC sampler is given by raising the likelihood estimate to a power, beginning with 0 and ending with 1 (so that the final distribution is the true posterior). The posterior targeted by this method is the same as in particle MCMC. We used the same model as in the particle MCMC of Wilkinson (2011): specifically, we used a normal distribution with mean 0 and standard deviation 10 as the measurement model at each time step. Wilkinson (2011) shows that the posterior obtained using this model has a much smaller standard deviation than the one obtained when using ABC, therefore when comparing the SMC² approach with ABC, we only compared the posterior mean (bearing in mind that this is also slightly different between the two cases). In order that the computational cost is comparable with the ABC approaches, we use a DA move within the method of Duan and Fulop (2015). Full details of this method follow.
Algorithm 4 gives the SMC sampler of Duan and Fulop (2015). This method is adapted so that the sequence of powers to which the likelihood estimates are raised is determined adaptively, by using a bisection search to find the power such that the conditional effective sample size (CESS) (Zhou et al., 2016) is αN (where α is a proportion). The CESS is defined as

CESS = N (Σ_{i=1}^N w^(i) ω^(i))² / Σ_{i=1}^N w^(i) (ω^(i))²,

where w^(i) is the normalised weight of the ith particle and ω^(i) is the incremental weight for the ith particle (the factor by which we multiply w^(i) in the weight update).

Algorithm 4 The SMC sampler of Duan and Fulop (2015), with adaptation to choose the sequence of distributions.
Inputs: Number of particles N , the proportion α used in the adaptive approach to choosing the sequence of distributions, the proportion β used in resampling, prior p, particle filtering parameters for estimating l (including the number of particles M ).
Outputs: Particles θ and weights w for all t.
for i = 1 : N do
θ^(i)_0 ∼ p (·)
Run a particle filter to find the likelihood estimate l̂^(i)
end for

When applying algorithm 4 to the Lotka-Volterra data, we found that in order for the SMC to avoid degeneracy (and give a posterior near to the true posterior), it required a configuration (in terms of choosing appropriate N, α and β) that resulted in a computational cost more than an order of magnitude greater than that of the ABC approaches. Due to this, we used a delayed acceptance MCMC move in place of the particle MCMC move given in algorithm 4. Algorithm 5 gives the resultant algorithm. In this approach, analogous to our description of DA-ABC-SMC, l_2 = l is the true likelihood to be estimated using a particle filter, and l_1 is a computationally cheap likelihood. Algorithm 5 was the method used in the main paper, with β = 0.5 in all cases, and different values of N, M and α. We note that the SMC² method of Chopin et al. (2013) was also tried, but was not found to be competitive in terms of the computational effort required to avoid degeneracy.
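The bisection search for the next tempering power can be sketched as follows (an illustrative Python sketch under our own function names; it assumes, as is typical for distinct likelihood estimates, that the CESS decreases as the power increment grows).

```python
import math

def cess(W, log_l, delta):
    # Conditional ESS for incremental weights omega_i = exp(delta * log_l[i]),
    # with normalised weights W.  Computed stably via a max-shift, which
    # cancels in the ratio.
    logw = [delta * ll for ll in log_l]
    mx = max(logw)
    om = [math.exp(lw - mx) for lw in logw]
    num = sum(w * o for w, o in zip(W, om)) ** 2
    den = sum(w * o * o for w, o in zip(W, om))
    return len(W) * num / den

def next_power(W, log_l, phi, alpha, tol=1e-6):
    """Bisection for the next power phi_new in (phi, 1] such that
    CESS = alpha * N, given log likelihood-estimates log_l."""
    target = alpha * len(W)
    if cess(W, log_l, 1.0 - phi) >= target:
        return 1.0                    # can jump straight to the posterior
    lo, hi = 0.0, 1.0 - phi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cess(W, log_l, mid) >= target:
            lo = mid                  # CESS still above target: step further
        else:
            hi = mid
    return phi + lo
```

At a zero increment the CESS equals N, so for α < 1 the search always finds a strictly positive step.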

Algorithm 5 The SMC sampler of Duan and Fulop (2015), with adaptation to choose the sequence of distributions and a DA-MCMC move.
Inputs: Number of particles N , the proportion α used in the adaptive approach to choosing the sequence of distributions, the proportion β used in resampling, prior p, particle filtering parameters for estimating l 1 and l 2 = l (including the number of particles M ).
Outputs: Particles θ and weights w for all t.
In the initialisation, θ^(i)_0 ∼ p (·) and particle filters are run to find the likelihood estimates l̂_1^(i) and l̂_2^(i); within the DA-MCMC move, the particle filter estimating l_1 is run first, and the particle filter estimating l_2 is only run if the first stage of DA accepts.

All algorithms were run 30 times, and used an expensive simulator with Euler-Maruyama step size 0.0005 (which resulted in a very accurate approximation), and included the scheme of Picchini and Forman (2016) to avoid simulations from the likelihood where they may be rejected using the prior only. In all approaches the MCMC proposal was Gaussian, centred at the current point, with variance given by the sample variance of the previous particles. To measure computational cost, we counted the total number of steps S (taking the median S̄ over the 30 runs) simulated using Euler-Maruyama, taking into account that some simulations were cut short due to the numerical solver diverging (in the implementation in the smfsb package, the practical result of this is that after a certain point in the simulation, the size of the populations is assigned "NaN"). When the simulation diverged, both population sizes were assigned to be zero after the time of the divergence.
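The Euler-Maruyama simulation with divergence handling and step counting can be sketched as follows. This is an illustrative Python sketch with an illustrative drift/diffusion for a Lotka-Volterra SDE approximation; it is not the smfsb discretisation, and the parameter names are ours.

```python
import math, random

def euler_maruyama_lv(theta, x0, y0, t_end, dt, rng):
    """Euler-Maruyama for a Lotka-Volterra SDE approximation (sketch).

    Returns the path and the number of steps actually simulated.  If the
    solver diverges (non-finite values), both populations are set to zero
    for the remainder, mirroring the handling described above, and no
    further steps are counted towards the cost.
    """
    c1, c2, c3 = theta                 # prey birth, interaction, predator death
    x, y = float(x0), float(y0)
    path, steps = [(x, y)], 0
    n = int(round(t_end / dt))
    diverged = False
    for _ in range(n):
        if not diverged:
            steps += 1
            h1, h2, h3 = c1 * x, c2 * x * y, c3 * y   # reaction hazards
            dx = (h1 - h2) * dt + math.sqrt(max(h1 + h2, 0.0) * dt) * rng.gauss(0, 1)
            dy = (h2 - h3) * dt + math.sqrt(max(h2 + h3, 0.0) * dt) * rng.gauss(0, 1)
            x, y = x + dx, y + dy
            if not (math.isfinite(x) and math.isfinite(y)):
                diverged = True
                x = y = 0.0            # zero out the rest of the path
        path.append((x, y))
    return path, steps
```

Summing `steps` across all simulator calls in a run gives the cost measure S; the cheap and expensive simulators differ only in the step size dt.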

C.2 Results
The R packages ggplot2 (Wickham, 2016), matlab (Roebuck, 2014) and mvtnorm (Genz and Bretz, 2009) were used when generating the results for this section. Our first observation is that, with the parameters N = 200 and U = 100, ABC-SMC sometimes had difficulty converging to the final tolerance. Of the 30 runs, 4 runs were not close to reducing the tolerance to 0.15 (for some runs this was the case after more than 20,000 SMC iterations). This was not observed for any other approach. In order to present comparisons between ABC-SMC and the other approaches, we focus on median rather than mean simulation times (reported in table 4). We truncated the unfinished runs to 5,000 SMC iterations and treat them as if they had finished, but to provide a fair comparison we also present results where these runs are excluded.

Table 4: The median number S̄ of time steps for each method.

D Latent exponential random graph model
An exponential random graph model (ERGM) is a model for network data in which the global network structure is modelled as having arisen through local interactions. In this section we consider the situation in which the network is not directly observed: x^h is a hidden network made up of a random variable for each edge, which takes value 1 if the edge is present and 0 if it is absent, and y is a noisy observation of this network. The ERGM on x^h has an intractable normalising constant, and our noisy observations are modelled by a distribution whose normalising constant is tractable. We studied the Dolphin network (figure 11a) (Lusseau et al., 2003), as also analysed in Caimo and Friel (2011), where the network is treated as directly observed, and used the same summary statistics and priors as in that paper. The igraph package (Csardi and Nepusz, 2006) was used to load this data into R. We used the geometrically weighted edgewise shared partner statistics with φ_u = φ_v = 0.8; the prior on θ_x = (θ_1, θ_2, θ_3) and θ_y was (θ_1, θ_2, θ_3, θ_y) ∼ N (0, 30I_4); and we used the Euclidean distance to compare simulated with observed statistics. The ergm package (Hunter et al., 2008) in R was used to simulate from l (· | θ_x); this uses the "tie no tie" (TNT) sampler, and the expensive simulator used 15,000 iterations. Our DA-ABC-SMC algorithm used U = A = 100, and again the MCMC proposal was taken to be a Gaussian distribution centred at the current particle, with covariance given by the sample covariance of the particles from the previous iteration. We ran DA-ABC-SMC with N = 1,000 and a cheap simulator having B = 1,500 (after exploratory runs suggested that this would be enough iterations to provide a useful DA proposal), and compared the results with standard ABC-SMC with the same configuration as in the previous section (both using 3 × 10^8 iterations of the TNT sampler).
Figure 11b shows the results from the two algorithms, this time showing the sequence ε_{1,t} alongside the sequence ε_{2,t}. Again we observe that the tolerance in DA-ABC-SMC reduces faster than in standard ABC-SMC, and that the tolerance ε_{1,t} changes adaptively. This data has not previously been studied using a latent ERGM. Using particle MCMC as in Everitt (2012) would require running, at every MCMC iteration, an SMC sampler to integrate out the latent ERGM space, which consists of 1891 binary edge variables. We might expect that many SMC particles would be required to produce low variance marginal likelihood estimates, leading to a high computational cost. However, the acceptance rate was very low towards the end of our ABC runs, suggesting that a very large computational cost would also be required to reduce ε_{2,t} to be close to zero.