Nonparametric adaptive estimation of order 1 Sobol indices in stochastic models, with an application to Epidemiology

Global sensitivity analysis is a set of methods that quantify the contribution of an uncertain input parameter of a model (or a combination of parameters) to the variability of the response. We consider here the estimation of Sobol indices of order 1, which are commonly used indicators based on a decomposition of the output's variance. In a deterministic framework, where the same inputs always give the same outputs, these indices are usually estimated by replicated simulations of the model. In a stochastic framework, where the response for a given set of input parameters is not unique due to randomness in the model, metamodels are often used to approximate the mean and dispersion of the response by deterministic functions. We propose a new non-parametric estimator of the Sobol indices of order 1 that does not require a metamodel. The estimator is based on warped wavelets and is adaptive in the regularity of the model. The convergence of the mean square error to zero, as the number of simulations of the model tends to infinity, is established, and an elbow effect depending on the regularity of the model is shown. Applications in epidemiology are carried out to illustrate the use of non-parametric estimators.


Introduction
Sensitivity analysis is widely used in modelling studies in public health, since the number of parameters involved is often high (see e.g. [31,36] and references therein). It can be applied to a variety of problems, and we focus here on evaluating the impact of input parameters on an output of a model. If we assume that the output of the model, y ∈ R, depends on p ∈ N input parameters x = (x_1, ..., x_p) ∈ R^p through the relation y = f(x), we are interested in evaluating how the parameter x_ℓ, for ℓ ∈ {1, . . . , p}, affects y. The vector x of input parameters can be considered as a realisation of a set of random variables X = (X_1, ..., X_p), with a known distribution and with possibly correlated components. Moreover, sensitivity analyses in epidemiology usually deal with deterministic models, although in many cases randomness and nuisance parameters have to be included, which is one of the goals of the present paper.
In public health, most studies on sensitivity analysis are performed by letting the input parameters vary on a deterministic grid, or by sampling all parameters from a prior probability distribution [3]. However, there exist other ways of measuring the influence of the inputs on the output. In this article, we are interested in Sobol indices [28], which are based on an ANOVA decomposition (see [27,16,17] for a review). Denoting by Y = f(X) the random response, the first order Sobol indices can be defined for ℓ ∈ {1, . . . , p} by

S_ℓ = Var(E[Y | X_ℓ]) / Var(Y).    (1.1)

This index represents the fraction of the variance of the output Y due to the input X_ℓ. Several numerical procedures to estimate the Sobol indices have been proposed, in particular by Jansen [19] (see also [26,27]). These estimators, which we recall in the sequel, are based on Monte-Carlo simulations of (Y, X_1, . . . , X_p). The literature focuses on deterministic relations between the input and output parameters. In a stochastic framework, where the model response Y is not unique for given input parameters, few works exist, randomness being usually limited to the input variables. Assume that:

Y = f(X, ε),    (1.2)

where X = (X_1, . . . , X_p) still denotes the random variables modelling the uncertainty of the input parameters and where ε is a noise variable. In this paper, we will assume that f (and hence Y) is bounded by M > 0. When noise is added in the model, the classical estimators do not always work: Y can be very sensitive to the addition of ε. Moreover, this variable is not always controllable by the user.
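To make (1.1) concrete, here is a small Monte-Carlo sketch (our own illustration, not part of the study below) that estimates a first-order index by brute-force double-loop conditioning, on a hypothetical stochastic toy model Y = X_1 + 2 X_2 + ε whose index S_1 is known in closed form:

```python
import random
import statistics

def model(x1, x2, rng):
    # hypothetical stochastic toy model: Y = X1 + 2*X2 + noise, noise ~ N(0, 0.1^2)
    return x1 + 2.0 * x2 + rng.gauss(0.0, 0.1)

def first_order_sobol_bruteforce(n_outer=2000, n_inner=200, seed=0):
    """Estimate S_1 = Var(E[Y | X1]) / Var(Y) by nested Monte-Carlo."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        x1 = rng.random()                                  # freeze X1 ~ U(0, 1)
        ys = [model(x1, rng.random(), rng) for _ in range(n_inner)]
        cond_means.append(statistics.fmean(ys))            # approximates E[Y | X1 = x1]
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

# here Var(E[Y | X1]) = Var(X1) = 1/12 and Var(Y) = 5/12 + 0.01, so S_1 ~ 0.195
```

The nested loop makes the cost n_outer × n_inner calls to the model, which is exactly why the cheaper estimators discussed in this paper are of interest.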
When the function f is linear, we can refer to [12]. In the literature, meta-models are used: approximating the mean and the dispersion of the response by deterministic functions allows one to come back to the classical deterministic framework (e.g. Janon et al. [18], Marrel et al. [23]). We study here another point of view, based on the non-parametric statistical estimation of the term Var(E[Y | X_ℓ]) appearing in the numerator of (1.1). Approaches based on the Nadaraya-Watson kernel estimator have been proposed by Da Veiga and Gamboa [10] and Solís [29]. We propose here a new approach based on the warped wavelet decompositions introduced by Kerkyacharian and Picard [20]. An advantage of these non-parametric estimators is that their computation requires fewer simulations of the model. For the Jansen estimators, the number of calls to f required to compute the sensitivity indices is n(p+1), where n is the number of independent random vectors (Y^i, X^i_1, . . . , X^i_p), i ∈ {1, . . . , n}, sampled for the Monte-Carlo procedure, making the estimation of the sensitivity indices time-consuming for sophisticated models with many parameters.
In Section 2, we present the non-parametric estimators of the Sobol indices of order 1 in the case of the stochastic model (1.2) and study their convergence rates. The approximation of Var(E[Y | X_ℓ]) is crucial to obtain the speed of convergence. When the conditional expectation is estimated by a Nadaraya-Watson kernel estimator, such results have been obtained by Solís [29] and Da Veiga and Gamboa [10]. The use of wavelets for estimating the conditional expectation in Sobol indices is, to our knowledge, new. Wavelet estimators are more tractable than kernel estimators in that we do not have to handle approximations of quotients. We derive the convergence rate for the estimator based on wavelets, using ideas of Laurent and Massart [21], who considered estimation of quadratic functionals in a Gaussian setting. Because we are not necessarily in a Gaussian setting here, we rely on empirical processes and use sophisticated techniques developed by Castellan [5]. Contrary to the kernel estimators, for which convergence rates rely on assumptions on the joint distribution of Y and X_1, . . . , X_p, our upper bound on the convergence rate depends on the regularity of the output Y with respect to the inputs X_1, . . . , X_p. Moreover, our estimator is adaptive: the exact regularity does not need to be known to calibrate our non-parametric wavelet estimator. Since we estimate covariance terms, we obtain elbow effects: there is a threshold in the regularity defining two regimes with different speeds of convergence for the estimator. In our case, this allows us to recover convergence rates in 1/n when the model is sufficiently regular. Further discussion is carried out in the body of the article. These estimators are then computed and compared on toy examples introduced by Ishigami [15].
In Section 3, we then address models from epidemiology for which, to our knowledge, non-parametric Sobol estimators have never been used. First, the stochastic continuous-time SIR model is considered, in which a population of size N is divided into three compartments: susceptible, infectious and removed individuals (see e.g. [1] for an introduction). Infections and removals occur at random times whose laws depend on the composition of the population and on the infection and removal parameters λ and µ, taken as input variables. The output variable Y can be, for instance, the prevalence or the incidence at a given time T. Y naturally depends on λ, µ and on the randomness underlying the occurrence of the random times. Second, we consider a stochastic multi-level epidemic model for the transmission of Hepatitis C virus (HCV) among people who inject drugs (PWID), introduced by Cousien et al. [8,9]. This model describes an individual-based population of PWID that is structured by compartments indicating the state of individuals in the health-care system and by a contact graph indicating who injects with whom. Additionally, the progression of HCV in each patient is taken into account. The input variables are the different parameters of the model. Outputs depend on these inputs, on the randomness of event occurrences and on the randomness of the social graph. We compare the sensitivity analysis performed by estimating the Sobol indices of order 1 with the naive sensitivity analysis performed in [8,9] by letting the parameters vary within a priori chosen windows.
In the sequel, C denotes a constant that can vary from line to line.

A non-parametric estimator of the Sobol indices of order 1
Denoting by V_ℓ = E[E²(Y | X_ℓ)], we have:

S_ℓ = (V_ℓ − E[Y]²) / Var(Y),    (2.1)

which can be approximated by

Ŝ_ℓ = (V̂_ℓ − Ȳ_n²) / σ̂_n²,    (2.2)

where Ȳ_n and σ̂_n² are the empirical mean and variance of Y. Several approximations V̂_ℓ of V_ℓ can be considered, for example based on Nadaraya-Watson or on warped wavelet estimators. At an advanced stage of this work, we learned that the Nadaraya-Watson-based estimator of Sobol indices of order 1 had also been proposed and studied in the PhD thesis of Solís [29]. Using a result on the estimation of covariances by Loubes et al. [22], he obtains an elbow effect. However, his estimation is not adaptive and requires knowledge of the regularity of the joint density of (X_ℓ, Y). For the warped wavelet estimator, we propose a model selection procedure based on the work of Laurent and Massart [21] to make the estimator adaptive.

Definitions
Our wavelet estimator is based on a warped wavelet decomposition of E(Y | X_ℓ = x). Let us denote by L²(µ) the space of real functions that are square integrable with respect to the measure µ. When we do not specify µ, L² denotes the space of real functions that are square integrable with respect to the Lebesgue measure on R. In the sequel, we denote by ⟨f, g⟩ = ∫_R f(u)g(u) du, for f, g ∈ L², the usual scalar product of L². The associated L²-norm is ‖f‖₂² = ∫_R f²(u) du. Wavelet estimators are projection estimators, and L² is a natural setting to work with. But when dealing with a probability framework, one can face the need to consider different Hilbert structures. Let now µ be a probability measure with cumulative distribution function G. The warped wavelet decompositions introduced by Kerkyacharian and Picard [20] allow, in a very natural way, to consider wavelet decompositions in L²(µ): composing any Hilbert basis of L² with G provides a Hilbert basis of L²(µ). See [6,20] for more details.
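As a quick numerical illustration of this warping principle (a sketch of ours, using a Haar basis and taking µ to be the Exp(1) distribution), one can check by Monte-Carlo that the warped functions ψ_jk ∘ G are orthonormal in L²(µ):

```python
import math
import random

def haar(j, k, x):
    """Haar wavelet psi_{jk} on [0, 1): father for j == -1, else 2^{j/2} psi(2^j x - k)."""
    if j == -1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    t = (2 ** j) * x - k
    if 0.0 <= t < 0.5:
        return 2 ** (j / 2.0)
    if 0.5 <= t < 1.0:
        return -(2 ** (j / 2.0))
    return 0.0

G = lambda x: 1.0 - math.exp(-x)          # cdf of Exp(1), used as the warping function

def inner_product_mu(j1, k1, j2, k2, n=100000, seed=0):
    """Monte-Carlo estimate of <psi_{j1,k1} o G, psi_{j2,k2} o G> in L2(mu)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.expovariate(1.0)          # X ~ mu, hence G(X) ~ U(0, 1)
        acc += haar(j1, k1, G(x)) * haar(j2, k2, G(x))
    return acc / n
```

Since G(X) is uniform on (0, 1) when X ~ µ, the orthonormality of the Haar family on [0, 1) transfers directly to the warped family, which is exactly the property exploited by the estimator below.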
Let us denote by G_ℓ the cumulative distribution function of X_ℓ and let (ψ_jk)_{j≥−1, k∈Z} be a Hilbert wavelet basis of L². The wavelet ψ_{−1,0} is the father wavelet, and for k ∈ Z, ψ_{−1,k}(x) = ψ_{−1,0}(x − k). The wavelet ψ_{0,0} is the mother wavelet, and for j ≥ 0, k ∈ Z, ψ_jk(x) = 2^{j/2} ψ_{0,0}(2^j x − k). In the sequel, we will consider wavelets with compact support. The warped wavelet basis that we will consider is (ψ_jk ∘ G_ℓ)_{j≥−1, k∈Z}.

Definition 2.1. Let us define for j ≥ −1, k ∈ Z,

β̂_jk = (1/n) Σ_{i=1}^n Y^i ψ_jk(G_ℓ(X^i_ℓ)).    (2.3)

Then, we define the (block thresholding) estimator of S_ℓ as (2.2) with

V̂_ℓ = Σ_{j=−1}^{J_n} ( Σ_{k∈Z} β̂²_jk − w(j) ) 1_{Σ_{k∈Z} β̂²_jk ≥ w(j)},    (2.4)

where w(j) = K (2^j + log₂ n)/n, J_n := [log₂ √n] (with [.] denoting the integer part), and K is a positive constant.
Let us present the idea behind the estimator proposed in Definition 2.1. Let us introduce centered random variables η^i such that

Y^i = h_ℓ(G_ℓ(X^i_ℓ)) + η^i,    where h_ℓ(u) = E[Y | G_ℓ(X_ℓ) = u].

Notice that the sum in k is finite because the function h_ℓ has compact support in [0, 1]. It is then natural to estimate h_ℓ(u) by

ĥ_ℓ(u) = Σ_{j,k} β̂_jk ψ_jk(u).

We can then rewrite V_ℓ as:

V_ℓ = ‖h_ℓ‖₂² = Σ_{j≥−1} Σ_{k∈Z} β²_jk.

Adaptive estimation of ‖h_ℓ‖₂² has been studied in [21], which provides the block thresholding estimator V̂_ℓ of Definition 2.1. The idea is: 1) to sum the terms β̂²_jk, for j ≥ 0, by blocks {(j, k), k ∈ Z} for j ∈ {−1, . . . , J_n}, with a penalty w(j) for each block to avoid choosing too large values of j; 2) to cut the blocks that do not contribute sufficiently to the sum, in order to obtain statistical adaptation. Notice that V̂_ℓ can be seen as an estimator of V_ℓ resulting from a model selection on the choice of the blocks {(j, k), k ∈ Z}, j ∈ {−1, . . . , J_n}, that are kept, with the penalty function pen(J) = Σ_{j∈J} w(j). Remark that the definition of the estimator and the penalization depend on a constant K through the definition of w(j). The value of this constant is chosen in order to obtain oracle inequalities. In practice, this constant is hard to compute and can be chosen by a slope heuristic approach (see e.g. [2]).

Statistical properties
In this section, we are interested in the rate of convergence to zero of the mean square error (MSE) E[(Ŝ_ℓ − S_ℓ)²].

Lemma 2.2. Consider the generic estimator Ŝ_ℓ defined in (2.2), where V̂_ℓ is any estimator of V_ℓ = E(E²(Y | X_ℓ)). Then there exist a constant C and an integer n₀ such that for all n ≥ n₀,

E[(Ŝ_ℓ − S_ℓ)²] ≤ C ( E[(V̂_ℓ − V_ℓ)²] + 1/n ).    (2.11)

Proof. From (2.1) and (2.2), we obtain the decomposition (2.12). The first term in the right hand side (r.h.s.) is in C/n for sufficiently large n. Decomposing further the second term in the right hand side of (2.12), its first term is also in C/n, which concludes the proof.
When V̂_ℓ is a Nadaraya-Watson estimator, Loubes et al. [22] established from Lemma 2.2 a control of the MSE similar to the result we announce and comment in Corollary 2.6. Their result is based on (2.11) and a bound for E[(V̂_ℓ − V_ℓ)²] given by [22, Th. 1], whose proof is technical. Here, we consider the estimator V̂_ℓ introduced in (2.4) and upper-bound the MSE. Our proof is much shorter than theirs, due to the nature of the estimators and to the techniques that we use. Let us first introduce some additional notation. For J ⊂ {−1, . . . , J_n}, we define the projection h_{J,ℓ} of h_ℓ on the subspace spanned by {ψ_jk, j ∈ J, k ∈ Z}, and its estimator ĥ_{J,ℓ}. We also introduce the estimator of V_ℓ for a fixed subset of resolutions J:

V̂_{J,ℓ} = Σ_{j∈J} Σ_{k∈Z} β̂²_jk.    (2.16)

The estimators β̂_jk and V̂_{J,ℓ} have natural expressions in terms of the empirical process γ_n(dx) defined as follows:

Definition 2.3. The empirical measure associated with our problem is:

γ_n(dx) = (1/n) Σ_{i=1}^n Y^i δ_{G_ℓ(X^i_ℓ)}(dx),

where δ_a(dx) denotes the Dirac mass at a.
For a measurable function f, we write γ_n(f) = ∫ f(x) γ_n(dx). We also define the centered integral of f with respect to γ_n(dx) as:

γ̄_n(f) = γ_n(f) − E[γ_n(f)].

Using the empirical measure γ_n(dx), we have β̂_jk = γ_n(ψ_jk). Let us also introduce the correction term defined from (2.5), (2.6) and (2.18).

Theorem 2.4. Let us assume that the random variables Y are bounded by a constant M > 0, and let us choose father and mother wavelets ψ_{−1,0} and ψ_{0,0} that are continuous with compact support (and thus bounded). The estimator V̂_ℓ defined in (2.4) is almost surely finite, and:

E[(V̂_ℓ − V_ℓ)²] ≤ C inf_{J⊂{−1,...,J_n}} ( ‖h_ℓ − h_{J,ℓ}‖₂⁴ + pen(J)² ) + C′/n,

for constants C and C′ > 0.
We deduce the following corollary from the estimate obtained above. Let us consider the Besov space B(α, 2, ∞) of functions h = Σ_{j≥−1} Σ_{k∈Z} β_jk ψ_jk of L² such that

sup_{J≥−1} 2^{2αJ} Σ_{j≥J} Σ_{k∈Z} β²_jk < +∞.

For h ∈ B(α, 2, ∞) and for its projection h_J on Vect{ψ_jk, j ∈ J = {−1, . . . , J_max}, k ∈ Z} (with J_max = max J), we have the following approximation result from [14, Th. 9.4].

Proposition 2.5 (Härdle, Kerkyacharian, Picard and Tsybakov). Assume that the wavelet function ψ_{−1,0} has compact support and is of class C^N for an integer N > 0. Then, for h ∈ B(α, 2, ∞) with 0 < α < N + 1, there exists a constant C such that ‖h − h_J‖₂² ≤ C 2^{−2αJ_max}.

Notice that Theorem 9.4 of [14] requires assumptions that are fulfilled when ψ_{−1,0} has compact support and is smooth enough (see the comment after Corol. 8.2 of [14]).
Corollary 2.6. If ψ_{−1,0} has compact support and is of class C^N for an integer N > 0, and if h_ℓ belongs to a ball of radius R > 0 of B(α, 2, ∞) for 0 < α < N + 1, then the MSE E[(Ŝ_ℓ − S_ℓ)²] converges to zero as n → +∞. As a consequence, we obtain the following elbow effect: if α < 1/4, there exists a constant C > 0 such that

E[(Ŝ_ℓ − S_ℓ)²] ≤ C n^{−8α/(4α+1)},

while for α ≥ 1/4 the MSE is of order 1/n. The proof of Theorem 2.4 is postponed to Section 5. Let us remark that, in comparison with the result of Loubes et al. [22], the regularity assumption is on the function h_ℓ rather than on the joint density φ(x, y) of (X_ℓ, Y). The adaptivity of our estimator is then welcome, since the function h_ℓ is a priori unknown. Remark that in applications the joint density φ(x, y) also has to be estimated and hence has an unknown regularity.
When α < 1/4 and α → 1/4, the exponent 8α/(4α + 1) → 1. In the case α > 1/4, we can show from the estimate of Th. 2.4 that E[(V̂_ℓ − V_ℓ)²] ≤ C/n. Since √n (V̂_ℓ − V_ℓ) converges in distribution to N(0, 4 Var(Y¹ h_ℓ(G_ℓ(X¹_ℓ)))) by the central limit theorem, we obtain that the rate 1/n cannot be improved. The result of Corollary 2.6 is stated for functions h_ℓ belonging to B(α, 2, ∞), but the generalization to other Besov spaces might be possible.

Numerical tests on toy models
We start by considering toy models based on the Ishigami function, often chosen as a benchmark:

f(X_1, X_2, X_3) = sin(X_1) + a sin²(X_2) + b X_3⁴ sin(X_1),

where the X_i are independent uniform random variables on [−π, π] and a, b > 0 are constants (see e.g. [15,26]).
Case 1 - Ishigami model: first, we consider this model with (X_1, X_2, X_3) as input parameters and compute the associated Sobol indices. For the Ishigami function, all the Sobol sensitivity indices are known in closed form.
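For reference, the Ishigami function and its closed-form first-order indices can be written out as follows (we use the standard constants a = 7, b = 0.1, an assumption on our part since the text does not specify them):

```python
import math

A, B = 7.0, 0.1          # standard Ishigami constants (assumed here)

def ishigami(x1, x2, x3):
    return math.sin(x1) + A * math.sin(x2) ** 2 + B * x3 ** 4 * math.sin(x1)

def analytic_first_order():
    """Closed-form first-order Sobol indices (S1, S2, S3) of the Ishigami function."""
    v1 = 0.5 * (1.0 + B * math.pi ** 4 / 5.0) ** 2           # Var E[Y | X1]
    v2 = A ** 2 / 8.0                                        # Var E[Y | X2]
    v13 = B ** 2 * math.pi ** 8 * (1.0 / 18.0 - 1.0 / 50.0)  # X1-X3 interaction
    var_y = v1 + v2 + v13                                    # total variance
    return v1 / var_y, v2 / var_y, 0.0                       # S3 = 0: X3 acts only in interaction
```

With these constants, S_1 ≈ 0.314, S_2 ≈ 0.442 and S_3 = 0, which is what the estimators below should recover.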
Case 2 - stochastic Ishigami model: following Marrel et al. [23], we consider the case where (X_1, X_2) are the input parameters and X_3 is a nuisance random parameter. The Sobol indices relative to X_1 and X_2 have the same values as in the first case.
In each case, we compare the estimator of the Sobol indices of order 1 based on wavelet regression with two other estimators:

• the Jansen estimator, which is one of the classical estimators found in the literature (see [19,25] for Jansen and other estimators). This estimator is based on the mixing of two independent samples (X^i)_{1≤i≤n} and (X̃^i)_{1≤i≤n} of p-uplets distributed as (X_1, . . . , X_p): for the first order Sobol indices, for all ℓ ∈ {1, ..., p},

Ŝ_ℓ^Jansen = 1 − [ (1/(2n)) Σ_{i=1}^n ( f(X^i) − f(X̃^i_1, . . . , X̃^i_{ℓ−1}, X^i_ℓ, X̃^i_{ℓ+1}, . . . , X̃^i_p) )² ] / V̂ar_n(Y);

• the estimator (2.2) defined with the Nadaraya-Watson regression estimator (e.g. [34]) for V̂_ℓ instead of the wavelet estimator (2.4).

Notice that the estimations using Jansen estimators require (p + 1)n calls to f, which is in many real cases the most numerically expensive step. To enable comparisons, we compute the non-parametric estimators of the S_ℓ's from samples of size (p + 1)n. We used n = 10,000. To obtain Monte-Carlo approximations of the estimators' distributions, we performed 1,000 replications, from which we estimate the bias and MSE of each estimator. For the wavelet (resp. Nadaraya-Watson) estimator, we choose the constant K (resp. the window h) by a leave-one-out cross-validation procedure [34, Section 1.4]. For the wavelet estimator, we use the Daubechies 4 wavelet basis.
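The Jansen procedure above can be sketched as follows (our own illustration, run against the Ishigami benchmark with uniform inputs on [−π, π] and the standard constants a = 7, b = 0.1; the sample size is illustrative):

```python
import math
import random
import statistics

def ishigami(x):
    return math.sin(x[0]) + 7.0 * math.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * math.sin(x[0])

def jansen_first_order(f, p, n, seed=0):
    """Jansen estimator of all first-order Sobol indices; uses n*(p+1) calls to f."""
    rng = random.Random(seed)
    draw = lambda: [rng.uniform(-math.pi, math.pi) for _ in range(p)]
    sample_a = [draw() for _ in range(n)]      # (X^i)
    sample_b = [draw() for _ in range(n)]      # independent copy (X~^i)
    f_a = [f(x) for x in sample_a]
    var_y = statistics.pvariance(f_a)
    indices = []
    for l in range(p):
        acc = 0.0
        for fa, a, b in zip(f_a, sample_a, sample_b):
            mixed = b[:l] + [a[l]] + b[l + 1:]  # X~^i with coordinate l frozen from X^i
            acc += (fa - f(mixed)) ** 2
        indices.append(1.0 - acc / (2.0 * n * var_y))
    return indices

# Ishigami: S1 ~ 0.314, S2 ~ 0.442, S3 = 0
```

Counting calls confirms the cost quoted in the text: n evaluations for the base sample plus n per input for the mixed samples, i.e. n(p + 1) in total.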

Case 1
The results are presented in Table 1. Comparing the MSE, the performances of the Jansen estimators are overall lower than those of the non-parametric estimators, but their bias is usually smaller. For X_1 and X_2, the Nadaraya-Watson and wavelet estimators have comparable performances, but for X_3 the Nadaraya-Watson estimator performs better. This is due to the fact that the window h for this variable can be chosen large, since the function to estimate is flat (see Figure 1).

Table 1: Estimates of the bias and MSE for the parameters X_1, X_2 and X_3 in the Ishigami function, for 1,000 replications and n = 10,000.

Figure 1: Simulations for the Ishigami function. The conditional expectation of Y knowing X_1 (resp. X_2 and X_3) is represented in line 1 (resp. 2 and 3).

Case 2

The results are presented in Table 2. As in Case 1, we see that in terms of MSE the non-parametric estimators again outperform the Jansen estimators. For X_1, the Nadaraya-Watson and wavelet estimators have comparable statistics, but the wavelet estimator is the best for X_2.

Table 2: Estimates of the bias and MSE for the parameters X_1 and X_2 in the Ishigami function, when X_3 is considered as a perturbation parameter, for 1,000 replications and n = 10,000.

Both non-parametric estimators depend on a tuning parameter: the window h for Nadaraya-Watson and the constant K for the wavelets. In Figure 2, the MSE is plotted as a function of the window h (for the Nadaraya-Watson estimator) and of the constant K (for our wavelet estimator). The performances of the wavelet estimator are much more stable with respect to the values of K on the stochastic Ishigami model (Case 2).
To conclude these simulations on the stochastic Ishigami model, we plot on logarithmic scales the MSE as a function of the sample size n: see Figure 3. The wavelet estimator is seen to be better than the Jansen estimator. For the wavelet estimator, the slope estimated by ordinary least squares equals −1.15 for X_1 and −1.12 for X_2. This is in accordance with the value of −1 predicted by Corollary 2.6.
These results suggest that the proposed non-parametric estimators constitute an interesting alternative to the Jansen estimator, showing less variability and potentially requiring fewer simulations of the model, even in the deterministic setting of Case 1.

Sobol indices for epidemiological problems
We now consider two stochastic individual-based models in epidemiology, in continuous time. In both cases, the population is of size N and divided into compartments. The input parameters are the rates describing the times that individuals stay in each compartment. These rates are usually estimated from epidemiological studies or clinical trials, but there can be uncertainty on their values for various reasons. The restricted sample sizes in these studies bring uncertainty on the estimates, which are given with uncertainty intervals (classically, a 95% confidence interval). Different studies can provide different estimates for the same parameters. The study populations can be subject to selection biases. In the case of clinical trials where the efficacy of a treatment is estimated, the estimates can be optimistic compared with the real-life effectiveness, due to the protocol of the trials. It is important to quantify how these uncertainties on the input parameters can impact the results and the conclusions of an epidemiological modelling study.

SIR model and ODE metamodels
In the first model, we consider the usual SIR model, with three compartments: susceptible, infectious and removed individuals (e.g. [1,4,11]). We denote by S^N_t, I^N_t and R^N_t the respective sizes of the corresponding sub-populations at time t ≥ 0, with S^N_t + I^N_t + R^N_t = N. At the population level, infections occur at rate (λ/N) S^N_t I^N_t and removals at rate µ I^N_t. The idea is that to each pair of susceptible-infectious individuals an independent exponential clock with parameter λ/N is attached, and to each infectious individual an independent exponential clock with parameter µ.
The input parameters are the rates λ and µ. The output parameter is the final size of the epidemic, i.e. Y = (I^N_T + R^N_T)/N at a time T > 0 such that I^N_T = 0. It is possible to describe the evolution of (S^N_t/N, I^N_t/N, R^N_t/N)_{t≥0} by a stochastic differential equation (SDE) driven by Poisson point measures (see e.g. [33]), and it is known that when N → +∞, this stochastic process converges in D(R_+, R³) to the unique solution (s_t, i_t, r_t)_{t≥0} of the following system of ordinary differential equations (e.g. [1,4,11,33]):

ds_t/dt = −λ s_t i_t,    di_t/dt = λ s_t i_t − µ i_t,    dr_t/dt = µ i_t.    (3.1)

We compute the Jansen estimators of S_λ and S_µ for the deterministic meta-model (3.1), with n = 30,000 simulations (n(p + 1) = 90,000 calls to the function f), and take these results as a benchmark. For the estimators of S_λ and S_µ in the SDE, we compute the Jansen estimators with n = 10,000 (i.e. n(p + 1) = 30,000 calls to the function f), and the estimators based on Nadaraya-Watson and wavelet regressions with n = 30,000 simulations.
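The stochastic SIR dynamics just described can be simulated exactly with a Gillespie-style scheme; below is a minimal sketch of ours producing the final-size output Y (population size and initial condition are illustrative choices):

```python
import random

def sir_final_size(lam, mu, N=1000, i0=10, seed=None):
    """Simulate the continuous-time stochastic SIR model until extinction of I
    and return Y = (I_T + R_T)/N = R_T/N, the final size of the epidemic."""
    rng = random.Random(seed)
    s, i, r = N - i0, i0, 0
    while i > 0:
        rate_inf = lam * s * i / N        # total infection rate (lambda/N per S-I pair)
        rate_rem = mu * i                 # total removal rate
        if rng.random() * (rate_inf + rate_rem) < rate_inf:
            s, i = s - 1, i + 1           # infection event
        else:
            i, r = i - 1, r + 1           # removal event
    return r / N
```

Since only the final size is needed, the jump times themselves can be skipped and only the embedded jump chain simulated. With λ = 2, µ = 1 (so R₀ = 2) and a non-negligible initial number of infectious individuals, the output clusters near the root of z = 1 − e^{−2z} ≈ 0.797, the deterministic final size of (3.1).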
Let us comment on the results. The comparison of the different estimation methods is presented in Fig. 4. Since the variances in the meta-model and in the stochastic model differ, we start by comparing the distributions of the estimators of E(Y | λ) and E(Y | µ), which are centered around the same value whether the meta-model or the stochastic model is used (Fig. 4(b)). These distributions are obtained from 1,000 Monte-Carlo simulations. Because theoretical values are not available, we take the meta-model as a benchmark. We see that the wavelet estimator performs well for both λ and µ, while the Nadaraya-Watson regression estimator exhibits biases for λ. The Jansen estimator on the stochastic model exhibits biases for both λ and µ.
Let us comment on the observed biases. As Fig. 5 shows, the simulations can give very noisy Y's: the epidemic can go extinct in a very short time in some simulations, due to the initial randomness of the trajectories. This produces distributions for the Y's that are not unimodal or that have peaks at 0, which makes the estimation of E(Y | λ) or E(Y | µ) more difficult. The wavelet estimator seems to cope well with this situation.
Next, we focus on the estimation of the Sobol indices for the stochastic model with the SDE (we leave out the deterministic meta-model for the reasons mentioned above). The smoothed distributions of the estimators of S_λ and S_µ, for 1,000 Monte-Carlo replications, are presented in Fig. 4(a); the means and standard deviations of these distributions are given in Table 3. Although there are no theoretical values for S_λ and S_µ, we can see (Table 3) that the estimators of the Sobol indices with non-parametric regressions all give similar estimates in expectation for µ. For λ, some discrepancies can be seen in Fig. 4(a) and Table 3.

Application to the spread of HCV among drug users
Chronic Hepatitis C virus (HCV) infection is a major cause of liver failure in the world, responsible for approximately 500,000 deaths annually [35]. HCV is a bloodborne disease, and transmission remains high among people who inject drugs (PWID) due to the sharing of injecting equipment [32]. Until recently, the main approaches to decrease HCV transmission among PWID in high-income countries relied on injection prevention and on risk reduction measures (access to sterile equipment, opioid substitution therapies, etc.). The arrival of highly effective antiviral treatments offers the opportunity to use treatment as a means to prevent HCV transmission, by treating infected PWID before they transmit the infection [13].
In this context, a stochastic, individual-based dynamic model was used to assess the impact of treatment on HCV transmission among PWID in the Paris area [8]. This model includes HCV transmission on a random graph modelling the PWID social network, the cascade of care of chronic hepatitis C, and the progression of the liver disease. A brief description of the model for HCV infection and the cascade of care is given in Fig. 6; for a detailed description and for the values and uncertainty intervals of the parameters, the reader can refer to [8]. In Fig. 6, Greek letters refer to rates, p_r and p_SVR to probabilities, and T_a and T_t to (deterministic) times before leaving the compartment. β depends on the status of the PWID with respect to risk reduction measures (access to sterile injecting equipment, access to substitution therapies); n_i denotes the number of infected injecting partners of the PWID; δ depends on the status of the PWID with respect to injection: active or inactive injector (i.e. before or after cessation of injection). The liver disease progression is quantified by a score (Metavir score for fibrosis progression) between F0 and F4 (cirrhosis); "complications" refers to the two cirrhosis complications: decompensated cirrhosis and hepatocellular carcinoma. These parameters are the inputs of our model, and we assume for them uniform distributions on their uncertainty intervals.
Here, Y is the prevalence after 10 years of simulation.
The parameter values used in this analysis were mainly provided by epidemiological studies and are subject to uncertainty. This kind of model requires a high computing time, and the sensitivity analysis using Monte-Carlo estimators of Sobol indices is thus difficult, due to the number of simulations needed. Therefore, we focused on the seven main parameters: the infection rate per partner, the transition rate F0/F1 > F2/F3, the rate of linkage to care and loss to follow-up (LTFU), the average time to diagnosis, the average time to cessation, the relative risk of infection (1st year), and the mortality among active PWID. The contributions of the other parameters to the variance were considered negligible, and we treated these parameters as noise in our estimates.
We estimate the Sobol indices using the wavelet non-parametric estimator, with n = 10,000 simulations of the model. We obtained unrealistic results when using the leave-one-out cross-validation procedure to select the value of K in the estimator proposed in Definition 2.1. However, keeping all the coefficients β̂_jk with j < 3 produces realistic estimates; we thus kept all these coefficients to produce the estimates.
For comparison, we also represent the sensitivity using a Tornado diagram, classically used in epidemiology. To build the Tornado diagram, we fix all the parameters but one to the values used in the main analysis and let the free parameter vary within its uncertainty interval. For each set of parameters thus obtained, the output Y is computed. Then, the parameters are sorted by decreasing variations of Y, and the deviation from the main analysis results is represented in a bar plot. We can then compare the orders of the input parameters given by the Sobol indices and by the Tornado diagram.
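Generically, the Tornado construction reduces to a small one-at-a-time routine; here is a sketch of ours on a hypothetical two-parameter model (the names `toy`, `beta` and `delta` are made up for the illustration):

```python
def tornado(f, baseline, intervals):
    """One-at-a-time sensitivity: for each parameter, record the output range
    obtained when it sweeps its uncertainty interval with the others fixed
    at baseline; bars are sorted by decreasing spread."""
    bars = []
    for name, (lo, hi) in intervals.items():
        outs = []
        for value in (lo, hi):
            params = dict(baseline)
            params[name] = value
            outs.append(f(params))
        bars.append((name, min(outs), max(outs)))
    bars.sort(key=lambda bar: bar[2] - bar[1], reverse=True)
    return bars

# hypothetical toy model, for illustration only
toy = lambda p: 3.0 * p["beta"] - 0.5 * p["delta"]
```

Note that only the interval endpoints are evaluated, which implicitly assumes a monotone response in each parameter; this is part of why the Tornado diagram explores a much smaller region of the parameter space than the Sobol approach.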
The results are presented in Figure 7. Since the Sobol indices can be interpreted as the contribution of each parameter to the variance of Y, we can see that a large part of the variance of Y is explained by the infection rate per infected partner alone, with a Sobol index of 0.6, and by the transition rate from a fibrosis score of F0/F1 to a score of F2/F3, with a Sobol index of 0.55. Next comes the linkage to care/loss to follow-up rate. The orders of the input parameters obtained by the Sobol indices and by the Tornado diagram (obtained in [8]) are in accordance for the main parameters. For the Tornado diagram, the most sensitive parameters (the infection rate per infected injecting partner, the transition rate from a fibrosis score of F0/F1 to a score of F2/F3 and the combination of the linkage to care/loss to follow-up rate) were also varied together to estimate the impact of the uncertainty about the linkage to care of PWID. The Tornado diagram, which by construction explores a much smaller region of the parameter space, detects more noisy contributions from the other factors. This appears in the group of parameters having similar Sobol indices (average time to diagnosis and cessation, relative risk of infection, mortality, F2/F3 > F4).

Conclusion
Sensitivity analysis is a key step in modelling studies, in particular in epidemiology. Models often have a high number of parameters, which are often seen as degrees of freedom to test scenarios and to take into account several interplaying phenomena and factors. The computation of Sobol indices can indicate, among a long list of input parameters, which ones have an important impact on the outputs. The classical estimators, like the Jansen estimator, require a large number of calls to the function f that generates the output from the inputs. The reason is that the Sobol indices are approximated, in these cases, by quantities involving nested sums where parameters vary one by one. The literature on sensitivity analysis focuses on outputs that depend deterministically on the inputs. When there is randomness, it is natural to propose new approximations based on non-parametric estimation, which require fewer calls to f since the information brought by simulations with close input parameters can also be used. No meta-model is required. Numerical studies on toy models show that these estimators can be used in deterministic settings too.
Independently of us and at the same time [7], Solís [29,30] introduced an estimator of the Sobol indices of order 1 based on Nadaraya-Watson regression. We hence focus in this paper on an estimator of the Sobol indices based on wavelet decompositions. For both estimators, an elbow effect is proved: under sufficient regularity, convergence rates of order 1/√n can be achieved. On numerical toy examples, we obtained a better MSE with the wavelet estimator than with the Jansen estimator of the same complexity. The non-parametric estimators allow a better exploration of the parameter space: for each simulation, the whole set of input parameters is drawn afresh. Compared with the Nadaraya-Watson estimator, the wavelet estimator is adaptive, which means that the unknown regularity of the model underlying the data does not need to be known to calibrate the estimator. On simulations, our estimator behaves similarly to the Nadaraya-Watson estimator. When well calibrated, it can overcome some smoothing biases that can appear when the output is very noisy, which is the case in epidemic scenarios where there can be either large outbreaks or quick extinctions due to stochasticity, for example.
Notice also that our proofs in the present paper are much shorter than those needed to study the estimator based on Nadaraya-Watson regression. First, the wavelet estimator is a projection estimator, so the difficulties related to the ratio appearing in the Nadaraya-Watson estimator disappear. Second, we use elegant techniques on empirical processes and concentration inequalities (developed independently from sensitivity analysis) due to Castellan [5] to adapt the results of Laurent and Massart in the Gaussian case [21].
This first order index $S_\ell$ corresponds to the sensitivity of the model to $X_\ell$ alone. Higher order indices can also be defined using the ANOVA decomposition: considering $(\ell, \ell') \in \{1, \dots, p\}^2$ with $\ell \neq \ell'$, we can define the second order sensitivity index, corresponding to the sensitivity of the model to the interaction between $X_\ell$ and $X_{\ell'}$, by
\[
S_{\ell \ell'} = \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_\ell, X_{\ell'}]\big) - \mathrm{Var}\big(\mathbb{E}[Y \mid X_\ell]\big) - \mathrm{Var}\big(\mathbb{E}[Y \mid X_{\ell'}]\big)}{\mathrm{Var}(Y)}.
\]
We can also define the total sensitivity indices by
\[
S_{T_\ell} = 1 - \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_{-\ell}]\big)}{\mathrm{Var}(Y)},
\]
where $X_{-\ell}$ denotes the vector of all input parameters except $X_\ell$. These indices allow us to assess 1) the sensitivity of the model to each parameter taken separately and 2) the possible interactions, which are quantified by the difference between the total order and the first order index for each parameter. The estimation of higher order indices using nonparametric techniques would be an interesting subject for further research.
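To make the interaction term concrete, here is a minimal Python sketch (our own toy example, not taken from the paper) of Jansen's pick-freeze estimator for the total indices, applied to Y = X1·X2, a model whose variance beyond first order is pure interaction: analytically S_1 = S_2 = 3/7 while S_T1 = S_T2 = 4/7, so each interaction gap S_Tℓ − S_ℓ equals 1/7.

```python
import numpy as np

def jansen_total(f, sampler, n, p, rng):
    """Total Sobol indices via Jansen's pick-freeze estimator:
    S_T,l ~ (1/(2n)) * sum_i (f(A_i) - f(AB_l_i))^2 / Var(Y),
    where AB_l equals A with column l replaced by that of B."""
    A, B = sampler(n, rng), sampler(n, rng)
    yA = f(A)
    var = np.var(np.concatenate([yA, f(B)]), ddof=1)
    ST = np.empty(p)
    for l in range(p):
        AB = A.copy()
        AB[:, l] = B[:, l]            # redraw only X_l
        ST[l] = np.mean((yA - f(AB)) ** 2) / (2.0 * var)
    return ST

# Y = X1 * X2 with Xi ~ U(0,1): S_1 = S_2 = 3/7, S_T1 = S_T2 = 4/7.
rng = np.random.default_rng(0)
ST = jansen_total(lambda X: X[:, 0] * X[:, 1],
                  lambda n, r: r.uniform(size=(n, 2)), 200_000, 2, rng)
```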

Proof of Theorem 2.4
We follow the scheme of the proof of Theorem 1 in [21]. The main difficulty here is that we are not in a Gaussian framework and that we use the empirical process γ_n, which introduces many technical difficulties.
In the sequel, C denotes a constant that can vary from line to line. Using Lemma 2.2, we concentrate on the MSE E[(V̂ − V)²]. First, we will prove that: where V_{J,ℓ} has been defined in (2.16). The penalization term associated with a subset J ⊂ {−1, . . . , J_n} has been defined in (2.9). Then, considering the first term in the r.h.s. of (5.1), we prove: Step 1: Using this, we start by rewriting. Thus: The first term in the r.h.s. is treated in Step 2, and the second term in Step 3. After summation over J ⊂ {−1, . . . , J_n}, this provides an upper bound for the first term in the r.h.s. of (5.3), which proves (5.1).
Step 2: Upper bound of the first term in the r.h.s. of (5.8). The first term in the r.h.s. of (5.7) is the approximation error of ĥ_J by h_J, and equals: To control it, let us introduce, for coefficients a = (a_{jk}, −1 ≤ j ≤ J_n, k ∈ Z), the set: Then, to upper bound the first term in (5.8), we can write: where, for χ_n(J) defined in (5.9): The upper bounds of A_1(J) and A_2(J) are the object of the remainder of Step 2. We use ideas developed in [5].
Upper bound for A_1(J). To upper bound A_1(J), we use the identity and look for deviation inequalities of χ²_n(J) 𝟙_{Ω_J(ρ)}. Then, estimates of the probability of Ω^c_J(ρ) are studied to control A_2(J).
We can then use Talagrand's inequality (see [24, p. 170]) to obtain that for all η > 0 and x > 0, where the quantities ν_n and b_n can be chosen respectively as ν_n = M²/n and b_n = 2M‖ψ‖_∞ ρ Card(J)/n. Indeed, ν_n is an upper bound of: where the last inequality comes from the definitions of Λ_J and F_{1,J}.
As for the term b_n, it is an upper bound of: if f = Σ_{j∈J} Σ_{k∈Z} a_{jk} ψ_{jk}. For the expectation appearing in the probability in the r.h.s. of (5.16), we have: by the fact that the wavelets ψ_{jk} have compact support and satisfy ‖ψ_{jk}‖²₂ = 1. The constant C in (5.19) is the number of wavelets (ψ_{0k})_{k∈Z} whose supports intersect [0, 1]. Thus, for a given j ≥ 0, the number of wavelets (ψ_{jk})_{k∈Z} whose supports intersect [0, 1] is of order 2^j C.
Let us consider the square bracket in (5.31). Recall (5.30). Because Card(J) ≤ J_n = log₂(√n), x_J/n converges to zero when n → +∞, and it is possible to choose a constant K and n₀ sufficiently large such that for all n ≥ n₀, where we recall that pen₂(J) has been defined in (5.5). Then, this yields
\[
\mathbb{P}\big(\gamma_n^2(\varphi_J) - \mathrm{pen}_2(J) \geq r_n(\xi)\big) \leq e^{-x_J} e^{-\xi}. \tag{5.33}
\]
From this, we deduce that: The last inequality comes from the behaviour of the integrand when t is close to 0. Gathering the results of Steps 1 to 3, we have by (5.11) and (5.8) that the first term in the r.h.s. of (5.3) is smaller than C log₂²(n)/n^{3/2}. This proves (5.1).

Step 4:
Let us now consider the term E[−V̂_{J,ℓ} + pen(J) + V + ζ_n]. For the second term in the r.h.s. of (5.36), we have: (5.37) by using 2ab ≤ a² + b² for the last inequality. The third term in the r.h.s. of (5.36) has been treated in (5.11) previously; we established an upper bound in Card²(J)/n² (see (5.23)). For the fourth term, pen₂²(J) = K² log²(2) Card²(J)/n² from (5.5). Gathering these results, we obtain (5.2) and then (2.21).
For h in a ball of radius R, ‖h‖²₂ ≤ R², and we can find an upper bound that does not depend on h. Because the last term in (5.39) is of order 1/n, the elbow effect is obtained by comparing the order of the first term in the r.h.s., n^{−8α/(4α+1)}, with 1/n as α varies.
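The comparison can be made explicit; the following short computation (ours, using only the two rates stated above) locates the elbow:

```latex
n^{-8\alpha/(4\alpha+1)} \geq n^{-1}
\iff \frac{8\alpha}{4\alpha+1} \leq 1
\iff \alpha \leq \frac{1}{4}.
```

Hence the MSE is of order $n^{-8\alpha/(4\alpha+1)}$ for $\alpha < 1/4$ and of the parametric order $n^{-1}$ for $\alpha \geq 1/4$: the rate improves with the regularity $\alpha$ up to the threshold $\alpha = 1/4$ and then saturates, which is the elbow effect.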

A Sobol indices
The Sobol indices are based on the following decomposition of $f$ (see Sobol [28]). We recall the formulas here, with the notation $X_{p+1}$ for the random variable $\varepsilon$:
\[
Y = f(X_1, \dots, X_p, \varepsilon) = f_0 + \sum_{\ell=1}^{p+1} f_\ell(X_\ell) + \sum_{1 \leq \ell < \ell' \leq p+1} f_{\ell \ell'}(X_\ell, X_{\ell'}) + \dots + f_{1,\dots,p+1}(X_1, \dots, X_p, \varepsilon).
\]
Then, the variance of $Y$ can be written as:
\[
\mathrm{Var}(Y) = \sum_{\ell=1}^{p+1} V_\ell + \sum_{1 \leq \ell < \ell' \leq p+1} V_{\ell \ell'} + \dots + V_{1,\dots,p+1}, \qquad V_\ell = \mathrm{Var}\big(f_\ell(X_\ell)\big), \ \text{etc.} \tag{A.3}
\]
The first order indices are then defined as:
\[
S_\ell = \frac{V_\ell}{\mathrm{Var}(Y)} = \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_\ell]\big)}{\mathrm{Var}(Y)}.
\]
$S_\ell$ corresponds to the part of the variance of $Y$ that is explained by the variable $X_\ell$ alone. In the same manner, we define the second order indices, third order indices, etc., by dividing the corresponding variance terms by $\mathrm{Var}(Y)$.
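The decomposition of the variance can be checked numerically by pick-freeze Monte Carlo: Cov(Y, Y') = Var(E[Y | X_ℓ]) when Y' is computed from a copy of the inputs sharing only X_ℓ with Y. Below is a minimal sketch on a toy model of our own, Y = X1 + 2X2 + X1X2 with independent Xi ~ U(0,1), for which V_1 = 3/16, V_2 = 25/48 and V_12 = 1/144 in closed form.

```python
import numpy as np

# ANOVA/pick-freeze check on Y = X1 + 2*X2 + X1*X2, Xi ~ U(0,1) independent:
# f_1(X1) = 1.5*X1 - 0.75, f_2(X2) = 2.5*X2 - 1.25,
# f_12(X1, X2) = (X1 - 1/2)(X2 - 1/2), so
# V_1 = 3/16, V_2 = 25/48, V_12 = 1/144 and Var(Y) = V_1 + V_2 + V_12.
f = lambda x1, x2: x1 + 2 * x2 + x1 * x2
rng = np.random.default_rng(1)
n = 200_000
X1, X2, X1p, X2p = rng.uniform(size=(4, n))
Y = f(X1, X2)
V1 = np.cov(Y, f(X1, X2p))[0, 1]   # copies share X1 only -> Var(E[Y|X1])
V2 = np.cov(Y, f(X1p, X2))[0, 1]   # copies share X2 only -> Var(E[Y|X2])
V12 = np.var(Y, ddof=1) - V1 - V2  # remaining interaction variance
```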