Kernel-based Measures of Association Between Inputs and Outputs Using ANOVA

Abstract

ANOVA decomposition of a function with random input variables provides ANOVA functionals (AFs), which contain information about the contributions of the input variables to the output variable(s). By embedding the distributions of AFs into an appropriate reproducing kernel Hilbert space, we propose an efficient statistical test of independence between the input variables and output variable(s). The resulting test statistic leads to new dependence measures of association between inputs and outputs that allow for i) dealing with any distribution of AFs, including the Cauchy distribution, and ii) accounting for the necessary or desirable moments of AFs and the interactions among the input variables. In uncertainty quantification for mathematical models, a number of existing measures are special cases of this framework. We then provide unified and general global sensitivity indices and their consistent estimators, including asymptotic distributions. For Gaussian-distributed AFs, we obtain Sobol’ indices and dependent generalized sensitivity indices using quadratic kernels.

Data Availability

No data sets are used in this paper.

References

  • A. Antoniadis, Analysis of variance on function spaces, Series Statistics 15 (1) (1984) 59–71.

  • N. Aronszajn, Theory of reproducing kernels, Transactions of the American Mathematical Society 68 (1950) 337–404.

  • J. Barr, H. Rabitz, A generalized kernel method for global sensitivity analysis, SIAM/ASA Journal on Uncertainty Quantification 10 (1) (2022) 27–54.

  • A. Berlinet, C. Thomas-Agnan, Reproducing Kernel Hilbert Spaces in Probability and Statistics, Kluwer Academic, 2004.

  • K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, A. J. Smola, Integrating structured biological data by kernel maximum mean discrepancy, Bioinformatics 22 (14) (2006) 49–57.

  • S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.

  • S. Chatterjee, A new coefficient of correlation, Journal of the American Statistical Association (2020) 1–21.

  • S. Conti, A. O'Hagan, Bayesian emulation of complex multi-output and dynamic computer models, Journal of Statistical Planning and Inference 140 (3) (2010) 640–651.

  • D. Conn, G. Li, An oracle property of the Nadaraya-Watson kernel estimator for high-dimensional nonparametric regression, Scandinavian Journal of Statistics 46 (3) (2019) 735–764.

  • S. Da Veiga, Kernel-based ANOVA decomposition and Shapley effects: application to global sensitivity analysis, arXiv preprint arXiv:2101.05487 (2021).

  • E. de Rocquigny, N. Devictor, S. Tarantola (Eds.), Uncertainty in Industrial Practice, Wiley, 2008.

  • B. Efron, C. Stein, The jackknife estimate of variance, The Annals of Statistics 9 (1981) 586–596.

  • Y. Escoufier, Le traitement des variables vectorielles, Biometrics 29 (1973) 751–760.

  • K. Fukumizu, A. Gretton, B. Schölkopf, B. K. Sriperumbudur, Characteristic kernels on groups and semigroups, in: D. Koller, D. Schuurmans, Y. Bengio, L. Bottou (Eds.), Advances in Neural Information Processing Systems, Vol. 21, Curran Associates, Inc., 2009.

  • K. Fukumizu, F. Bach, M. Jordan, Kernel dimensionality reduction for supervised learning, in: S. Thrun, L. Saul, B. Schölkopf (Eds.), Advances in Neural Information Processing Systems, Vol. 16, MIT Press, 2004.

  • A. Feuerverger, A consistent test for bivariate dependence, International Statistical Review 61 (3) (1993) 419–433.

  • K. Fukumizu, A. Gretton, X. Sun, B. Schölkopf, Kernel measures of conditional dependence, in: Proceedings of the 20th International Conference on Neural Information Processing Systems, NIPS'07, Curran Associates Inc., Red Hook, NY, USA, 2008, pp. 489–496.

  • F. Gamboa, A. Janon, T. Klein, A. Lagnoux, Sensitivity analysis for multidimensional and functional outputs, Electronic Journal of Statistics 8 (1) (2014) 575–603.

  • F. Gamboa, A. Janon, T. Klein, A. Lagnoux, Sensitivity indices for multivariate outputs, Comptes Rendus Mathematique 351 (7) (2013) 307–310.

  • A. Gelman, Analysis of variance: why it is more important than ever, The Annals of Statistics 33 (1) (2005) 1–53.

  • A. Gretton, R. Herbrich, A. Smola, O. Bousquet, B. Schölkopf, Kernel methods for measuring independence, Journal of Machine Learning Research 6 (2005) 2075–2129.

  • A. Gretton, O. Bousquet, A. Smola, B. Schölkopf, Measuring statistical dependence with Hilbert-Schmidt norms, in: International Conference on Algorithmic Learning Theory, Springer, 2005, pp. 63–77.

  • A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, A. Smola, A kernel method for the two-sample-problem, in: B. Schölkopf, J. Platt, T. Hoffman (Eds.), Advances in Neural Information Processing Systems, Vol. 19, MIT Press, 2007.

  • A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, A. Smola, A kernel two-sample test, Journal of Machine Learning Research 13 (2012) 723–773.

  • W. Hoeffding, A class of statistics with asymptotically normal distribution, Annals of Mathematical Statistics 19 (1948) 293–325.

  • H. Hotelling, Relations between two sets of variates, Biometrika 28 (1936) 321–377.

  • J. Josse, S. Holmes, Measuring multivariate association and beyond, Statistics Surveys 10 (2016) 132–167.

  • I. Kojadinovic, M. Holmes, Tests of independence among continuous random vectors based on Cramér-von Mises functionals of the empirical copula process, Journal of Multivariate Analysis 100 (6) (2009) 1137–1154.

  • M. Lamboni, B. Iooss, A.-L. Popelin, F. Gamboa, Derivative-based global sensitivity measures: general links with Sobol' indices and numerical tests, Mathematics and Computers in Simulation 87 (2013) 45–54.

  • M. Lamboni, Global sensitivity analysis: an efficient numerical method for approximating the total sensitivity index, International Journal for Uncertainty Quantification 6 (1) (2016) 1–17.

  • M. Lamboni, Derivative-based generalized sensitivity indices and Sobol' indices, Mathematics and Computers in Simulation 170 (2020) 236–256.

  • M. Lamboni, Weak derivative-based expansion of functions: ANOVA and some inequalities, Mathematics and Computers in Simulation 194 (2022) 691–718.

  • M. Lamboni, On dependent generalized sensitivity indices and asymptotic distributions, arXiv preprint arXiv:2104.12938 (2021).

  • M. Lamboni, Efficient dependency models: simulating dependent random variables, Mathematics and Computers in Simulation 200 (2022) 199–217.

  • M. Lamboni, Derivative-based integral equalities and inequality: a proxy-measure for sensitivity analysis, Mathematics and Computers in Simulation 179 (2021) 137–161.

  • M. Lamboni, S. Kucherenko, Multivariate sensitivity analysis and derivative-based global sensitivity measures with dependent variables, Reliability Engineering & System Safety 212 (2021) 107519.

  • M. Lamboni, H. Monod, D. Makowski, Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models, Reliability Engineering & System Safety 96 (2011) 450–459.

  • M. Lamboni, Uncertainty quantification: a minimum variance unbiased (joint) estimator of the non-normalized Sobol' indices, Statistical Papers 61 (2020) 1939–1970.

  • J. E. Oakley, A. O'Hagan, Probabilistic sensitivity analysis of complex models: a Bayesian approach, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66 (3) (2004) 751–769.

  • A. B. Owen, J. Dick, S. Chen, Higher order Sobol' indices, Information and Inference: A Journal of the IMA 3 (1) (2014) 59–81.

  • K. Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine 2 (1901) 559–572.

  • E. Plischke, E. Borgonovo, Fighting the curse of sparsity: probabilistic sensitivity measures from cumulative distribution functions, Risk Analysis 40 (12) (2020) 2639–2660.

  • M. L. Rizzo, G. J. Székely, Energy distance, WIREs Computational Statistics 8 (1) (2016) 27–38.

  • A. Rényi, On measures of dependence, Acta Mathematica Academiae Scientiarum Hungarica 10 (3-4) (1959) 441–451.

  • D. Sejdinovic, B. Sriperumbudur, A. Gretton, K. Fukumizu, Equivalence of distance-based and RKHS-based statistics in hypothesis testing, The Annals of Statistics 41 (5) (2013) 2263–2291.

  • B. Schölkopf, A. J. Smola, Learning with Kernels, MIT Press, Cambridge, MA, 2002.

  • A. V. Skorohod, On a representation of random variables, Theory of Probability and Its Applications 21 (3) (1976) 645–648.

  • L. Song, A. Smola, A. Gretton, J. Bedo, K. Borgwardt, Feature selection via dependence maximization, Journal of Machine Learning Research 13 (5) (2012).

  • A. Smola, A. Gretton, L. Song, B. Schölkopf, A Hilbert space embedding for distributions, in: International Conference on Algorithmic Learning Theory, Springer, 2007, pp. 13–31.

  • G. J. Székely, M. L. Rizzo, N. K. Bakirov, Measuring and testing dependence by correlation of distances, The Annals of Statistics 35 (6) (2007) 2769–2794.

  • B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, G. R. Lanckriet, Hilbert space embeddings and metrics on probability measures, Journal of Machine Learning Research 11 (2010) 1517–1561.

  • I. M. Sobol', Sensitivity analysis for non-linear mathematical models, Mathematical Modelling and Computational Experiments 1 (1993) 407–414.

  • A. Saltelli, K. Chan, E. Scott, Variance-Based Methods, Probability and Statistics, John Wiley and Sons, 2000.

  • S. Da Veiga, Global sensitivity analysis with dependence measures, Journal of Statistical Computation and Simulation 85 (7) (2015) 1283–1305.

  • S. Xiao, Z. Lu, P. Wang, Multivariate global sensitivity analysis for dynamic models based on energy distance, Structural and Multidisciplinary Optimization 57 (1) (2018) 279–291.

Acknowledgements

We would like to thank the referees for their comments and suggestions, which have helped improve this paper.

Author information

Corresponding author

Correspondence to Matieyendou Lamboni.

Ethics declarations

Conflict of interest

The author has no conflicts of interest to declare regarding the content of this paper.

Appendices

Appendix A Proof of Lemma 1

First, if the output \(f(\textbf{X})\) is independent of \( \textbf{X}_u\), then

$$ f(\textbf{X}) {\mathop {=}\limits ^{d}} f(\textbf{X}_u, r_u(\textbf{X}_u, \textbf{Z})) =: g(\textbf{X}_u, \textbf{Z}) = g(\textbf{Z})\, . $$

Therefore, since \(g(\textbf{X}_{u}, \textbf{Z}) = g(\textbf{Z})\) does not depend on \(\textbf{X}_{u}\), we have \(g^{tot}_u(\textbf{X}_u, \textbf{Z}) = g(\textbf{X}_{u}, \textbf{Z}) - \mathbb {E}_{\textbf{X}_{u}} \left[ g(\textbf{X}_{u}, \textbf{Z}) \right] =\textbf{0} \, a.s.\).

Conversely, if \(g^{tot}_u(\textbf{X}_{u}, \textbf{Z}) =\textbf{0}\), the properties of conditional expectation show that there exists a function h such that

$$ g(\textbf{X}_{u}, \textbf{Z}) = \mathbb {E}_{\textbf{X}_{u}} \left[ g(\textbf{X}_{u}, \textbf{Z}) \right] = h(\textbf{Z})\, , $$

which means that \(g(\textbf{X}_{u}, \textbf{Z})\) is a function of \(\textbf{Z}\) only, and the result holds.
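
To fix ideas, the following minimal numerical sketch (our own toy model, not taken from the paper) illustrates Lemma 1: when the output ignores \(\textbf{X}_u\), the total ANOVA functional \(g^{tot}_u(\textbf{X}_u, \textbf{Z}) = g(\textbf{X}_u, \textbf{Z}) - \mathbb {E}_{\textbf{X}_u}[g(\textbf{X}_u, \textbf{Z})]\) vanishes.

```python
# Minimal illustration of Lemma 1 (hypothetical toy model, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def f(x1, x2):
    # the output ignores x1, so f(X) is independent of X_u = X_1
    return 0.0 * x1 + x2**2 + np.sin(x2)

m, m1 = 5000, 2000
x1 = rng.normal(size=m)          # X_u
z = rng.normal(size=m)           # Z
x1_in = rng.normal(size=m1)      # fresh copies of X_u for the inner mean

# g_u^tot(X_u, Z) = g(X_u, Z) - E_{X_u}[g(X_u, Z)], evaluated pathwise
g_tot = f(x1, z) - np.array([f(x1_in, zz).mean() for zz in z])
print(np.abs(g_tot).max())       # exactly 0 here, consistent with Lemma 1
```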

Appendix B Proof of Lemma 2

Bearing in mind Definition 4, we want to show that

$$\begin{aligned} \mathbb {E}\left[ k\left( g^{tot}_u (\textbf{X}_{u}, \textbf{Z}), \, g^{tot}_u (\textbf{X}_{u}', \textbf{Z}') \right) \right] -k(\textbf{0}, \textbf{0}) = 0 \, \Longrightarrow \, g^{tot}_u (\textbf{X}_{u}, \textbf{Z})=\textbf{0} \,. \nonumber \end{aligned}$$

First, let us start with the kernel k. Using the transfer theorem, we can write

$$ \mathbb {E}\left[ k\left( g^{tot}_u (\textbf{X}_{u}, \textbf{Z}), \, g^{tot}_u (\textbf{X}_{u}', \textbf{Z}') \right) \right] -k(\textbf{0}, \textbf{0}) = \int _{\mathcal {X}^2} k\left( w, w' \right) \, d(F_{T_u}\otimes F_{T_u} -H \otimes H)(w,w') \, , $$

with \(F_{T_u}\) the CDF of \(g^{tot}_u (\textbf{X}_{u}, \textbf{Z})\) and H the CDF of \(\delta _{\textbf{0}}\). For an SPD kernel k, when the above quantity is zero, it implies that \(F_{T_u} \otimes F_{T_u} = H \otimes H\); hence \(F_{T_u} =H\) and \(g^{tot}_u (\textbf{X}_{u}, \textbf{Z})=\textbf{0} \, a.s.\).

Second, we are going to use the criterion \(\mathbb {E}_{\textbf{G} \sim F, \textbf{G}'\sim F}\left[ \bar{k}(\textbf{G}, \textbf{G}')\right] =0 \Rightarrow F= H\). Since \(\bar{k}\) is centered at \(\textbf{0}\), we have

$$ \mathbb {E}\left[ \bar{k}\left( g^{tot}_u (\textbf{X}_{u}, \textbf{Z}), \, g^{tot}_u (\textbf{X}_{u}', \textbf{Z}') \right) \right] -\bar{k}(\textbf{0}, \textbf{0}) = \mathbb {E}_{\textbf{G}^{tot}\sim F_{T_u}, \textbf{G}^{tot '} \sim F_{T_u}}\left[ \bar{k}\left( \textbf{G}^{tot}, \, \textbf{G}^{tot '} \right) \right] =0 \, , $$

which implies that \(F_{T_u} =H\).
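
As an illustration of the statistic used above, the following sketch estimates \(\mathbb {E}[\bar{k}(\textbf{G}, \textbf{G}')]\) by Monte Carlo with a Gaussian (characteristic) kernel. It assumes the centered kernel has the form \(\bar{k}(\textbf{y}, \textbf{y}') = k(\textbf{y}, \textbf{y}') - k(\textbf{y}, \textbf{0}) - k(\textbf{0}, \textbf{y}') + k(\textbf{0}, \textbf{0})\), which is consistent with the RKHS norm identity used in Appendix D; this is our reading, not a definitive implementation.

```python
# Hedged sketch of the independence statistic behind Lemma 2.
import numpy as np

rng = np.random.default_rng(1)
k = lambda y, yp: np.exp(-0.5 * (y - yp) ** 2)   # Gaussian (characteristic) kernel

def kbar(y, yp):
    # centered kernel, assumed form consistent with Appendix D
    return k(y, yp) - k(y, 0.0) - k(0.0, yp) + k(0.0, 0.0)

def stat(G, Gp):
    """Monte Carlo estimate of E[kbar(G, G')] from paired independent copies."""
    return kbar(G, Gp).mean()

G_zero = np.zeros(5000)                     # total AF of an inert input: G = 0 a.s.
G_dep = rng.normal(size=5000)               # total AF of an influential input
print(stat(G_zero, np.zeros(5000)))         # exactly 0: no dependence detected
print(stat(G_dep, rng.normal(size=5000)))   # > 0: dependence detected
```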

Appendix C Proof of Lemma 3

For Point (i), we are going to use the first criterion of \(\mathcal {K}_E\) (see Equation (6)). We can write for all \(F \in \mathcal {F}\)

$$ \mathbb {E}_{\textbf{G} \sim F, \textbf{G}' \sim F}\left[ \bar{k}\left( \textbf{G}, \textbf{G}' \right) \right] \!=\! \left| \left| \mathbb {E}_{\textbf{G} \sim F}\left[ \bar{k}(\cdot , \textbf{G}) \right] \right| \right| _\mathcal {H}^2 =0 \Longleftrightarrow \mathbb {E}_{\textbf{G} \sim F}\left[ \bar{k}(\cdot , \textbf{G}) \right] = 0\, . $$

Note that \(\mathbb {E}_{\textbf{G} \sim F}\left[ \bar{k}(\cdot , \textbf{G}) \right] = 0 = \mathbb {E}_{W \sim H}\left[ \bar{k}(\cdot , W) \right] \) with H the CDF of \(\delta _{\{\textbf{0}\}}\), and \(\bar{k}\) is a characteristic kernel when k is a characteristic one. Thus, Point (i) holds because \(\mathbb {E}_{\textbf{G} \sim F}\left[ \bar{k}(\cdot , \textbf{G}) \right] = \mathbb {E}_{W\sim H}\left[ \bar{k}(\cdot , W) \right] \) implies \(F =H\).

For Point (ii), we are going to use the second criterion of \(\mathcal {K}_E\). According to the Bochner Lemma and the Fubini theorem, we can write, for independent vectors Y and Z,

$$\begin{aligned} \mathbb {E}_{Y\sim \mu , Z \sim \mu }\left[ k(Y, Z)\right]= & {} \int _{\mathcal {X}^2} k(\textbf{y}, \textbf{z}) \, d\mu \otimes \mu (\textbf{y}, \textbf{z}) \nonumber \\= & {} \int _{\mathcal {X}^2} \int _{\mathbb {R}^n} e^{-i \textbf{y}^T\textbf{w}} e^{i \textbf{z}^T\textbf{w}} \, d\mu (\textbf{y}) d\mu (\textbf{z}) d\Lambda (\textbf{w}) \nonumber \\= & {} \int _{\mathbb {R}^n} \left| \int _{\mathcal {X}} e^{-i \textbf{y}^T\textbf{w}} \, d\mu (\textbf{y}) \right| ^2 d\Lambda (\textbf{w})\nonumber \, . \end{aligned}$$

Thus, \(\mathbb {E}_{Y\sim \mu , Z \sim \mu }\left[ k(Y, Z)\right] =0\) implies that \(\int e^{-i \textbf{y}^T\textbf{w}} \, d\mu (\textbf{y})=0\) for all \(\textbf{w} \in Supp(\Lambda )= \mathbb {R}^n\). For a class of finite and signed Borel measures of the form \(\mu (A) := \int _A h(\textbf{y}) \, d\textbf{y} \) with \(h: \mathbb {R}^n\rightarrow \mathbb {R}\) a measurable function such as a difference of two probability densities, the function

$$ \widehat{h}(\textbf{w}) := \int e^{-i \textbf{y}^T\textbf{w}} \, d\mu (\textbf{y}) = \int e^{-i \textbf{y}^T\textbf{w}} h(\textbf{y}) \,d\textbf{y}, \quad \forall \; \textbf{w} \in Supp(\Lambda ) =\mathbb {R}^n\, , $$

is the Fourier transform of \(h(\textbf{y})\). As \(\widehat{h}(\textbf{w})=0\) for all \(\textbf{w} \in \mathbb {R}^n\), the inverse Fourier transform gives \(h(\textbf{y}) =0\). Point (ii) holds because \(\mu =0\).
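
The following small numerical check (our example) illustrates the Bochner argument in one dimension: for the Gaussian kernel and h a difference of two densities, \(\int \int k(y, z) h(y) h(z) \, dy \, dz = \int |\widehat{h}(w)|^2 \, d\Lambda (w) \ge 0\), with equality only when the two densities coincide.

```python
# Grid-quadrature check of the Bochner argument (illustrative, not from the paper).
import numpy as np

y = np.linspace(-10, 10, 2001)
dy = y[1] - y[0]
gauss = lambda x, mu: np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
K = np.exp(-0.5 * (y[:, None] - y[None, :]) ** 2)   # Gaussian kernel matrix on the grid

for mu in (0.0, 0.5):
    h = gauss(y, 0.0) - gauss(y, mu)                # difference of two densities
    quad = h @ K @ h * dy * dy                      # double integral of k against h x h
    print(mu, quad)   # 0 when mu = 0 (h is identically 0); strictly positive when mu = 0.5
```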

Appendix D Proof of Lemma 4

Let \(\mathcal {H}\) denote a Hilbert space induced by k. Without loss of generality, we show the results for \(q=1\).

First, using the convexity of \(J(f) :=\left| \left| f \right| \right| _\mathcal {H}^2\) with \(f \in \mathcal {H}\), we know that there exists a gradient of J (namely, \(\nabla J(f) := 2f\)) such that, for all \(f_0 \in \mathcal {H}\) (Boyd and Vandenberghe, 2004),

$$ \left| \left| f \right| \right| _\mathcal {H}^2 - \left| \left| f_0 \right| \right| _\mathcal {H}^2 \ge \langle 2 f_0,\, f-f_0 \rangle _{\mathcal {H}}\, . $$

Second, for \(\textbf{G}^{tot}_u := g^{tot}_u (\textbf{X}_{u}, \textbf{Z})\) and \(\textbf{G}^{fo}_u := g^{fo}_u (\textbf{X}_{u})\), we have (see Equation (4))

$$ \mathbb {E}_{\textbf{Z}}\left[ g^{tot}_u (\textbf{X}_{u}, \textbf{Z}) \right] = g^{fo}_u (\textbf{X}_{u}) \, . $$

For Point (i), knowing that for the centered kernel \(\bar{k}\),

$$ \mathcal {D}_k(F_{T_u}) = \mathbb {E}\left[ \bar{k}\left( \textbf{G}^{tot}_u, \, \textbf{G}^{tot '}_u \right) \right] = \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] - k\left( \cdot , \textbf{0}\right) \right| \right| _\mathcal {H}^2 \, , $$

we can write (bearing in mind that \(k(\textbf{0}, \textbf{y}') =k(\textbf{y}, \textbf{0}) =c\))

$$\begin{aligned}{} & {} \mathcal {D}_k(F_{T_u})- \mathcal {D}_k(F_{u}) = \mathbb {E}\left[ \bar{k}\left( \textbf{G}^{tot}_u, \, \textbf{G}^{tot '}_u \right) \right] - \mathbb {E}\left[ \bar{k}\left( \textbf{G}^{fo}_u , \, \textbf{G}^{fo '}_u \right) \right] \nonumber \\\ge & {} 2\left<\mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u \right) \right] - k\left( \cdot , \textbf{0}\right) , \; \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] - \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u\right) \right] \right>_{\mathcal {H}} \nonumber \\= & {} 2 \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{fo '}_u\right) \right] - 2 \mathbb {E}\left[ k\left( \textbf{G}^{fo}_u, \, \textbf{G}^{fo '}_u \right) \right] -2 \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{0}\right) \right] + 2 \mathbb {E}\left[ k\left( \textbf{G}^{fo}_u, \, \textbf{0} \right) \right] \nonumber \\= & {} 2 \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{fo '}_u\right) \right] - 2 \mathbb {E}\left[ k\left( \textbf{G}^{fo}_u, \, \textbf{G}^{fo '}_u \right) \right] \nonumber \, . \end{aligned}$$

Point (i) holds using the Jensen inequality and Equation (4).

For Point (ii), since \(\mathbb {E}[\textbf{G}^{tot}_u] =\mathbb {E}[\textbf{G}^{fo}_u]=\textbf{0}\) and k is convex, Jensen's inequality allows us to write \(\mathcal {D}_k(F_{T_u})\) without the absolute value, that is,

$$ \mathcal {D}_k(F_{T_u}) = \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{tot '}_u \right) \right] -k(\textbf{0}, \textbf{0}) = \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] \right| \right| _\mathcal {H}^2 -k(\textbf{0}, \textbf{0}) \, . $$

Using the convexity of \(\left| \left| \cdot \right| \right| _\mathcal {H}^2\), we can write

$$\begin{aligned}{} & {} \mathcal {D}_k(F_{T_u})- \mathcal {D}_k(F_{u}) = \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] \right| \right| _\mathcal {H}^2 - \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u \right) \right] \right| \right| _\mathcal {H}^2 \nonumber \\\ge & {} 2\left<\mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u\right) \right] , \; \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] - \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u \right) \right] \right>_{\mathcal {H}} \nonumber \\= & {} 2 \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{fo '}_u \right) \right] - 2 \mathbb {E}\left[ k\left( \textbf{G}^{fo}_u, \, \textbf{G}^{fo '}_u\right) \right] \nonumber \, . \end{aligned}$$

Thus, Point (ii) holds using the Jensen inequality and Equation (4).

For Point (iii), as \(k(\textbf{0}, \textbf{0}) >0\) and k is concave, we have

$$ \mathcal {D}_k(F_{T_u}) = - \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{tot '}_u \right) \right] + k(\textbf{0}, \textbf{0}) = -\left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] \right| \right| _\mathcal {H}^2 + k(\textbf{0}, \textbf{0}) \, , $$

and we can write

$$\begin{aligned}{} & {} \mathcal {D}_k(F_{T_u})- \mathcal {D}_k(F_{u}) = \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u \right) \right] \right| \right| _\mathcal {H}^2 - \left| \left| \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] \right| \right| _\mathcal {H}^2 \nonumber \\\ge & {} 2\left<\mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u\right) \right] , \; \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{fo}_u \right) \right] - \mathbb {E}\left[ k\left( \cdot , \textbf{G}^{tot}_u \right) \right] \right>_{\mathcal {H}} \nonumber \\= & {} 2 \mathbb {E}\left[ k\left( \textbf{G}^{fo}_u, \, \textbf{G}^{tot '}_u \right) \right] - 2 \mathbb {E}\left[ k\left( \textbf{G}^{tot}_u, \, \textbf{G}^{tot '}_u\right) \right] \nonumber \, . \end{aligned}$$

Using (4), Point (iii) holds by applying the Jensen inequality to \(-k\), which is convex.
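
The Jensen steps above repeatedly use Equation (4). The following Monte Carlo sketch (hypothetical toy model, our choice) checks the identity \(\mathbb {E}_{\textbf{Z}}\left[ g^{tot}_u (\textbf{X}_{u}, \textbf{Z}) \right] = g^{fo}_u (\textbf{X}_{u})\) numerically.

```python
# Monte Carlo check of Equation (4) on a toy model (not from the paper).
import numpy as np

rng = np.random.default_rng(2)
g = lambda x1, z: x1 + x1 * z                 # centered toy output; X_u = X_1, Z ~ N(0,1)

m, m1 = 2000, 10_000
x1 = rng.normal(size=m)
z_in = rng.normal(size=m1)                    # inner samples for the expectations
x1_in = rng.normal(size=m1)

g_fo = np.array([g(x, z_in).mean() for x in x1])                          # E_Z[g(x, Z)]
g_tot_avg = np.array([(g(x, z_in) - g(x1_in, z_in)).mean() for x in x1])  # E_Z[g - E_{X_u}[g]]
print(np.abs(g_fo - g_tot_avg).max())         # ~0 up to Monte Carlo error (shrinks with m1)
```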

Appendix E Proof of Theorem 1

Without loss of generality, we suppose that the output \(\textbf{Y} := g(\textbf{X}_{w}, \textbf{Z}_{\sim w})\) is centered, that is, \(\mathbb {E}\left[ \textbf{Y} \right] =\textbf{0}\). Recall that AFs are also centered. Using \(w \subseteq u\), we can write \(u=w \cup w_0\) with \(w_0 \subseteq u\) and \(w \cap w_0 =\emptyset \); thus, \(\textbf{Y} := g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u})\). First, since \( g^{fo}_w (\textbf{X}_{w}) =\mathbb {E}_{\textbf{Z}_{w_0} \textbf{Z}_{\sim u}}\left[ g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u})\right] \) and it is known (see Lamboni, 2021a, Lemma 3) that \( g^{fo}_u (\textbf{X}_{u}) {\mathop {=}\limits ^{d}} g^{fo}_u (\textbf{X}_{w}, \textbf{Z}_{w_0}) = \mathbb {E}_{\textbf{Z}_{\sim u}}\left[ g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u})\right] \), we can see that

$$ g^{fo}_w (\textbf{X}_{w}) {\mathop {=}\limits ^{d}} \mathbb {E}_{\textbf{Z}_{w_0}}\left[ g^{fo}_u (\textbf{X}_{w}, \textbf{Z}_{w_0}) \right] \, . $$

Second, for the convex kernel k, the Jensen inequality allows for writing \(\mathcal {D}_k(F_{w})\) as

$$ \mathcal {D}_k(F_{w}) = \mathbb {E}\left[ k\left( g^{fo}_w (\textbf{X}_{w}), \, g^{fo}_w (\textbf{X}_{w}') \right) \right] -k(\textbf{0}, \textbf{0}) \, . $$

Thus, the first result holds by applying the Jensen inequality, that is,

$$ \mathbb {E}\left[ k\left( g^{fo}_w (\textbf{X}_{w}), \, g^{fo}_w (\textbf{X}_{w}') \right) \right] \le \mathbb {E}\left[ k\left( g^{fo}_u (\textbf{X}_{u}), \, g^{fo}_u (\textbf{X}_{u}') \right) \right] \, . $$

For the second result, it follows from the above equality in distribution that

$$ g^{tot}_w (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) = g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) - \mathbb {E}_{\textbf{X}_{w}} \left[ g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) \right] \, , $$
$$ g^{tot}_u (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) = g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) - \mathbb {E}_{\textbf{X}_{w}, \textbf{Z}_{w_0}} \left[ g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) \right] \, , $$

and we want to show that

$$ \mathbb {E}\left[ k\left( g^{tot}_w (\textbf{X}_{w}, \textbf{Z}_{\sim w}), \, g^{tot}_w (\textbf{X}_{w}', \textbf{Z}'_{\sim w}) \right) \right] \le \mathbb {E}\left[ k\left( g^{tot}_u (\textbf{X}_{u}, \textbf{Z}_{\sim u}), \, g^{tot}_u (\textbf{X}_{u}', \textbf{Z}'_{\sim u}) \right) \right] \, . $$

To that end, let \(\textbf{V}' :=(\textbf{X}_{w}', \, \textbf{Z}_{w_0}', \, \textbf{Z}_{\sim u}')\) be an i.i.d. copy of \(\textbf{V} := (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u})\); and consider the function \( h(\textbf{V}, \textbf{V}') := g(\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) - g(\textbf{X}_{w}', \textbf{Z}_{w_0}', \textbf{Z}_{\sim u}') \). Since the three components of \(\textbf{V}\) (resp. \(\textbf{V}'\)) are independent, we can write

$$ \mathbb {E}\left[ h(\textbf{V}, \textbf{V}') \, | \textbf{X}_{w},\, \delta _{\textbf{0}}(\textbf{Z}_{w_0}' -\textbf{Z}_{w_0}),\, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] = g^{tot}_w (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) \, , $$
$$ \mathbb {E}\left[ h(\textbf{V}, \textbf{V}') \, | \textbf{X}_{w}, \textbf{Z}_{w_0}, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] = g^{tot}_u (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) \, . $$

Moreover, the properties of conditional expectation allow for writing

$$\begin{aligned}{} & {} \mathbb {E}\left[ g^{tot}_u (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u})\, |\textbf{X}_{w},\, \delta _{\textbf{0}}(\textbf{Z}_{w_0}' -\textbf{Z}_{w_0}),\, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] \nonumber \\= & {} \mathbb {E}\left[ \mathbb {E}\left[ h(\textbf{V}, \textbf{V}') \, |\textbf{X}_{w}, \textbf{Z}_{w_0}, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] \, | \textbf{X}_{w},\, \delta _{\textbf{0}}(\textbf{Z}_{w_0}' -\textbf{Z}_{w_0}),\, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] \nonumber \\= & {} \mathbb {E}\left[ h(\textbf{V}, \textbf{V}') \, | \textbf{X}_{w},\, \delta _{\textbf{0}}(\textbf{Z}_{w_0}' -\textbf{Z}_{w_0}),\, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) \right] = g^{tot}_w (\textbf{X}_{w}, \textbf{Z}_{w_0}, \textbf{Z}_{\sim u}) \, , \nonumber \end{aligned}$$

because the projection space and the filtration associated with \((\textbf{X}_{w}, \textbf{Z}_{w_0}, \, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}) )\) contain those associated with \((\textbf{X}_{w},\, \delta _{\textbf{0}}(\textbf{Z}_{w_0}' -\textbf{Z}_{w_0}),\, \delta _{\textbf{0}}(\textbf{Z}_{\sim u}'-\textbf{Z}_{\sim u}))\) (the tower property). The second result then holds by applying the conditional Jensen inequality, as k is convex.

Finally, the results for a concave kernel k can be deduced from the above results. Indeed, we can see that \(-k\) is convex and \(\mathcal {D}_k(F_{w})\) becomes

$$ \mathcal {D}_k(F_{w}) = \mathbb {E}\left[ -k\left( g^{fo}_w (\textbf{X}_{w}), \, g^{fo}_w (\textbf{X}_{w}') \right) \right] + k(\textbf{0}, \textbf{0}) \, . $$
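
A numerical illustration of Theorem 1 and of Corollary 1, Point (i) follows (our toy model and kernel choice, not from the paper): with the Gaussian-type kernel \(k = e^{-\alpha ||\textbf{y} - \textbf{y}'||^2}\) and a small \(\alpha \), so that k is in its concave regime (see Lemma 5) and \(\mathcal {D}_k(F) = k(\textbf{0}, \textbf{0}) - \mathbb {E}[k(\textbf{G}, \textbf{G}')]\), the measure is monotone over nested subsets and the normalized index lies in [0, 1].

```python
# Hedged numerical check of Theorem 1 on a toy model with closed-form total AFs.
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.05
k = lambda y, yp: np.exp(-alpha * (y - yp) ** 2)

def g(x1, x2, x3):
    return x1 + x2 + x1 * x2 * x3            # centered toy model, X_i ~ N(0,1) i.i.d.

m = 200_000
X = rng.normal(size=(m, 3))
Xp = rng.normal(size=(m, 3))                 # independent copy

# Exact total AFs: w = {1}: g - E_{X_1}[g] = X_1 + X_1 X_2 X_3; u = {1,2,3}: g - E[g] = g
Gw = X[:, 0] + X[:, 0] * X[:, 1] * X[:, 2]
Gwp = Xp[:, 0] + Xp[:, 0] * Xp[:, 1] * Xp[:, 2]
D_w = 1.0 - k(Gw, Gwp).mean()                # D_k(F_{T_w}) = k(0,0) - E[k(G, G')]
D_u = 1.0 - k(g(*X.T), g(*Xp.T)).mean()      # D_k(F_{T_u}) with u = {1,...,d}, i.e. F_bullet
print(D_w, D_u, bool(0.0 <= D_w / D_u <= 1.0))   # monotone, and the index lies in [0, 1]
```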

Appendix F Proof of Corollary 1

It is sufficient to show the results for \(q=1\).

For Point (i), according to Theorem 1, we can write

$$ 0\le \mathcal {D}_k(F_{w}) \le \mathcal {D}_k(F_{T_w}) \le \mathcal {D}_k(F_{T_u}), \quad \forall \, w \subseteq u \subseteq \{1, \ldots , d\} \, . $$

Thus, we have \(0\le S_k (F) \le 1\) because \(F_\bullet =F_{T_{\{1, \ldots , d\}}}\).

Point (ii) is obvious because \(k\in \mathcal {K}_E\), the set of kernels that guarantee the independence criterion.

The if part of Point (iii) is obvious. For the only if part, the equality \(\mathcal {D}_k(F_{T_u}) = \mathcal {D}_k(F_{\bullet })\) implies that

$$ \int k(\textbf{y}, \textbf{y}') \, d(F_{T_u} \otimes F_{T_u} -F_{\bullet } \otimes F_{\bullet })(\textbf{y}, \textbf{y}') = 0\, , $$

which also implies that \(F_{T_u} = F_{\bullet }\) for the second kind of kernels of \(\mathcal {K}_E\).

Point (iv) holds for IMKs by definition.

Appendix G Proof of Theorem 2

Firstly, by the law of large numbers, we have \(\widehat{\mu }(\textbf{Z}_i)- \mathbb {E}_{\textbf{X}_u}\left[ g(\textbf{X}_u , \textbf{Z}_i) \right] \rightarrow 0\) in probability as \(m_1 \rightarrow \infty \).

Knowing that \( \textbf{G}_{i,u}^{tot} = g(\textbf{X}_{i,u} , \textbf{Z}_i) - \mathbb {E}_{\textbf{X}_u} \left[ g(\textbf{X}_{u} , \textbf{Z}_i) \right] \) and \( \textbf{G}_{i,u}^{tot\, '} = g(\textbf{X}_{i,u}' , \textbf{Z}_i') - \mathbb {E}_{\textbf{X}_u'} \left[ g(\textbf{X}_{u}' , \textbf{Z}_i') \right] \), the Taylor expansion of k about \(\left( \textbf{G}_{i,u}^{tot} ,\, \textbf{G}_{i,u}^{tot\, '}\right) \) yields

$$\begin{aligned}{} & {} k\left( g(\textbf{X}_{i,u} , \textbf{Z}_i) - \widehat{\mu }(\textbf{Z}_i),\, g(\textbf{X}_{i,u}' , \textbf{Z}_i') - \widehat{\mu }(\textbf{Z}_i') \right) = k\left( \textbf{G}_{i,u}^{tot} ,\, \textbf{G}_{i,u}^{tot\, '}\right) \nonumber \\{} & {} + \nabla ^Tk \left( \textbf{G}_{i,u}^{tot} ,\, \textbf{G}_{i,u}^{tot\, '}\right) \left[ \begin{array}{c} \mathbb {E}_{\textbf{X}_u}\left[ g(\textbf{X}_u , \textbf{Z}_i) \right] - \widehat{\mu }(\textbf{Z}_i) \\ \mathbb {E}_{\textbf{X}_u'}\left[ g(\textbf{X}_u' , \textbf{Z}_i') \right] - \widehat{\mu }(\textbf{Z}_i') \end{array}\right] + R_{m_1} \nonumber \, , \end{aligned}$$

where \(R_{m_1} \xrightarrow {P} 0\) as \(m_1 \rightarrow \infty \). Therefore, we can write

$$\begin{aligned}{} & {} \widehat{\mu _k^{tot}} = \frac{1}{m} \sum _{i=1}^m k\left( \textbf{G}_{i,u}^{tot} ,\, \textbf{G}_{i,u}^{tot\, '}\right) \nonumber \\{} & {} + \frac{1}{m} \sum _{i=1}^m \nabla ^Tk \left( \textbf{G}_{i,u}^{tot} ,\, \textbf{G}_{i,u}^{tot\, '}\right) \left[ \begin{array}{c} \mathbb {E}_{\textbf{X}_u}\left[ g(\textbf{X}_u , \textbf{Z}_i) \right] - \widehat{\mu }(\textbf{Z}_i) \\ \mathbb {E}_{\textbf{X}_u'}\left[ g(\textbf{X}_u' , \textbf{Z}_i') \right] - \widehat{\mu }(\textbf{Z}_i') \end{array}\right] + R_{m,m_1} \nonumber \, , \end{aligned}$$

where \(R_{m,m_1} \xrightarrow {P} 0\) as \(m_1 \rightarrow \infty \). Since the second term of the above equation converges in probability to 0, the LLN ensures that \(\widehat{\mu _k^{tot}}\) is a consistent estimator of \(\mu _k^{tot}\). Thus, the first result of Point (i) holds.

Secondly, we obtain the second result of Point (i) by applying the central limit theorem (CLT) to the first term of the above equation, as the second term converges in probability to 0.

The proof of Point (ii) is similar to the proof of Point (i). Indeed, using the Taylor expansion of \(k^2\), we obtain the consistency of the second-order moment of k. The Slutsky theorem ensures the consistency of the cross components and \(\left( \widehat{\mu _k^{tot}}\right) ^2\).

Point (iii) is then obvious using Point (ii).

The proof of Point (iv) is similar to that of Point (i).
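
The following sketch (our notation and toy model, not the paper's code) mirrors the two-stage estimator analysed in Theorem 2: an inner sample of size \(m_1\) estimates \(\mathbb {E}_{\textbf{X}_u}\left[ g(\textbf{X}_u , \textbf{Z}_i) \right] \), and an outer sample of size m averages the kernel evaluated at the estimated total AFs.

```python
# Hedged sketch of the plug-in estimator of mu_k^tot (two-stage Monte Carlo).
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.05
k = lambda y, yp: np.exp(-alpha * (y - yp) ** 2)
g = lambda xu, z: xu + xu * z                # toy model; X_u and Z scalar, N(0,1)

def mu_k_tot_hat(m, m1):
    xu, z = rng.normal(size=m), rng.normal(size=m)
    xup, zp = rng.normal(size=m), rng.normal(size=m)      # independent copies
    xin = rng.normal(size=m1)                # inner sample (reused across i for brevity)
    mu_hat = lambda zz: np.array([g(xin, v).mean() for v in zz])  # estimates E_{X_u}[g(X_u, z)]
    G = g(xu, z) - mu_hat(z)                 # estimated total AFs G_i^tot
    Gp = g(xup, zp) - mu_hat(zp)
    return k(G, Gp).mean()

for m, m1 in [(200, 20), (2000, 200), (20_000, 2000)]:
    print(m, m1, mu_k_tot_hat(m, m1))        # stabilises as m and m_1 grow (consistency)
```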

Appendix H Proof of Corollary 2

First, the results about the consistency of the estimators are obtained by using Theorem 2 and the Slutsky theorem.

The numerators of Equations (15)-(16) are asymptotically Gaussian according to Theorem 2. To obtain the asymptotic distributions of the sensitivity indices, we first apply the Slutsky theorem, and second, we use the fact that \(\sqrt{m}\left( \widehat{S_{T_u}^k} - \frac{\mathbb {E}\left[ k\left( g^{tot}_u,\, g^{tot\, '}_u \right) \right] -k(\textbf{0}, \textbf{0})}{\frac{1}{M}\sum _{i=1}^M k\left( g(\textbf{X}_{i,u} , \textbf{Z}_i) - \widehat{\mu },\, g(\textbf{X}_{i,u}' , \textbf{Z}_i') - \widehat{\mu } \right) -k(\textbf{0}, \textbf{0})} \right) \) and \(\sqrt{m}\left( \widehat{S_{T_u}^k} - S_{T_u}^k \right) \) are asymptotically equivalent in probability under the technical condition \(m / M \rightarrow 0\) (see Lamboni, 2018 for more details).
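
In practice, the asymptotic normality translates into a plug-in confidence interval. The sketch below (our helper, with hypothetical inputs) treats the denominator as fixed, which is justified by the condition \(m / M \rightarrow 0\) and the Slutsky theorem; it is a sketch of one reasonable construction, not the paper's prescribed procedure.

```python
# Hedged sketch: normal-approximation CI for the estimated sensitivity index.
import numpy as np
from scipy import stats

def index_with_ci(K_num, K_den_mean, level=0.95):
    """K_num: m kernel evaluations entering the numerator (already shifted by
    k(0,0)); K_den_mean: denominator estimated from a much larger sample of
    size M (m/M -> 0), hence treated as fixed by Slutsky's theorem."""
    m = len(K_num)
    S = K_num.mean() / K_den_mean
    se = K_num.std(ddof=1) / (np.sqrt(m) * K_den_mean)   # plug-in asymptotic s.e.
    z = stats.norm.ppf(0.5 + level / 2)
    return S, (S - z * se, S + z * se)

# Example with synthetic kernel evaluations (illustrative numbers only)
rng = np.random.default_rng(6)
K_num = 0.2 + 0.05 * rng.normal(size=5000)
print(index_with_ci(K_num, K_den_mean=0.35))
```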

Appendix I Proof of Lemma 5

For Point (i), the convexity of \(\psi \) implies the existence of a subgradient \(\partial \psi \) such that

$$ -\alpha \psi (\textbf{y}, \textbf{y}') +\alpha \psi (\textbf{b}, \textbf{y}') \le \langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \, , $$

which also implies (thanks to the Taylor expansion) that

$$\begin{aligned} e^{-\alpha \psi (\textbf{y}, \textbf{y}') + \alpha \psi (\textbf{b}, \textbf{y}')} \le e^{\langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle } \approx 1 + \langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \, \end{aligned}$$
(20)

under the condition (by the Cauchy-Schwarz inequality)

$$ \left| \langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \right| \le \alpha \left| \left| \partial \psi (\textbf{b}, \textbf{y}') \right| \right| _{2} \, \left| \left| \textbf{y}-\textbf{b} \right| \right| _{2} \le \epsilon \qquad \forall \, \textbf{y}', \textbf{b} \in \mathbb {R}^n\, . $$

Equivalently, we can write \( \alpha \le \frac{\epsilon }{\left| \left| \textbf{y}-\textbf{b} \right| \right| _{2} \left| \left| \partial \psi (\textbf{b}, \textbf{z}) \right| \right| _{2}}; \quad \forall \, \, \textbf{y}, \textbf{z}, \textbf{b} \in \mathcal {X} \). Equation (20) implies that k is concave under the above condition. Indeed, we have

$$\begin{aligned} e^{-\alpha \psi (\textbf{y}, \textbf{y}') + \alpha \psi (\textbf{b}, \textbf{y}')} -1\le & {} \langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \nonumber \\ e^{-\alpha \psi (\textbf{y}, \textbf{y}')} - e^{-\alpha \psi (\textbf{b}, \textbf{y}')}\le & {} e^{-\alpha \psi (\textbf{b}, \textbf{y}')} \langle -\alpha \partial \psi (\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \nonumber \\ - k (\textbf{y}, \textbf{y}') + k(\textbf{b}, \textbf{y}')\ge & {} \langle \alpha \partial \psi (\textbf{b}, \textbf{y}') k(\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle = \langle \partial k(\textbf{b}, \textbf{y}'), \textbf{y}-\textbf{b} \rangle \nonumber \, , \end{aligned}$$

with \(\partial k(\textbf{b}, \textbf{y}') := \alpha \partial \psi (\textbf{b}, \textbf{y}') k(\textbf{b}, \textbf{y}')\) the subgradient of \(-k\). Thus, \(-k\) is convex because k is continuous (Boyd and Vandenberghe, 2004).

For Point (ii), the gradient and the Hessian of \(k (\textbf{y}, \textbf{y}')=e^{- \alpha \psi (\textbf{y}, \textbf{y}')}\) w.r.t. \(\textbf{y}\) are

$$ \nabla k (\textbf{y}, \textbf{y}') = -\alpha \nabla \psi (\textbf{y}, \textbf{y}')\, k(\textbf{y}, \textbf{y}') \, , $$
$$ H_k (\textbf{y}, \textbf{y}') = \left[ -\alpha H_\psi (\textbf{y}, \textbf{y}') + \alpha ^2 \nabla \psi (\textbf{y}, \textbf{y}') \nabla ^T\psi (\textbf{y}, \textbf{y}') \right] k(\textbf{y}, \textbf{y}') \, . $$

Therefore, if we use \(E := -H_\psi (\textbf{y}, \textbf{y}') + \alpha \nabla \psi (\textbf{y}, \textbf{y}') \nabla ^T\psi (\textbf{y}, \textbf{y}')\), then k is concave when E is negative definite. Thus, for all \(\textbf{b} \in \mathcal {X}\), we can write

$$\begin{aligned} - \textbf{b}^TH_\psi (\textbf{y}, \textbf{y}') \textbf{b} + \alpha \textbf{b}^T\nabla \psi (\textbf{y}, \textbf{y}') \nabla ^T\psi (\textbf{y}, \textbf{y}') \textbf{b}&\le 0&\nonumber \\ \alpha \left( \textbf{b}^T\nabla \psi (\textbf{y}, \textbf{y}')\right) ^2\le & {} \textbf{b}^TH_\psi (\textbf{y}, \textbf{y}') \textbf{b} \nonumber \\ \alpha\le & {} \frac{\textbf{b}^TH_\psi (\textbf{y}, \textbf{y}') \textbf{b} }{\left( \textbf{b}^T\nabla \psi (\textbf{y}, \textbf{y}')\right) ^2} \nonumber \, . \end{aligned}$$
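
As a concrete instance of the Point (ii) bound (our example), take \(\psi (\textbf{y}, \textbf{y}') = ||\textbf{y} - \textbf{y}'||_2^2\), so that \(\nabla \psi = 2(\textbf{y} - \textbf{y}')\) and \(H_\psi = 2I\); the admissible \(\alpha \) at \((\textbf{y}, \textbf{y}')\) along a direction \(\textbf{b}\) can then be computed directly.

```python
# Numerical evaluation of the Point (ii) bound for psi(y, y') = ||y - y'||^2.
import numpy as np

def alpha_bound(y, yp, b):
    grad_psi = 2.0 * (y - yp)                # gradient of psi w.r.t. y
    H_psi = 2.0 * np.eye(len(y))             # Hessian of psi w.r.t. y
    return (b @ H_psi @ b) / (b @ grad_psi) ** 2

y = np.array([1.0, 0.5]); yp = np.array([0.0, 0.0]); b = np.array([1.0, -1.0])
print(alpha_bound(y, yp, b))   # k = exp(-alpha * psi) is concave along b for alpha below this
```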

Appendix J Proof of Corollary 4

Namely, we use \(u_1(\textbf{y}, \textbf{y}', \textbf{y}'') := \frac{\epsilon }{\left| \left| \textbf{y}-\textbf{y}' \right| \right| _{2} \left| \left| \partial \psi (\textbf{y}', \textbf{y}'') \right| \right| _{2}}\) and \(u_2(\textbf{y}, \textbf{y}', \textbf{y}'') := \frac{\textbf{y}^{'' \,T} H_\psi (\textbf{y}, \textbf{y}') \textbf{y}''}{\left( \nabla \psi (\textbf{y}, \textbf{y}')^T\, \textbf{y}''\right) ^2}\) for the upper bounds of \(\alpha \) (see the proof of Lemma 5). For simplicity, in what follows we write \(u(\textbf{y}, \textbf{y}', \textbf{y}'')\), with \(\textbf{y}, \textbf{y}', \textbf{y}'' \in \mathcal {X}\), for either \(u_1(\textbf{y}, \textbf{y}', \textbf{y}'')\) or \(u_2(\textbf{y}, \textbf{y}', \textbf{y}'')\).

As \(u(\textbf{Y}, \textbf{Y}', \textbf{Y}'')\) is a random variable, Markov's inequality gives

$$ \mathbb {P}\left( u(\textbf{Y}, \textbf{Y}', \textbf{Y}'') < \alpha \right) = \mathbb {P}\left( \frac{1}{u(\textbf{Y}, \textbf{Y}', \textbf{Y}'')} > \frac{1}{\alpha } \right) \le \alpha \mathbb {E}\left[ \frac{1}{u(\textbf{Y}, \textbf{Y}', \textbf{Y}'')} \right] \le \tau \, , $$

which implies that \(\alpha \le \frac{\tau }{\mathbb {E}\left[ \frac{1}{u(\textbf{Y}, \textbf{Y}', \textbf{Y}'')} \right] }\).

Moreover, using Markov’s inequality we can write

$$ 1- \tau \le \mathbb {P}\left( u(\textbf{Y}, \textbf{Y}', \textbf{Y}'') \ge \alpha \right) \le \frac{1}{\alpha } \mathbb {E}\left[ u(\textbf{Y}, \textbf{Y}', \textbf{Y}'') \right] \, , $$

which implies that \(\alpha \le \frac{\mathbb {E}\left[ u(\textbf{Y}, \textbf{Y}', \textbf{Y}'') \right] }{1- \tau }\).
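
The two bounds can be turned into a simple Monte Carlo calibration rule for \(\alpha \). The sketch below (our function names, with a stand-in distribution for \(u(\textbf{Y}, \textbf{Y}', \textbf{Y}'')\)) takes the smaller of the two bounds as a conservative choice.

```python
# Hedged sketch of the Markov-type calibration of alpha in Corollary 4.
import numpy as np

rng = np.random.default_rng(5)

def calibrate_alpha(u_samples, tau=0.05):
    """Two Markov-type upper bounds for alpha given samples of u(Y, Y', Y'')."""
    a1 = tau / np.mean(1.0 / u_samples)      # from P(u < alpha) <= alpha * E[1/u] <= tau
    a2 = np.mean(u_samples) / (1.0 - tau)    # from 1 - tau <= E[u] / alpha
    return min(a1, a2)                       # conservative choice: the smaller bound

u = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # stand-in for u(Y, Y', Y'')
print(calibrate_alpha(u, tau=0.05))
```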

Cite this article

Lamboni, M. Kernel-based Measures of Association Between Inputs and Outputs Using ANOVA. Sankhya A (2024). https://doi.org/10.1007/s13171-024-00354-w