
Uncertainty quantification and reliability analysis by an adaptive sparse Bayesian inference based PCE model

  • Original Article
  • Published:
Engineering with Computers

Abstract

An adaptive Bayesian polynomial chaos expansion (BPCE) is developed in this paper for uncertainty quantification (UQ) and reliability analysis. Sparsity in the PCE model is induced by automatic relevance determination (ARD), and the PCE coefficients are computed by variational Bayesian (VB) inference. Further, a Sobol sequence is used to evaluate the response quantity sequentially. Finally, the leave-one-out (LOO) error is used to obtain the adaptive BPCE model. UQ and reliability analysis are performed on several numerical examples using the adaptive BPCE model. It is found that the adaptive BPCE model selects the optimal number of model evaluations and the optimal PCE degree simultaneously for a given problem. The proposed approach predicts highly accurate results using very few model evaluations, and the ARD approach yields highly sparse PCE models for most of the numerical examples. Additionally, the distribution parameters of the predicted response quantity are obtained by the VB inference and are used to compute confidence intervals on the predictions.
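The adaptive procedure summarized above lends itself to a compact illustration: enrich the design with Sobol-sequence points, refit candidate PCE degrees, and retain the degree with the smallest LOO error. The sketch below is a hypothetical one-dimensional toy rather than the authors' implementation; ordinary least squares stands in for the VB/ARD estimator, the test function `f` is invented, and the LOO error uses the standard hat-matrix shortcut for linear-in-parameters surrogates.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical cheap stand-in for an expensive simulator.
f = lambda x: np.sin(3 * x) + 0.5 * x**2

def loo_error(Psi, y):
    """Relative leave-one-out error of a least-squares fit (hat-matrix shortcut)."""
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    h = np.einsum('ij,ij->i', Psi @ np.linalg.pinv(Psi.T @ Psi), Psi)  # leverages
    r = (y - Psi @ coef) / (1.0 - h)                                   # LOO residuals
    return np.sum(r**2) / np.sum((y - y.mean())**2), coef

sobol = qmc.Sobol(d=1, scramble=False)
x = np.empty(0)
y = np.empty(0)
for n_new in [8, 8, 16, 32]:                         # sequential enrichment
    x_new = 2.0 * sobol.random(n_new).ravel() - 1.0  # next Sobol points on [-1, 1]
    x = np.concatenate([x, x_new])
    y = np.concatenate([y, f(x_new)])
    # Refit every candidate degree and keep the one with the smallest LOO error.
    eps, coef, p_opt = min(
        (loo_error(np.polynomial.legendre.legvander(x, p), y) + (p,)
         for p in range(1, min(10, len(x) - 2))),
        key=lambda t: t[0])
    print(f"N={len(x):3d}  degree={p_opt}  LOO error={eps:.2e}")
    if eps < 1e-6:  # accuracy target reached: stop adding model evaluations
        break
```

In this way, the number of model evaluations and the PCE degree are selected together: the loop only requests new simulator runs while the cross-validation error of the best-degree fit remains above the target.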




References

  1. Abdollahi A, Azhdary Moghaddam M, Hashemi Monfared SA, Rashki M, Li Y (2020) Subset simulation method including fitness-based seed selection for reliability analysis. Eng Comput 1–17. https://doi.org/10.1007/s00366-020-00961-9

  2. Abraham S, Tsirikoglou P, Miranda J, Lacor C, Contino F, Ghorbaniasl G (2018) Spectral representation of stochastic field data using sparse polynomial chaos expansions. J Comput Phys 367:109–120

  3. Au SK, Beck JL (2001) Estimation of small failure probabilities in high dimensions by subset simulation. Probab Eng Mech 16(4):263–277

  4. Bhattacharyya B (2018) A critical appraisal of design of experiments for uncertainty quantification. Arch Comput Methods Eng 25(3):727–751

  5. Bhattacharyya B (2020) Global sensitivity analysis: a Bayesian learning based polynomial chaos approach. J Comput Phys 415(109539):1–22

  6. Bhattacharyya B, Jacquelin E, Brizard D (2019) Uncertainty quantification of nonlinear stochastic dynamic problem using a Kriging-NARX surrogate model. In: 3rd International conference on uncertainty quantification in computational sciences and engineering, Crete, Greece, pp 34–46

  7. Bhattacharyya B, Jacquelin E, Brizard D (2020) A Kriging-NARX model for uncertainty quantification of nonlinear stochastic dynamical systems in time domain. J Eng Mech 146(7):1–21

  8. Bhattacharyya B, Jacquelin E, Brizard D (2020) Uncertainty quantification of stochastic impact dynamic oscillator using a proper orthogonal decomposition-polynomial chaos expansion technique. J Vib Acoust 142(6):1–13

  9. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York

  10. Blatman G, Sudret B (2008) Sparse polynomial chaos expansions and adaptive stochastic finite elements using a regression approach. Comptes Rend Méc 336(6):518–523

  11. Blatman G, Sudret B (2011) Adaptive sparse polynomial chaos expansion based on least angle regression. J Comput Phys 230(6):2345–2367

  12. Bourinet JM, Deheeger F, Lemaire M (2011) Assessing small failure probabilities by combined subset simulation and support vector machines. Struct Saf 33(6):343–353

  13. Breitung K, Faravelli L (1994) Log-likelihood maximization and response surface in reliability assessment. Nonlinear Dyn 5(3):273–285

  14. Chapelle O, Vapnik V, Bengio Y (2002) Model selection for small sample regression. Mach Learn 48(1–3):9–23

  15. Cheng K, Lu Z (2018) Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression. Comput Struct 194:86–96

  16. Cheng K, Lu Z (2018) Sparse polynomial chaos expansion based on D-MORPH regression. Appl Math Comput 323:17–30

  17. Cheng K, Lu Z, Zhen Y (2019) Multi-level multi-fidelity sparse polynomial chaos expansion based on Gaussian process regression. Comput Methods Appl Mech Eng 349:360–377

  18. Doucet A, Freitas ND, Gordon N (2001) Sequential Monte Carlo methods in practice. Springer, New York

  19. Echard B, Gayton N, Lemaire M (2011) AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf 33(2):145–154

  20. Efron B, Hastie T, Johnstone I, Tibshirani R (2004) Least angle regression. Ann Stat 32(2):407–499

  21. Fiessler B, Rackwitz R, Neumann HJ (1979) Quadratic limit states in structural reliability. J Eng Mech Div 105(4):661–676

  22. Gaspar B, Teixeira A, Soares CG (2014) Assessment of the efficiency of Kriging surrogate models for structural reliability analysis. Probab Eng Mech 37:24–34

  23. Gavin HP, Yau SC (2008) High-order limit state functions in the response surface method for structural reliability analysis. Struct Saf 30(2):162–179

  24. Gayton N, Bourinet J, Lemaire M (2003) CQ2RS: a new statistical approach to the response surface method for reliability analysis. Struct Saf 25(1):99–121

  25. Griffin JE, Brown PJ (2010) Inference with normal-gamma prior distributions in regression problems. Bayesian Anal 5(1):171–188

  26. Guimarães H, Matos JC, Henriques AA (2018) An innovative adaptive sparse response surface method for structural reliability analysis. Struct Saf 73:12–28

  27. Guo L, Narayan A, Zhou T (2018) A gradient enhanced l1-minimization for sparse approximation of polynomial chaos expansions. J Comput Phys 367:49–64

  28. Hohenbichler M, Rackwitz R (1988) Improvement of second-order reliability estimates by importance sampling. J Eng Mech 114(12):2195–2199

  29. Hosni Elhewy A, Mesbahi E, Pu Y (2006) Reliability analysis of structures using neural network method. Probab Eng Mech 21(1):44–53

  30. Hu C, Youn BD (2010) Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems. Struct Multidiscip Optim 43(3):1–24

  31. Huan X, Safta C, Sargsyan K, Vane ZP, Lacaze G, Oefelein JC, Najm HN (2018) Compressive sensing with cross-validation and stop-sampling for sparse polynomial chaos expansions. SIAM/ASA J Uncertain Quantif 6(2):907–936

  32. Jacobs WR, Baldacchino T, Dodd TJ, Anderson SR (2018) Sparse Bayesian nonlinear system identification using variational inference. IEEE Trans Autom Control 63(12):4172–4187

  33. Jacquelin E, Baldanzini N, Bhattacharyya B, Brizard D, Pierini M (2019) Random dynamical system in time domain: a POD-PC model. Mech Syst Signal Process 133:106251

  34. Jakeman JD, Eldred MS, Sargsyan K (2015) Enhancing l1-minimization estimates of polynomial chaos expansions using basis selection. J Comput Phys 289:18–34

  35. Jensen JLWV (1906) Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math 30(1):175–193

  36. Kaymaz I (2005) Application of kriging method to structural reliability problems. Struct Saf 27(2):133–151

  37. Kiureghian AD, Stefano MD (1992) Efficient algorithm for second-order reliability analysis. J Eng Mech 117(12):2904–2923

  38. Li G, Rabitz H (2010) D-MORPH regression: application to modeling with unknown parameters more than observation data. J Math Chem 48(4):1010–1035

  39. Li X, Gong C, Gu L, Gao W, Jing Z, Su H (2018) A sequential surrogate method for reliability analysis based on radial basis function. Struct Saf 73:42–53

  40. Low BK, Tang WH (2007) Efficient spreadsheet algorithm for first-order reliability method. J Eng Mech 133(12):1378–1387

  41. Mackay DJ (1995) Probable networks and plausible predictions—a review of practical Bayesian methods for supervised neural networks. Netw Comput Neural Syst 6(3):469–505

  42. Neal RM, Hinton GE (1998) A view of the EM algorithm that justifies incremental, sparse, and other variants. In: Learning in graphical models. Springer, Berlin, pp 355–368

  43. Parisi G (1988) Statistical field theory. Addison-Wesley, Boston

  44. Peierls R (1938) On a minimum property of the free energy. Phys Rev 54(11):918–919

  45. Rackwitz R, Fiessler B (1978) Structural reliability under combined random load sequences. Comput Struct 9(5):489–494

  46. Rajashekhar MR, Ellingwood BR (1993) A new look at the response surface approach for reliability analysis. Struct Saf 12(3):205–220

  47. Ross SM (2007) Introduction to probability models, 11th edn. Academic Press, New York

  48. Salehi S, Raisee M, Cervantes MJ, Nourbakhsh A (2018) An efficient multifidelity l1-minimization method for sparse polynomial chaos. Comput Methods Appl Mech Eng 334:183–207

  49. Shao Q, Younes A, Fahs M, Mara TA (2017) Bayesian sparse polynomial chaos expansion for global sensitivity analysis. Comput Methods Appl Mech Eng 318:474–496

  50. Sobol IM (1967) On the distribution of points in a cube and the approximate evaluation of integrals. USSR Comput Math Math Phys 7(4):86–112

  51. Sobol IM (1990) Quasi-Monte Carlo methods. Prog Nucl Energy 24(1–3):55–61

  52. Song K, Zhang Y, Zhuang X, Yu X, Song B (2020) An adaptive failure boundary approximation method for reliability analysis and its applications. Eng Comput 1–16. https://doi.org/10.1007/s00366-020-01011-0

  53. Steiner M, Bourinet JM, Lahmer T (2019) An adaptive sampling method for global sensitivity analysis based on least-squares support vector regression. Reliab Eng Syst Saf 183:323–340

  54. Tripathy RK, Bilionis I (2018) Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification. J Comput Phys 375:565–588

  55. Wang Z, Shafieezadeh A (2019) REAK: reliability analysis through error rate-based adaptive kriging. Reliab Eng Syst Saf 182:33–45

  56. Wipf D, Nagarajan S (2008) A new view of automatic relevance determination. In: Advances in neural information processing systems, pp 1625–1632

  57. Wu Z, Wang W, Wang D, Zhao K, Zhang W (2019) Global sensitivity analysis using orthogonal augmented radial basis function. Reliab Eng Syst Saf 185:291–302

  58. Xiu D, Karniadakis GE (2002) The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J Sci Comput 24(2):619–644

  59. Xu J, Kong F (2018) A cubature collocation based sparse polynomial chaos expansion for efficient structural reliability analysis. Struct Saf 74:24–31

  60. Zeng P, Li T, Chen Y, Jimenez R, Feng X, Senent S (2019) New collocation method for stochastic response surface reliability analyses. Eng Comput. https://doi.org/10.1007/s00366-019-00793-2

  61. Zhang L, Lu Z, Wang P (2015) Efficient structural reliability analysis method based on advanced kriging model. Appl Math Model 39(2):781–793

  62. Zhao H, Gao Z, Xu F, Zhang Y, Huang J (2019) An efficient adaptive forward–backward selection method for sparse polynomial chaos expansion. Comput Methods Appl Mech Eng 355:456–491

  63. Zhao YG, Ono T (1999) A general procedure for first/second-order reliability method (FORM/SORM). Struct Saf 21(2):95–112

  64. Zhou J, Nowak AS (1988) Integration formulas to evaluate functions of random variables. Struct Saf 5(4):267–284


Author information

Corresponding author

Correspondence to Biswarup Bhattacharyya.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Formulation of VLB by mean field theory

Recall the VLB as given in Eq. (10):

$$\begin{aligned}
{\mathcal {L}}\left[ q\left( \varTheta \right) \right] &= \int q\left( \varTheta \right) \log \frac{p\left( \varTheta ,Y\right) }{q\left( \varTheta \right) }\, {\text {d}}\varTheta &&(51) \\
&= \int q\left( \varTheta \right) \left[ \log p\left( \varTheta ,Y\right) - \log q\left( \varTheta \right) \right] {\text {d}}\varTheta &&(52) \\
&= \int \prod _{i=1}^{n_v} q\left( \varTheta _i\right) \left[ \log p\left( \varTheta ,Y\right) - \sum _{i=1}^{n_v} \log q\left( \varTheta _i\right) \right] {\text {d}}\varTheta &&(53) \\
&= \int q\left( \varTheta _i\right) \left[ \int \log p\left( \varTheta ,Y\right) \prod _{j\ne i} q\left( \varTheta _j\right) {\text {d}}\varTheta _j\right] {\text {d}}\varTheta _i - \int q\left( \varTheta _i\right) \log q\left( \varTheta _i\right) {\text {d}}\varTheta _i + {{\,\mathrm{const}\,}} &&(54) \\
&= \int q\left( \varTheta _i\right) \log {\tilde{p}}\left( \varTheta _i,Y\right) {\text {d}}\varTheta _i - \int q\left( \varTheta _i\right) \log q\left( \varTheta _i\right) {\text {d}}\varTheta _i + {{\,\mathrm{const}\,}} &&(55) \\
&= -{{\,\mathrm{KL}\,}}\left( q\left( \varTheta _i\right) \parallel {\tilde{p}}\left( \varTheta _i,Y\right) \right) + {{\,\mathrm{const}\,}}. &&(56)
\end{aligned}$$

A new probability distribution \({\tilde{p}}\left( \varTheta _i,Y\right)\) is introduced in Eq. (55); it is defined as [9]:

$$\begin{aligned} \log {\tilde{p}}\left( \varTheta _i,Y\right) = {\mathbb {E}}_{j\ne i} \left[ \log p \left( \varTheta ,Y\right) \right] , \end{aligned}$$
(57)

where \({\mathbb {E}}_{j\ne i}\left[ \bullet \right]\) denotes the expectation with respect to all parameters \(\varTheta _j\) with \(j\ne i\). It is evident from Eq. (56) that the VLB is maximized when the KL divergence is minimized, and the KL divergence is minimized when \(q\left( \varTheta _i\right) = {\tilde{p}}\left( \varTheta _i,Y\right)\). Hence, the optimal factor for the i-th parameter satisfies:

$$\begin{aligned} \log q\left( \varTheta _i\right) = {\mathbb {E}}_{j\ne i} \left[ \log p\left( \varTheta ,Y\right) \right] . \end{aligned}$$
(58)
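Equation (58) is the mean-field fixed point: the optimal factor for each parameter block is the exponentiated expectation of the log joint under the remaining factors, and the factors are updated in turn until convergence. A minimal sketch on the classic bivariate-Gaussian example of [9] (the target distribution below is invented for illustration and is not from the paper):

```python
import numpy as np

# Target p(z) = N(mu, Lam^{-1}), parametrized by its precision matrix Lam.
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 0.9],
                [0.9, 1.5]])

# Eq. (58): log q(z_i) = E_{j != i}[log p(z)] + const, which here gives Gaussian
# factors q(z_i) = N(m_i, Lam_ii^{-1}) with coupled mean updates.
m = np.zeros(2)
for _ in range(100):
    m_old = m.copy()
    m[0] = mu[0] - (Lam[0, 1] / Lam[0, 0]) * (m[1] - mu[1])
    m[1] = mu[1] - (Lam[1, 0] / Lam[1, 1]) * (m[0] - mu[0])
    if np.max(np.abs(m - m_old)) < 1e-12:
        break

print("variational means:", m, " exact:", mu)             # means are recovered exactly
print("factor variances :", 1.0 / np.diag(Lam))           # 0.50, 0.67
print("exact marginals  :", np.diag(np.linalg.inv(Lam)))  # 0.68, 0.91 (under-estimated)
```

Each coordinate update maximizes the VLB, so the scheme converges monotonically; the well-known price of the factorized family is the under-estimated marginal variance visible in the last two lines.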

Appendix B: Distributions for likelihood function and prior

The posterior is best estimated by considering the mixed distribution [9]. Hence, the likelihood function is modeled by a normal distribution:

$$\begin{aligned}
p\left( {\mathcal {Y}}|\varPsi ,a,\varsigma \right) &= \prod _{i=1}^{N} {\mathcal {N}}\left( {\mathcal {Y}}_i|\varPsi _i a,\varsigma ^{-1}\right) &&(59) \\
&= \left( \frac{\varsigma }{2\pi }\right) ^{\frac{N}{2}} \exp \left[ -\frac{\varsigma }{2}\sum _{i=1}^{N}\left( {\mathcal {Y}}_i-\varPsi _i a\right) ^2\right] , &&(60)
\end{aligned}$$

where \({\mathcal {N}}\left( \bullet \right)\) denotes the normal distribution. To maintain conjugacy [9], the prior is taken as a conjugate normal-gamma distribution [25]:

$$\begin{aligned}
p\left( a,\varsigma |\omega \right) &= {\mathcal {N}}\left( a|0,\left( \varsigma \varvec{\omega }\right) ^{-1}\right) {{\,\mathrm{Gam}\,}}\left( \varsigma |A_0,B_0\right) &&(61) \\
&= \left( 2\pi \right) ^{-\frac{n}{2}} \left| \varvec{\omega }\right| ^{\frac{1}{2}} \frac{B_0^{A_0}}{\varGamma \left( A_0\right) } \varsigma ^{\frac{n}{2}+A_0-1} \exp \left[ -\frac{\varsigma }{2}\left( a^T\varvec{\omega } a+2B_0\right) \right] , &&(62)
\end{aligned}$$

where \({{\,\mathrm{Gam}\,}}\left( \bullet \right)\) is the gamma distribution and \(A_0,B_0\) are the gamma distribution parameters for \(\varsigma\). \(\omega = \{\omega _1,\omega _2,\ldots ,\omega _{n}\}^T = {{\,\mathrm{diag}\,}}\left( \varvec{\omega }\right)\) is the hyper-prior, and \(\varvec{\omega } \in {\mathbb {R}}^{n \times n}\) is the corresponding diagonal matrix. Further, the hyper-prior is assigned a gamma distribution:

$$\begin{aligned}
p\left( \omega \right) &= \prod _{j=1}^{n} {{\,\mathrm{Gam}\,}}\left( \omega _j|C_0,D_0\right) &&(63) \\
&= \prod _{j=1}^{n} \frac{D_0^{C_0}}{\varGamma \left( C_0\right) } \omega _j^{C_0-1} \exp \left( -D_0\omega _j\right) , &&(64)
\end{aligned}$$

where \(C_0,D_0\) are the parameters of the gamma distribution for the hyper-prior.
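With these conjugate choices, the mean-field recipe of Appendix A yields closed-form updates for the factor parameters \(a_k\), \(h_k\), \(A_k\), \(B_k\) (named in Appendix C) and the ARD moments \({\mathbb {E}}[\omega _j]\). The sketch below implements the standard updates for this normal-gamma/ARD model; the hyper-parameter defaults, initialization and stopping rule are assumptions, since the paper's explicit update equations are not reproduced in this excerpt.

```python
import numpy as np

def vb_ard(Psi, y, A0=1e-6, B0=1e-6, C0=1e-6, D0=1e-6, n_iter=500, tol=1e-8):
    """Mean-field VB for Y = Psi a + noise with the priors of Eqs. (59)-(64).

    Returns the parameters of q(a, vs) = N(a | a_k, vs^{-1} h_k) Gam(vs | A_k, B_k)
    together with the ARD moments E[omega_j].
    """
    N, n = Psi.shape
    PtP, Pty = Psi.T @ Psi, Psi.T @ y
    A_k = A0 + 0.5 * N          # fixed by the model dimensions
    C_k = C0 + 0.5
    E_w = np.ones(n)            # initial guess for E[omega_j]
    for _ in range(n_iter):
        h_k = np.linalg.inv(PtP + np.diag(E_w))   # scaled posterior covariance
        a_k = h_k @ Pty                           # posterior mean of the coefficients
        B_k = B0 + 0.5 * (y @ y - a_k @ Pty)
        # ARD step, using E[vs a_j^2] = (A_k/B_k) a_{k,j}^2 + h_{k,jj}.
        E_w_new = C_k / (D0 + 0.5 * ((A_k / B_k) * a_k**2 + np.diag(h_k)))
        if np.max(np.abs(E_w_new - E_w)) < tol:
            E_w = E_w_new
            break
        E_w = E_w_new
    return a_k, h_k, A_k, B_k, E_w
```

Here the rows of `Psi` are the PCE basis functions evaluated at the design points. A large converged \({\mathbb {E}}[\omega _j]\) shrinks coefficient \(a_j\) toward zero, which is how ARD prunes the basis and produces the sparse PCE models reported in the paper.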

Appendix C: Predictive distribution

The predictive distribution at the new samples \(\varXi _{{{\text {pred}}}}\) can be obtained from the available information \({\mathcal {D}} = \left\{ \varXi ,{\mathcal {Y}} \right\}\) [9]. Marginalizing over the parameters, the predictive distribution is:

$$\begin{aligned}
p\left( {\mathcal {Y}}_{{{\text {pred}}}}|{\hat{\varPsi }}_{{{\text {pred}}}},{\mathcal {D}} \right) &= \iiint p\left( {\mathcal {Y}}_{{{\text {pred}}}}|{\hat{\varPsi }}_{{{\text {pred}}}},a,\varsigma \right) p\left( a,\varsigma ,\omega |{\mathcal {D}} \right) {\text {d}}a \,{\text {d}}\varsigma \,{\text {d}}\omega &&(65) \\
&\approx \iiint p\left( {\mathcal {Y}}_{{{\text {pred}}}}|{\hat{\varPsi }}_{{{\text {pred}}}},a,\varsigma \right) q\left( a,\varsigma \right) q\left( \omega \right) {\text {d}}a \,{\text {d}}\varsigma \,{\text {d}}\omega &&(66) \\
&= \iint {\mathcal {N}}\left( {\mathcal {Y}}_{{{\text {pred}}}}|{\hat{\varPsi }}_{{{\text {pred}}}}\, a,\varsigma ^{-1} \right) {\mathcal {N}}\left( a|a_k,\varsigma ^{-1} h_k \right) {{\,\mathrm{Gam}\,}}\left( \varsigma |A_k,B_k \right) {\text {d}}a \,{\text {d}}\varsigma &&(67) \\
&= \int {\mathcal {N}}\left( {\mathcal {Y}}_{{{\text {pred}}}}|{\hat{\varPsi }}_{{{\text {pred}}}}\, a_k,\varsigma ^{-1}\left( 1+{\hat{\varPsi }}_{{{\text {pred}}}} h_k {\hat{\varPsi }}_{{{\text {pred}}}}^T \right) \right) {{\,\mathrm{Gam}\,}}\left( \varsigma |A_k,B_k \right) {\text {d}}\varsigma &&(68) \\
&= {{\,\mathrm{St}\,}}\left( {\mathcal {Y}}_{{{\text {pred}}}}|\mu ,\lambda ,\nu \right). &&(69)
\end{aligned}$$

It is seen from Eq. (69) that the predictive distribution is a Student's t distribution (denoted by \({{\,\mathrm{St}\,}}\)) with the parameters \(\mu\), \(\lambda\) and \(\nu\). After constructing the adaptive BPCE model, the distribution parameters are given by:

$$\begin{aligned}
\mu &= {\hat{\varPsi }}_{{{\text {pred}}}}\, {\hat{a}} &&(70) \\
\lambda &= \frac{A_k}{B_k} \left( 1+{\hat{\varPsi }}_{{{\text {pred}}}} h_k {\hat{\varPsi }}_{{{\text {pred}}}}^T \right) ^{-1} &&(71) \\
\nu &= 2A_k. &&(72)
\end{aligned}$$

The standard deviation of the predictive distribution is given by:

$$\begin{aligned} \sigma = \sqrt{{ \left( 1+{ {\hat{\varPsi }} }_{{{\text {pred}}}}{ h }_{ k }{ {\hat{\varPsi }} }_{{{\text {pred}}}}^{ T } \right) } \frac{B_k}{A_k -1}}. \end{aligned}$$
(73)
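Equations (70)–(73) translate directly into a prediction routine. A short sketch, assuming `Psi_pred` holds the retained PCE basis evaluated at the new points (one row per point) and using SciPy's Student's t quantiles for the confidence interval mentioned in the abstract:

```python
import numpy as np
from scipy.stats import t as student_t

def predict(Psi_pred, a_k, h_k, A_k, B_k, alpha=0.05):
    """Predictive mean, standard deviation and (1-alpha) interval, Eqs. (70)-(73)."""
    mu = Psi_pred @ a_k                                           # Eq. (70)
    s2 = 1.0 + np.einsum('ij,jk,ik->i', Psi_pred, h_k, Psi_pred)  # 1 + psi h_k psi^T
    lam = (A_k / B_k) / s2                                        # Eq. (71)
    nu = 2.0 * A_k                                                # Eq. (72)
    sigma = np.sqrt(s2 * B_k / (A_k - 1.0))                       # Eq. (73)
    half = student_t.ppf(1.0 - 0.5 * alpha, df=nu) / np.sqrt(lam) # t-quantile band
    return mu, sigma, mu - half, mu + half
```

Note that for \(\nu > 2\) the Student's t variance \(\nu /\left( \nu -2\right) \lambda ^{-1}\) reduces exactly to \(\sigma ^2\) of Eq. (73), so the quantile band and the standard deviation are mutually consistent.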


About this article


Cite this article

Bhattacharyya, B. Uncertainty quantification and reliability analysis by an adaptive sparse Bayesian inference based PCE model. Engineering with Computers 38 (Suppl 2), 1437–1458 (2022). https://doi.org/10.1007/s00366-021-01291-0
