Regression Approximations to Estimate Sensitivities

Chapter in Uncertainty Quantification and Predictive Computational Science

Abstract

Chapter 5 explores the idea of using regression problems to estimate sensitivities. Section 5.1 explains how one might approximate the gradient of the QoI at a nominal point using a least-squares (regression) formulation. This naive approach requires more QoI evaluations than the one-sided finite differences described in the previous chapter. Section 5.2 introduces a regularization term into the least-squares minimization problem, allowing for useful solutions even when fewer QoI evaluations than parameters are available; sparsity-promoting regularization (1-norm, LASSO) and a combination of 1-norm and 2-norm penalties (the elastic net) are considered. Section 5.3 adds cross-validation techniques for selecting the regularization parameters.
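To make the abstract's pipeline concrete, here is a minimal sketch in Python. It is not the chapter's code: the toy QoI, the perturbation size, the sample counts, and the choice of scikit-learn's LassoCV and ElasticNetCV solvers are all illustrative assumptions.

```python
# A minimal sketch of the chapter's pipeline, not the book's own code.
# The toy QoI, perturbation size, sample counts, and the use of
# scikit-learn's cross-validated solvers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV

rng = np.random.default_rng(42)
p = 20                       # number of parameters
x_bar = np.ones(p)           # nominal point

def qoi(x):
    # Toy QoI: only 3 of the 20 parameters matter, so the gradient is sparse.
    return 2.0 * x[0] - 1.5 * x[3] + 0.5 * x[7]

# Sec. 5.1: plain least squares needs at least p QoI evaluations
# (plus one more at the nominal point).
n = 30
dX = 0.01 * rng.standard_normal((n, p))             # parameter perturbations
dQ = np.array([qoi(x_bar + d) for d in dX]) - qoi(x_bar)
g_ls, *_ = np.linalg.lstsq(dX, dQ, rcond=None)      # gradient estimate

# Secs. 5.2-5.3: with fewer evaluations than parameters (m < p), regularized
# regression with a cross-validated penalty can still recover the gradient.
m = 12
dX_small = 0.01 * rng.standard_normal((m, p))
dQ_small = np.array([qoi(x_bar + d) for d in dX_small]) - qoi(x_bar)

lasso = LassoCV(cv=4, fit_intercept=False).fit(dX_small, dQ_small)  # 1-norm
enet = ElasticNetCV(cv=4, l1_ratio=0.5,
                    fit_intercept=False).fit(dX_small, dQ_small)    # 1- and 2-norm

print("least squares:", np.round(g_ls, 2))
print("LASSO        :", np.round(lasso.coef_, 2))
print("elastic net  :", np.round(enet.coef_, 2))
```

The least-squares branch uses 30 evaluations for 20 parameters, while the regularized fits use only 12, which is exactly the underdetermined regime that Sect. 5.2 targets; cross-validation (here, 4-fold via the CV solvers) selects the regularization parameters as in Sect. 5.3.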

Wo! Nemo, toss a lasso to me now!

—Dona Smith


Notes

  1. We have switched the notation for the number of parameters here so that, when we form matrices, the indices will be the common i and j for row and column, respectively.

  2. The extra solve comes from needing to compute \(Q(\bar{\mathbf{x}})\).

  3. The constraint form can be changed into the penalty form by treating λ as a Lagrange multiplier; there is a one-to-one relationship between λ and s (the two forms are written out after these notes).

  4. Latin hypercube designs are covered in Sect. 7.2.2.
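For concreteness, the equivalence in note 3 can be written out. This sketch uses the generic LASSO notation of Tibshirani (1996), with data \(\mathbf{y}\), design matrix \(\mathbf{X}\), and coefficients \(\boldsymbol{\beta}\), rather than the chapter's own symbols:

\[
\min_{\boldsymbol{\beta}} \|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_2^2 \;\; \text{subject to} \;\; \|\boldsymbol{\beta}\|_1 \le s
\qquad \Longleftrightarrow \qquad
\min_{\boldsymbol{\beta}} \|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_2^2 + \lambda \|\boldsymbol{\beta}\|_1 .
\]

A larger λ corresponds to a tighter constraint (smaller s): λ → 0 recovers ordinary least squares, while increasing λ drives more coefficients exactly to zero.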

References

  • Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer Science & Business Media, New York

  • Hoerl AE, Kennard RW (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1):55–67

  • Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodological) 58(1):267–288

  • Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B (Stat Methodol) 67(2):301–320



Copyright information

© 2018 Springer Nature Switzerland AG


Cite this chapter

McClarren, R.G. (2018). Regression Approximations to Estimate Sensitivities. In: Uncertainty Quantification and Predictive Computational Science. Springer, Cham. https://doi.org/10.1007/978-3-319-99525-0_5
