EGU24-6281, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-6281
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Tail calibration of probabilistic forecasts

Sam Allen1, Jonathan Koh2, and Johanna Ziegel1
  • 1Seminar for Statistics, ETH Zurich, Switzerland (saallen@ethz.ch)
  • 2Institute of Mathematical Statistics and Actuarial Science, University of Bern, Switzerland

Probabilistic forecasts comprehensively describe the uncertainty in an unknown future outcome, making them essential for decision making and risk management. While several methods have been introduced to evaluate probabilistic forecasts, existing evaluation techniques are ill-suited to forecasts for extreme events, which are often of particular interest due to the impact they have on forecast users. In this work, we reinforce previous results on the deficiencies of proper scoring rules when evaluating forecasts for extreme outcomes, demonstrating that broad classes of scoring rules cannot distinguish between forecasts with correct and incorrect tail behaviour. Alternative methods to evaluate forecasts for extreme events are therefore required. To this end, we introduce several notions of tail calibration for probabilistic forecasts, which allow forecasters to assess the reliability of their predictions for extreme outcomes. We study the relationships between these different notions and provide several examples. We then demonstrate how these tools can be applied in practice by implementing them in a case study on European precipitation forecasts.
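The abstract does not give the formal definitions of tail calibration. One natural diagnostic in this spirit, sketched below as an assumption on our part rather than the authors' exact construction, is a conditional probability integral transform (PIT): restrict attention to outcomes exceeding a high threshold and transform them with the forecast CDF renormalised over the tail. If the forecast tail is reliable, these conditional PIT values should be approximately uniform on [0, 1]. The function name and the Gumbel toy data are illustrative choices, not taken from the work itself.

```python
import numpy as np

def conditional_pit(cdf, obs, threshold):
    """Conditional PIT values for observations exceeding `threshold`.

    The forecast CDF is renormalised over the exceedance region, so
    a tail-reliable forecast yields values ~uniform on [0, 1].
    """
    exceed = obs[obs > threshold]
    p_t = cdf(threshold)
    return (cdf(exceed) - p_t) / (1.0 - p_t)

# Toy check: the "ideal" forecaster issues the true (Gumbel) distribution,
# so the conditional PIT values of the exceedances should look uniform.
gumbel_cdf = lambda x: np.exp(-np.exp(-np.asarray(x, dtype=float)))
rng = np.random.default_rng(0)
y = rng.gumbel(size=10_000)          # synthetic observations
z = conditional_pit(gumbel_cdf, y, threshold=2.0)
```

In practice one would inspect a histogram of `z` (and repeat over a range of thresholds) to judge whether the forecast tail is miscalibrated; deviations from uniformity indicate unreliable tail behaviour.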

How to cite: Allen, S., Koh, J., and Ziegel, J.: Tail calibration of probabilistic forecasts, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6281, https://doi.org/10.5194/egusphere-egu24-6281, 2024.