
Statistica Sinica 33 (2023), 1507-1532

MARGINAL BAYESIAN POSTERIOR INFERENCE
USING RECURRENT NEURAL NETWORKS
WITH APPLICATION TO SEQUENTIAL MODELS

Thayer Fisher, Alex Luedtke, Marco Carone and Noah Simon

University of Washington

Abstract: In Bayesian data analysis, it is often important to evaluate quantiles of the posterior distribution of a parameter of interest (e.g., to form posterior intervals). In multi-dimensional problems with non-conjugate priors, this is often difficult, generally requiring either an analytic or a sampling-based approximation, such as Markov chain Monte Carlo, approximate Bayesian computation, or variational inference. We discuss a general approach that reframes this as a multi-task learning problem and uses deep recurrent neural networks (RNNs) to approximately evaluate the posterior quantiles. Because RNNs carry information along a sequence, this approach is particularly useful for time-series models. An advantage of this risk-minimization approach is that we need neither to sample from the posterior nor to evaluate the likelihood. Lastly, we illustrate the proposed approach using several examples.
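The sketch below is a minimal illustration (not the authors' code) of the risk-minimization idea described in the abstract: pairs of parameters and sequences are simulated from the prior and the sequential model, and an RNN is trained with the pinball (quantile) loss, whose minimizer is the vector of conditional quantiles, so that its output approximates posterior quantiles of the parameter given the observed sequence. The prior, data-generating process, network architecture, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

TAUS = torch.tensor([0.05, 0.5, 0.95])  # posterior quantile levels to learn

def simulate_batch(n, T):
    """Draw theta from a Uniform(-1, 1) prior and simulate an AR(1) series
    x_t = theta * x_{t-1} + eps_t (illustrative data-generating process)."""
    theta = torch.rand(n) * 2 - 1
    x = torch.zeros(n, T)
    for t in range(1, T):
        x[:, t] = theta * x[:, t - 1] + torch.randn(n)
    return x.unsqueeze(-1), theta  # shapes (n, T, 1) and (n,)

class QuantileRNN(nn.Module):
    """GRU mapping a sequence x_{1:T} to estimated posterior quantiles of theta."""
    def __init__(self, hidden=32, n_quantiles=len(TAUS)):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_quantiles)

    def forward(self, x):
        _, h = self.rnn(x)        # final hidden state summarizes the sequence
        return self.head(h[-1])   # (n, n_quantiles)

def pinball_loss(pred, theta, taus=TAUS):
    """Average pinball (check) loss across quantile levels; minimizing it
    drives pred toward the conditional quantiles of theta given x."""
    diff = theta.unsqueeze(1) - pred                     # (n, n_quantiles)
    return torch.mean(torch.maximum(taus * diff, (taus - 1) * diff))

model = QuantileRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, theta = simulate_batch(n=256, T=50)       # fresh simulated pairs each step
    loss = pinball_loss(model(x), theta)         # no likelihood evaluation needed
    opt.zero_grad(); loss.backward(); opt.step()

Note that training uses only the ability to simulate from the prior and the model; neither posterior sampling nor likelihood evaluation is required, consistent with the abstract's description.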

Key words and phrases: Bayesian deep learning, machine learning, quantile estimation.
