Controlling the False-Discovery Rate in Astrophysical Data Analysis


© 2001. The American Astronomical Society. All rights reserved. Printed in U.S.A.
Citation: Christopher J. Miller et al. 2001 AJ 122 3492. DOI: 10.1086/324109

1538-3881/122/6/3492

Abstract

The false-discovery rate (FDR) is a new statistical procedure for controlling the number of mistakes made when performing multiple hypothesis tests, i.e., when comparing many data points against a given model hypothesis. The key advantage of FDR is that it allows one to control, a priori, the average fraction of false rejections (relative to the null hypothesis) over the total number of rejections performed. We compare FDR with the standard procedure of rejecting all tests that do not match the null hypothesis above some arbitrarily chosen confidence limit, e.g., at 2σ or the 95% confidence level. We find a similar rate of correct detections but significantly fewer false detections. Moreover, the FDR procedure is quick and easy to compute and can be trivially adapted to work with correlated data. The purpose of this paper is to introduce the FDR procedure to the astrophysics community. We illustrate the power of FDR through several astronomical examples, including the detection of features against a smooth one-dimensional function, e.g., seeing the "baryon wiggles" in a power spectrum of matter fluctuations, and source-pixel detection in imaging data. In this era of large data sets and high-precision measurements, FDR provides the means to adaptively control a scientifically meaningful quantity—the fraction of false discoveries over total discoveries.
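The FDR control described above rests on the Benjamini–Hochberg step-up rule: sort the m p-values, find the largest p_(j) lying at or below the line (j/m)·α, and reject every test with a p-value at or below that threshold. The sketch below illustrates that rule for independent tests (i.e., taking the correlation factor c_m = 1); the function name and default α are illustrative, not from the paper.

```python
def fdr_threshold(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up rule for independent tests (c_m = 1).

    Returns the p-value cutoff: every test with a p-value at or below it
    is rejected, and on average a fraction <= alpha of those rejections
    will be false discoveries.
    """
    p_sorted = sorted(p_values)
    m = len(p_sorted)
    threshold = 0.0  # reject nothing if no p-value falls under the BH line
    # Walk up the ordered p-values, keeping the largest one under the line
    # (j / m) * alpha, where j is its 1-based rank.
    for j, pj in enumerate(p_sorted, start=1):
        if pj <= alpha * j / m:
            threshold = pj
    return threshold
```

For example, `fdr_threshold([0.01, 0.02, 0.03, 0.5, 0.6], alpha=0.1)` returns 0.03, so the first three tests are rejected; a fixed per-test cutoff at α = 0.1 would also have rejected them, but FDR adapts the cutoff to the observed p-value distribution rather than fixing it in advance.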

