
Overestimation Bias in Self-reported SAT Scores

  • Original Article
  • Educational Psychology Review

Abstract

The authors analyzed self-reported SAT scores and actual SAT scores for five different samples of college students (N = 650). Students overestimated their actual SAT scores by an average of 25 points (SD = 81, d = 0.31), with 10% under-reporting, 51% reporting accurately, and 39% over-reporting, indicating a systematic bias towards over-reporting. The amount of over-reporting was greater for lower-scoring than higher-scoring students, was greater for upper division than lower division students, and was equivalent for men and women. There was a strong correlation between self-reported and actual SAT scores (r = 0.82), indicating high validity of students’ memories of their scores. Results replicate previous findings (Kuncel, Credé, & Thomas, 2005) and are consistent with a motivated distortion hypothesis. Caution is suggested in using self-reported SAT scores in psychological research.
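As an arithmetic check on the effect size reported above: Cohen's d for a within-sample difference is the mean of the difference scores divided by their standard deviation, and 25/81 ≈ 0.31, consistent with the reported value. A minimal sketch:

```python
# Cohen's d for the self-reported minus actual SAT difference scores,
# using the summary statistics reported in the abstract.
mean_diff = 25  # mean overestimation in SAT points
sd_diff = 81    # standard deviation of the difference scores
d = mean_diff / sd_diff
print(round(d, 2))  # 0.31
```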

Notes

  1. We note that the educational psychology class showed the greatest overestimation bias. All students in the class are psychology majors, and the psychology major is one of the most competitive on campus. Our goal in this project was not to make systematic comparisons across classes, but further research on this topic is warranted. We also recognize that many of the upper-division students come from this class, so differences between upper- and lower-division students likewise merit further study.

  2. Given that three t-tests were conducted on the same data (partitioned for high versus low SAT score, lower versus upper division, and men versus women, respectively), there is a danger that the Type I error rate was inflated. To address this issue, we applied a Bonferroni procedure; all differences reported as significant in the results section remained statistically significant under the adjusted criterion.

  3. We distinguish between validity—the degree to which the reported scores correlate with actual scores—and bias—the degree to which reported score is systematically greater than actual score. It is possible for self-reported scores to be valid and biased, as was the case in this study.
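The Bonferroni procedure described in Note 2 divides the familywise significance level by the number of tests conducted. A minimal sketch, assuming the conventional α = 0.05 (the article does not report its exact alpha level):

```python
# Bonferroni correction: divide the familywise alpha by the number of tests.
alpha = 0.05   # assumed conventional familywise significance level
n_tests = 3    # high vs. low SAT, lower vs. upper division, men vs. women
adjusted_alpha = alpha / n_tests
print(round(adjusted_alpha, 4))  # 0.0167
```

Each of the three t-tests would then be judged against the adjusted criterion rather than the nominal 0.05 level.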

References

  • Halpern, D. F. (2000). Sex differences in cognitive abilities (3rd ed.). Mahwah, NJ: Erlbaum.

  • Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75, 63–82.

  • Zwick, R. (2002). Fair game? The use of standardized admissions tests in higher education. New York: Routledge Falmer.

Author information

Correspondence to Richard E. Mayer.

Additional information

This research was supported by a grant from the Andrew W. Mellon Foundation.

About this article

Cite this article

Mayer, R.E., Stull, A.T., Campbell, J. et al. Overestimation Bias in Self-reported SAT Scores. Educ Psychol Rev 19, 443–454 (2007). https://doi.org/10.1007/s10648-006-9034-z

  • DOI: https://doi.org/10.1007/s10648-006-9034-z
