Fudge Factor: A Look at a Harvard Science Fraud Case

Did Marc Hauser know what he was doing?

As of this writing, the precise nature of Marc Hauser’s transgressions remains murky. Hauser is Harvard’s superstar primate psychologist—and, perhaps ironically, an expert on the evolution of morality—whom the university recently found guilty of eight counts of scientific misconduct. Harvard has kept mum about the details, but a former lab assistant alleged that when Hauser looked at videotapes of rhesus monkeys, in an experiment on their capacity to learn sound patterns, he noted behavior that other people in the lab couldn’t see, in a way that consistently favored his hypothesis. When confronted with these discrepancies, the assistant says, Hauser asserted imperiously that his interpretation was right and the others’ wrong.

Hauser has admitted to committing “significant mistakes.” In observing the reactions of my colleagues to Hauser’s shocking comeuppance, I have been surprised at how many assume reflexively that his misbehavior must have been deliberate. For example, University of Maryland physicist Robert L. Park wrote in a Web column that Hauser “fudged his experiments.” I don’t think we can be so sure. It’s entirely possible that Hauser was swayed by “confirmation bias”—the tendency to look for and perceive evidence consistent with our hypotheses and to deny, dismiss or distort evidence that is not.

The past few decades of research in cognitive, social and clinical psychology suggest that confirmation bias may be far more common than most of us realize. Even the best and the brightest scientists can be swayed by it, especially when they are deeply invested in their own hypotheses and the data are ambiguous. A baseball manager doesn’t argue with the umpire when the call is clear-cut—only when it is close.

Scholars in the behavioral sciences, including psychology and animal behavior, may be especially prone to bias. They often make close calls about data that are open to many interpretations. Last year, for instance, Belgian neurologist Steven Laureys insisted that a comatose man could communicate through a keyboard, even after controlled tests failed to find evidence. Climate researchers trying to surmise past temperature patterns by using proxy data are also engaged in a “particularly challenging exercise because the data are incredibly messy,” says David J. Hand, a statistician at Imperial College London.

Two factors make combating confirmation bias an uphill battle. First, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrongheaded conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers. Yet when members of the scientific community see themselves as invulnerable to error, they impede progress and damage the reputation of science in the public eye. The very edifice of science hinges on the willingness of investigators to entertain the possibility that they might be wrong.

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias. In the behavioral sciences, such procedures as control groups, blinded designs and independent coding of data are essential methodological bulwarks against bias. They minimize the odds that our hypotheses will mislead us into seeing things that are not there and blind us to things that are. As astronomer Carl Sagan and his wife and co-author Ann Druyan noted, science is like a little voice in our heads that says, “You might be mistaken. You’ve been wrong before.” Good scientists are not immune to confirmation bias. They are aware of it and avail themselves of procedural safeguards against its pernicious effects.
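
To make “independent coding of data” concrete, here is a minimal sketch, not drawn from the Hauser case, of the kind of check that safeguard enables: Cohen’s kappa, a standard statistic for how often two raters agree beyond what chance alone would produce. The coder names and trial-by-trial codes below are invented purely for illustration.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two coders (Cohen's kappa)."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # Raw agreement: fraction of trials the two coders scored identically.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, from each coder's label frequencies.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        labels = set(counts_a) | set(counts_b)
        expected = sum((counts_a[k] / n) * (counts_b[k] / n) for k in labels)
        return (observed - expected) / (1 - expected)

    # Hypothetical trial-by-trial codes (1 = monkey responded to the novel
    # sound pattern, 0 = it did not); these numbers are invented.
    lead_investigator = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
    blind_coder       = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
    print(f"kappa = {cohens_kappa(lead_investigator, blind_coder):.2f}")  # kappa = 0.40

A kappa near 1 means the independent coder sees what the lead investigator sees; the low value in this invented run (0.40) is exactly the sort of discrepancy that should prompt a second look at the tapes rather than a pronouncement about whose interpretation is right.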