
How Your Eyes Search a Scene

Spotting what we are looking for seems simple. It isn't.

Consider this scenario: You are making dinner. You reach into a crowded kitchen drawer to find a paring knife. As you peel potatoes, you glance over at the basketball game on television to check out your team's performance. When your cell phone buzzes with a text message, you dry your hands and reply, picking out the letters one by one on the screen. These three actions—finding a knife, a moving basketball and letters of the alphabet—seem distinct, but all are examples of what is known in cognitive psychology as visual search—the ability to locate specific items in a crowded scene.

We find things so often and so effortlessly that we take this skill for granted, yet identifying what we are looking for is actually a complex psychological feat. The eyes gather tremendous amounts of sensory information—about color, motion, orientation, shape, light and shadow. The brain's task is to synthesize and prioritize all these data, helping us explore the world safely and profitably. Visual search involves not just sight but memory and abstract thought. We have to hold in mind what we are seeking, acquire a range of visual information, remember what we have seen and compare every new object with our mental target.

It cannot be overstated how much we rely on the ability to quickly and effectively search our surroundings. We do it thousands of times a day. Anyone who has spent time around young children knows that they can have a tough time finding a book or toy, even when it is right under their noses. But as we age, we develop useful shortcuts. For example, you know the next word in this sentence will be a short distance to the right. Moreover, when you visit the zoo you look up in the trees to spot a monkey and down toward the rocks to spot a snake. Such habits of mind make most searches easy, but as anyone who has ever opened a Where's Waldo? book knows, searches can also be vexing.


Several factors influence ease of search. The more similar a target is to its background, the trickier it is to find. Suppose you are walking through a snowy winter forest, and a red cardinal is perched on a bare branch. It will immediately “pop out” at you. But if you are looking for a wren in summer, when the bird is the same mottled brownish color as its woody surroundings, your task will be much more effortful. Finding unfamiliar objects—say, your parked rental car or someone else's lost earring—is also challenging for your brain, as is trying to locate multiple things at once.

Although we sometimes must work harder at visual search, we typically succeed. Yet for professional searchers, such as airport baggage screeners and the doctors who scan patients' routine x-rays for incipient tumors, search is a high-stakes and often problematic endeavor. It is rare for a baggage screener to find a weapon or for a firefighter to find a living person in a pile of rubble. But the decision to stop looking is difficult because the cost of missing something could be tragic.

New investigations are suggesting ways to make those expert searches more reliable. And although habits and even personality shape how successful individuals are at finding what they seek, the latest studies indicate that people can train their visual systems to work more efficiently. Humans are adept visual searchers, and now psychology appears poised to reduce even our occasional failures.

What You See Is Not What You Get

People typically feel that their eyes move smoothly across the landscape of the world, continuously taking in what they are looking at, like a video camera. But that intuitive sense of how vision works is an illusion, carefully constructed by the brain.

In reality, our eyes are constantly roving in quick, jerky movements, rarely resting on any one part of the visual scene for more than about a third of a second before jumping to focus on something else. Human vision is a rhythmic alternation: we look intently, rapidly shift to a new target, look intently at that target, then rapidly shift yet again. We take in a scene in multiple scattered snapshots that the brain stitches together into a seamless image. Scientists refer to these alternating eye movements as fixations and saccades. Fixations are the brief periods of looking, and saccades are the even shorter spans when our eyes are moving to their next target [see “Shifting Focus,” by Susana Martinez-Conde and Stephen L. Macknik; Scientific American Mind, November/December 2011].

If we actually saw what our eyes take in, the world would be a chaotic place. But the brain suppresses vision during saccades, so we do not experience the blurriness of those rapid eye movements. This is a seemingly unremarkable fact, until you consider its bizarre corollary: for much of our lives, and without realizing it, we are functionally blind.

Even within the small snapshots that our eyes provide, we cannot fully process all the visual information. The structures in the human eye that support high-resolution vision, called cones, are clustered in a central area of the retina known as the fovea. The other photosensitive structures, called rods, offer much less detail. As a result, we only clearly see the small region in the center of whatever we are looking at. Everything around it is indistinct. For a quick demonstration, try maintaining your focus on this point (*) while reading the words above or below it. Chances are, you can only parse the words situated one or two lines away.

Moving our eyes all around compensates for how little we see at any one time. When searching for a target, such as the login button on a Web site or the soccer ball during a game, you can bring potential areas of interest into focus, obtaining information in bite-size chunks. Your brain makes use of the indistinct information from the periphery of each snapshot to decide where to fixate next.

Also aiding your search is something known as selective attention. The brain focuses on isolated characteristics of the target—its color or shape or movement—and pays attention to those specific aspects of the environment, suppressing the rest. For example, let's say you have misplaced something, such as the ever elusive remote control. In scanning for this object, you will not spend much time gazing at a lampshade, the cat or anything bright or colorful. Instead your eyes will be drawn to other small, drab, rectangular objects, such as a cell phone or eyeglasses case. If, on the other hand, you are in the park on a crowded Sunday looking for a friend who you know is jogging, your attention will be drawn to moving people rather than to those lazing around enjoying a picnic. To sift through the dozens of joggers, you further narrow your attention to someone with a beard and a Red Sox cap. The brain customizes every visual search, recruiting its independent faculties for recognizing shape, color, motion and size to swiftly zero in on the desired target.

Knowing what you are looking for dramatically improves success at visual search. There are situations, however, when expectations become a hindrance rather than an aid.

Out of Mind, Out of Sight

Professional searchers, such as the crews who look for survivors in storm wreckage, face a thornier problem than the rest of us. They are looking for something that they are unlikely to find—something that in the overwhelming majority of instances will not be present. Their predicament is dubbed the low prevalence effect, and it can greatly reduce accuracy. Indeed, a 2010 Norwegian study suggested that the rate of misses for the radiologists who pore over mammogram films looking for tumors is between 20 and 30 percent—a lot higher, we would presume, than your personal failure rate for finding your keys, and a lot more significant.

Miss rates were even greater when a team led by psychologist Trafton Drew of Harvard Medical School and Brigham and Women's Hospital asked 24 experienced radiologists to scan lung x-rays to look for tumors. Unbeknownst to the doctors, the research team had inserted a small picture of a gorilla into one of the slides. Yes, a gorilla. The primate was a reference to the well-known 1999 experiment by psychologist Daniel Simons in which people who were asked to count the passes during a ball game were often so absorbed in their task that they did not notice a person in a gorilla suit who walked through the game. The same kind of thing happened in Drew's lung-scan experiment, which was published earlier this year: a full 83 percent of the doctors failed to notice the gorilla image because they were looking for something else.

The gorilla findings are examples of inattentional blindness—the fact that people often do not notice what they are not paying attention to. The low prevalence effect is slightly different, in that the misses stem from an unconscious mental calculation, not a lack of attention. In a series of experiments in 2007 psychologist Jeremy M. Wolfe, who heads the laboratory at Brigham and Women's Hospital where Drew is affiliated, investigated how the low prevalence effect complicates the work of airport baggage screeners.

In one study, the team asked 10 people to view collages consisting of semitransparent photographs of toys, birds, fruit, clothes and tools. (The participants were not baggage screeners, just assorted volunteers.) Subjects were told to find a tool, but 99 percent of the collages did not contain any. Under these conditions, individuals missed the target 39 percent of the time. But when half the displays contained a tool, the same people made mistakes just 6 percent of the time. That is a huge difference—and a troubling one. Simply put, when targets are rare, people often fail to see them because their visual attention systems learn not to expect anything.

The human mind automatically keeps track of how often a certain kind of thing is found in a specific location—probably because over evolutionary time, having realistic expectations led to more efficient hunting and foraging. But this useful mental habit plagues people such as baggage screeners who are tasked with the high-pressure responsibility of finding potentially catastrophic anomalies. These workers view hundreds (possibly thousands) of x-rayed bags without finding dangerous items, all the while unconsciously building up background knowledge that nothing unusual will be found. When a weapon does show up, then, it may not register in the screener's mind, precisely because it is unexpected.
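Researchers often describe this prevalence-driven shift in the language of signal detection theory: when targets almost never appear, observers unconsciously adopt a more conservative decision criterion, so more targets go unreported even though the eyes and the underlying sensitivity are unchanged. The Python sketch below illustrates that idea; the prevalence, sensitivity and criterion numbers are illustrative assumptions, not figures from Wolfe's experiments.

```python
import random

def simulate_search_block(prevalence, criterion, trials=20000,
                          d_prime=2.0, seed=42):
    """Simulate one block of yes/no search decisions.

    Each trial draws an internal evidence value: target-absent trials
    from a standard normal, target-present trials from a normal shifted
    up by d_prime (the searcher's sensitivity). The searcher reports
    "present" only when the evidence exceeds the decision criterion.
    Returns the miss rate among target-present trials.
    """
    rng = random.Random(seed)
    misses = targets = 0
    for _ in range(trials):
        target_present = rng.random() < prevalence
        mean = d_prime if target_present else 0.0
        evidence = rng.gauss(mean, 1.0)
        if target_present:
            targets += 1
            if evidence <= criterion:
                misses += 1
    return misses / max(targets, 1)

# Sensitivity (d_prime) is identical in both blocks; only the criterion
# shifts, modeling observers who grow conservative when targets are rare.
rare_miss = simulate_search_block(prevalence=0.01, criterion=1.8)
common_miss = simulate_search_block(prevalence=0.50, criterion=1.0)
print(f"rare targets:   {rare_miss:.0%} of targets missed")
print(f"common targets: {common_miss:.0%} of targets missed")
```

In this toy model the rare-target block produces a far higher miss rate than the common-target block, even though the simulated observer's vision never changes—only the expectation does.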

Wolfe and his colleagues tried to counteract the low prevalence effect. First, they paired people with partners, hoping that if one person missed the target, the other would find it. Misses remained high. Next, they forced subjects to search more slowly, by giving them time warnings. That did not work either. The team also had people search simultaneously for common and rare targets. By increasing how often people found something, they hoped to lower their likelihood of missing anything. But this tactic also failed: misses remained high for the rare targets.

We want to emphasize that the participants in these experiments were not careless, incompetent or unmotivated. Nor are the doctors who miss cancers on x-ray films. The low prevalence effect cannot be counteracted by conscientiousness or sheer willpower. It is a quirk of human brain processing, and it happens to everyone. Wolfe's team eventually found, however, that it is possible to diminish this pernicious effect through training.

In their final experiment, they interspersed the search for rare items with brief periods during which the targets became common and searchers learned whether their decisions were correct. These interludes shifted people's expectations, making them more cautious and better prepared to find unusual targets. Miss errors were substantially reduced. The work suggests it might be useful to briefly retrain baggage screeners from time to time, by asking them to search x-rays in which half the bags contain weapons, then giving them feedback on their accuracy.

Professional searchers are not the only ones who can be trained to improve. New experiments suggest that everyone can become a better visual searcher.

The Eye of the Beholder

Although moving your eyes is easy and seemingly automatic, people do it in subtly different ways. Evan Risko, a psychologist at the University of Memphis, studies variations in how people view—literally—the world around them.

In a 2012 experiment Risko and his colleagues focused on an individual's desire to acquire new knowledge or sensory experiences. Participants completed two questionnaires that gauge curiosity levels. Then they looked at photographs of buildings, interiors and landscapes for 15 seconds apiece. The team used an eye tracker to precisely record each person's gaze. Those with greater curiosity visited more regions of the scenes, examining the details of each picture, rather than getting stuck looking in only a few places. The study was the first to suggest that personality type helps to determine one's method of examining things.

It turns out that people differ not only in how much they search but also in how they search. In a study published in 2010 psychologist Marcus Watson and his colleagues at the University of British Columbia recorded the eye movements subjects made while looking for a partial circle hidden among similar shapes on a computer screen. The investigators coached half the participants to use an active search approach and half to search passively. During active search, people move their eyes around more frequently. During passive search, they fixate for longer periods and move their eyes less. In Watson's experiment, the eye-tracking data showed that the passive searchers were more successful. When their eyes fell on the target, they were more likely to detect it, suggesting that they make better use of the information obtained from each fixation.

One can imagine how passive search might be advantageous in the real world. If you are shopping for specific salad ingredients, it might be most effective to wander around the produce aisle and let the desired vegetables “call” your attention as you broadly scan the displays. Passive search, however, is not always the most efficient strategy. If you are waiting for a friend at the mall, it might be helpful to use a “brute-force” approach, rapidly darting your attention around to a clothing store, a nearby coffee shop and the mall entrance.

In the study by Watson and his colleagues, participants tended to be either habitually active or passive seekers, but not both. When given specific instructions, however, everyone was capable of changing their eye movements. This finding implies that people could improve their search abilities by learning to flexibly implement an active or passive approach, depending on the circumstances.

Investigators in other areas, such as video gaming, are also finding that people can improve their search abilities. Avid gamers move their eyes more efficiently than others do in the service of a demanding task. Until recently, though, no one knew whether the games develop those skills or simply attract people who already have them. Neuroscientist Daphne Bavelier of the University of Rochester and the University of Geneva and C. Shawn Green of the University of Wisconsin–Madison set out to answer that question. What they found is evidence that playing video games improves perceptual abilities. Nongamers who spent time with the action-filled Unreal Tournament 2004 improved on a test of visual acuity at which gamers excel—picking out the orientation of a T shape among other T shapes [see “Brain-Changing Games,” by Lydia Denworth; Scientific American Mind, January/February 2013]. And other research suggests that video games can train people to find targets more quickly.

Vision of the Future

Yet games are old news compared with a technology called stroboscopic vision training. Just as long-distance runners practice in the low-oxygen conditions at high altitude to improve their overall performance, vision scientists are obscuring human vision to make it stronger. Stephen R. Mitroff, L. Gregory Appelbaum and their colleagues at Duke University are experimenting with Nike-designed eyewear called Vapor Strobe—goggles that alternate between transparency and opacity, constantly interrupting the wearer's view.

In a 2012 study participants were asked to stare at a cross for just less than half a second (they were not wearing the goggles at this point). As they looked, eight letters, organized in a circle around the cross, appeared for a tenth of a second. This is not enough time for an eye movement, so people could not look at any of the letters directly. Next, after a variable period (ranging from one hundredth of a second to two and a half seconds), a line appeared, pointing to one of the previous letter locations. Individuals had to report which letter had been in that spot. This sounds difficult, but it is a classic test that has been used since the 1960s with great success. When the delay is brief, people are typically 90 percent accurate, revealing that we have a remarkable capacity to retain visual information for short periods.

The next step in this experiment was entirely new: the participants, some of them university athletes, engaged in a variety of physical activities—playing catch, passing a soccer ball or basketball, and practicing dribbling. Half the volunteers wore the stroboscopic eyewear, and half wore eyewear that looked identical but did not interrupt visual input. The first group's task was difficult because they got only momentary glimpses of the location, trajectory and speed of the ball. To move into the right positions to perform a catch, participants had to make efficient use of the visual information they did receive. In essence, this task encouraged their visual systems to work more effectively.

After the physical training sessions, both groups did the letter-identification task again. Both groups performed better on average, but the stroboscopic group showed far larger improvements, suggesting that the training helped them better capture and hold visual memories. Moreover, in a second experiment, participants underwent the same protocol but were not retested on the letter task until 24 hours after the physical training. Still, they showed improvement, demonstrating that the benefits of stroboscopic training are retained for at least a day. Mitroff and his colleagues have used the technology with professional athletes, including the Carolina Hurricanes, an NHL hockey team. That research has not yet been published. But if it reveals that the goggles aid top-notch athletes, whose visual systems are already finely tuned, that would be compelling evidence. Regular people would probably get an even bigger effect. And for athletes, even a tiny visual boost would confer the competitive edge that every team wants.

The take-home message from these various strands of research—on stroboscopic effects, video gamers, baggage screeners, and active versus passive looking—is that with training, people can become better searchers. That should be welcome news to anyone, whether you are a race-car driver or a Boggle-playing retiree. We are a species of seekers, constantly on the lookout for novelty, beauty, companionship, sustenance and meaning. Therefore, it seems perfectly fitting that science is seeking—and finding—ways to improve how we search.

(Further Reading)

Eye and Brain: The Psychology of Seeing. Fifth edition. Richard L. Gregory. Princeton University Press, 1997.

Why We Don't See Lions, Bombs and Breast Cancers. Jeremy M. Wolfe. Published online in “Mind Matters”; Scientific American Mind, December 20, 2011.

Brain Plasticity through the Life Span: Learning to Learn and Action Video Games. Daphne Bavelier, C. Shawn Green, Alexandre Pouget and Paul Schrater in Annual Review of Neuroscience, Vol. 35, pages 391–416; 2012.

Stroboscopic Training Enhances Anticipatory Timing. Trevor Q. Smith and Stephen R. Mitroff in International Journal of Exercise Science, Vol. 5, No. 4, pages 344–353; 2012.

The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers. Trafton Drew, Melissa L.-H. Võ and Jeremy M. Wolfe in Psychological Science (in press).

Michael C. Hout is an assistant professor in the department of psychology at New Mexico State University and principal investigator of the Vision Sciences and Memory Laboratory there.

This article was originally published with the title “To See or Not to See” in SA Mind Vol. 24 No. 3, p. 60
doi:10.1038/scientificamericanmind0713-60