Looks Real, or Really Fake? Warnings, Visual Attention and Detection of False News Articles

Abstract In recent years, online misinformation designed to resemble news by adopting news design conventions has proven to be a powerful vehicle for deception and persuasion. In a 2 (prior warning: present/absent) x 2 (article type: false/true) eye-tracking experiment, news consumers (N=49) viewed four science news articles from unfamiliar sources, then rated each article for credibility before being asked to classify each as true news or as false information presented as news. Results show that reminding participants about the existence of fake news significantly improved correct classification of false news articles, but did not lead to a significant increase in misclassification of true news articles as false. Analysis of eye-tracking data showed that duration of visual attention to news identifier elements, such as the headline, byline, and timestamp on a page, predicted correct article classification. Implications for consumer education and information design are discussed.

Misinformation spread through digital media is a complex problem that became more salient in consumers' minds as a result of coverage of the role it may have played in the 2016 United States presidential election. Particular concern has been raised about the role of false or so-called "fake" news, or misinformation that resembles news content in presentation and format. Lazer and colleagues define fake news as "fabricated information that mimics news media content in form but not in organizational process or intent" (2018, p. 194). While so-called "clickbait" has been seen as a nuisance and mild threat to journalistic credibility for years (e.g., Chen, Conroy, & Rubin, 2015), the deliberate spread of fictitious news articles to sway public opinion has brought the phrase "fake news" into the public discourse (Tandoc, Lim, & Ling, 2018), and has raised new concerns about how consumers evaluate information.
Online misinformation, fueled by the speed and ease of social media dissemination, has been under the lens of social scientists for much of this decade, although much of the research has focused on the effects of misinformation posted directly on social media (Bode & Vraga, 2015; Shin, Jian, Driscoll, & Bar, 2011), using media messages to correct misinformed beliefs (Sangalang, Ophir, & Cappella, 2019; Hameleers & van der Meer, 2019; van der Meer & Jin, 2019), and retracting information in news articles (e.g., Gordon, Brooks, Quadflieg, Ecker, & Lewandowsky, 2017). Less studied, but crucial to the understanding of misinformation psychology, is the process through which consumers view, process, and evaluate the veracity of online articles from unfamiliar sources that use news design elements to make misinformation seem factual or credible. These false news articles rely on social media for their spread (Nelson & Taneja, 2018), to an even greater degree than other online news stories do (Vosoughi, Roy, & Aral, 2018; Allcott & Gentzkow, 2017). As a result, an article from a Web publisher whose site few users would visit on their own can still end up being seen by tens or hundreds of thousands of social media users, passed on by consumers who think the information is valid, and sometimes amplified by bots or false accounts (McCright & Dunlap, 2017; Shao et al., 2018).
As publishers, politicians, and social scientists seek answers to the question of how to curb the spread of misinformation, information about how and when consumers make decisions about the credibility of novel online information, especially from unknown sources, would be a valuable piece of the puzzle. While journalism credibility is a well-researched construct (Borah, 2014; Cassidy, 2007; Sundar, Knobloch-Westerwick, & Hastall, 2007), it's unclear how message characteristics such as content and design interact with user motivations to shape credibility judgments. The present research utilized a 49-participant mixed-factorial eye-tracking experiment with the aim of contributing to the nascent literature on false news and information processing in three key areas. First, it sought to provide one of the first examinations of how online news readers visually attend to areas of the article page on fake news stories in comparison to real news stories. Second, by examining the effects of forewarning participants about the existence of misinformation on science topics, the study sought to examine whether the salience of fake news influenced which elements of news stories participants viewed. Third, the study sought to gauge the relationship between users' visual attention to several design elements on the article pages, such as source information, story recency and authorship information, internal story links, and external page links, and consumers' credibility evaluations and ability to detect "fake" news stories.

Literature Review
The selective presentation of information to achieve a persuasive goal can trace its lineage back to at least the early religious propaganda of the 16th century. However, misinformation styled to resemble professionally produced news, also known as false news or "fake" news (Brennen, 2017), reached the public consciousness in part due to concerns about its potential role in influencing the 2016 Brexit vote in the U.K. and the outcome of the 2016 U.S. presidential election. During the election, several politically centered fake news stories were spread via social media platforms and several online communities. While the reach of the fake news stories and their influence on the election have been debated (Guess, Nyhan, & Riefler, 2018), it's likely that the stories influenced the behavior of some voters, and that coverage of them brought the problem of misinformation to public attention. One analysis documented that articles that can be identified as fake news increased more than tenfold between 2014 and 2016 (Vargo, Guo, & Amazeen, 2018).
In the months following the 2016 election, the Washington Post published an exposé that placed the blame for the flood of fake news on sophisticated Russian bots that pumped out a steady stream of lies that "exploited American-made technology platforms to attack U.S. democracy at a particularly vulnerable moment, as an insurgent candidate harnessed a wide range of grievances to claim the White House" (Timberg, 2016). According to Clint Watts, a fellow at the Foreign Policy Research Institute who had tracked Russian propaganda since 2014, the purpose of this interference went far beyond a preference for one candidate over the other (Timberg, 2016). Perhaps the most notorious example was a story known as "Pizzagate" (Aisch, Huang, & Kang, 2016), suggesting that several prominent Democratic politicians were involved in a human trafficking ring coordinated from a Washington, D.C.-area pizza parlor. The story broke via a white supremacist Twitter account on October 30, 2016, and quickly spread through fringe sites such as GodlikeProductions.com before reaching the "mainstream internet" through the discussion forum Reddit (Aisch et al., 2016; Berghel, 2017). Subsequent investigations found that none of the accusations were rooted in truth, but by then Clinton had lost the election. The mainstream press, in the process of reporting on the spread of the rumor and debunking its claims, may have contributed to legitimizing the hoax (Mihailidis & Viotty, 2017).
In the years since 2016, the spread of misinformation related to political and health topics continues to plague the Internet. Misinformation related to health and science, such as the purported dangers of vaccines, climate change denial, and the purported dangers of genetically modified organisms, is often steeped in the same partisanship and ideology as politicized misinformation (Paarlberg, 2001; Christoforou, 2004; Colgrove, 2006; Giddens, 2009; Bernauer, 2013; Gostin, 2015). Bots, which are applications that run automated tasks over the internet, and trolls, which are malicious accounts set up to antagonize and spread misinformation (Badawy, Ferrara, & Lerman, 2018), have also become a growing public health problem. Broniatowski et al. (2018) found that Russian bots and trolls had been spreading misinformation regarding the safety and effectiveness of vaccines.
The spread of health and science misinformation can have negative consequences for both those who believe it and the general public. In the last two years there have been three separate measles outbreaks in the United States that were all found to have links to anti-vaccine communities (Zipprich et al., 2015; Gastañaduy et al., 2016; Hall et al., 2017). In fact, after an outbreak in Minnesota in 2017 that caused 79 cases of measles within a Somali immigrant community, local health department officials found that many members of the community had become more ardent in their refusal to vaccinate (Sun, 2017). The choice not to vaccinate children, particularly with the measles, mumps, and rubella (MMR) vaccine, is generally related to the work of Andrew Wakefield, a former gastroenterologist and medical researcher who published the article "Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children" in the medical journal The Lancet in 1998. Wakefield claimed that he had discovered a link between the MMR vaccine and developmental disorders in children (Wakefield et al., 1998). Multiple attempts to replicate Wakefield's results failed, and The Lancet eventually issued first a partial and then a full retraction of the paper. Communities of "anti-vaxxers" began to form in the years after the research was originally published, despite the results and data being found to be fraudulent and Wakefield facing criminal charges for his claims (Flaherty, 2011).
Fringe communities such as "anti-vaxxers" rely upon social media platforms to spread misinformation amongst each other and to convert the uninformed into believers. Topics related to scientific news and conspiracy theories create polarized and homogeneous communities with distinct information consumption patterns (Del Vicario et al., 2016). That study, a massive quantitative analysis of Facebook, found that these echo chambers create a self-reinforcing flow of information driven by confirmation bias. These communities usually single out a topic that is ideologically driven, such as climate change denial, the autism-vaccine link, or the Ebola outbreak of 2014, and cultivate a narrative that fits their belief system in spite of substantial evidence to the contrary (Harvey, 2016). Like Pizzagate, these unproven narratives often spread from small pockets on Facebook to the "mainstream internet," and what was once a fringe belief begins to gain clout and credibility. Once this credibility has been gained, the line between legitimate information and information that is biased or unverified becomes difficult to draw. Purveyors of intentionally fake news then exploit this fractured information environment to spread misinformation.

Where Fake News Comes From
Producers of fake news tend to share two common characteristics that differentiate them from other, even heavily slanted, news producers. First, they have no investment or interest in accurate reporting or the inherent truth of their content. Second, they are not interested in their long-run reputation or credibility, but instead focus on short-term exposure and the number of clicks in an initial period of exposure (Allcott & Gentzkow, 2017; Shane, 2017; Rochlin, 2017).
Fake news is often a for-profit venture (Silverman & Alexander, 2016;Soares, 2017), modeled on the idea that advertisers are less interested in the content of a website and are instead interested only in how many clicks an article can generate. This allows sites to create sensational clickbait-like headlines that generate large numbers of visits resulting in more engagements with the advertisements (Burkhardt, 2017). However, ideology is also a key motivator for some fake news producers and there have been many instances where individuals create fake news to promote a particular candidate or position (Wardle, 2017;Allcott & Gentzkow, 2017).
Defining exactly what fake news looks like is difficult, as the term has come to mean a rather large variety of information ranging from satire to blatant and maliciously false material (Tandoc, Lim, & Ling, 2018; Wardle, 2017). One of the most deceptive types of fake news is true information presented in such a way that it becomes misleading because of its connection to deceptive and sensational headlines and visuals (Mourão & Robertson, 2019), usually with the aim of drawing traffic to an article (Wardle, 2017). Presenting material in this way has the potential to be highly misleading, as headlines have been shown to have a significant biasing effect on the way readers view the rest of the content. Misleading headlines also significantly affect the way audiences remember the articles they read (Ecker, Lewandowsky, Chang, & Pillai, 2014).
Similarly, it is common for fake news pieces to take true content and present it in a completely false context (Wardle, 2017). For example, websites like Politico.com and Politifact.com noticed that a television ad for Donald Trump in the 2016 presidential election included a video clip of many people running across a political border. The clip was presented in a misleading context, leading viewers to believe that it showed people streaming into the United States from Mexico. However, the clip was not of the Mexican-American border at all, but rather footage from an Italian television network showing the border between Morocco and the Spanish-owned city of Melilla (Emery & Jacobson, 2016; Collins, 2016). While not necessarily presenting false or manufactured content, the false context in which this video clip was presented is clearly misleading (Wardle, 2016).
Finally, fake news websites are often intentionally constructed and named so that they resemble the websites of more credible news sources (Allcott & Gentzkow, 2017; Wardle, 2016; Lazer et al., 2018). One example is the fake news website abcnews.com.co, which very closely resembles the American Broadcasting Company's news website abcnews.go.com (Berghel, 2017). Similarly, a website called the Denver Guardian received wide circulation on social media for a story entitled "FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide" (Allcott & Gentzkow, 2017; Lubbers, 2016). The Denver Guardian claimed to be Denver's oldest source of news, and the site was built to resemble a newspaper's website. However, not only was it not Denver's oldest news source, but the story was proven to be completely false (Allcott & Gentzkow, 2017; Lubbers, 2016; Berghel, 2017). Even so, the story was shared on social media over half a million times (Sydell, 2016). This type of fabricated content is perhaps the most deceptive form of fake news, as it is completely false and usually designed to do some type of harm (Wardle, 2017).

Consumers' Identification and Evaluation of Fake News
Given the heightened concern about the effects on political and social discourse of consumers being "duped" by misinformation, researchers have sought to identify effective interventions to decrease the likelihood of consumers falling for misinformation, particularly in the form of false news articles and social media posts. Two broad categories of such research can be defined by the timing of the intervention: either "inoculation" approaches that include a warning delivered prior to a user's exposure to an article, or exposure to a "correction" after reading the article. While these are not the only two approaches, as some scholars have focused on increasing media literacy through formal education programs (e.g., Turner et al., 2017) or game interventions (Roozenbeek & van der Linden, 2018), they have been the ones most broadly examined to date. The justification for literacy-based approaches finds some support in studies that show relationships between news media literacy and decreased belief in online conspiracies (Craft, Ashley, & Maksl, 2017).
There appears to be a disconnect between news consumers' confidence in their ability to recognize fake news and their actual ability to do so in practice. News readers generally view themselves as able to recognize fake news at a higher rate than others, indicating evidence of a third-person effect (Jang & Kim, 2018). That said, motivation for accuracy is just one of many competing motivations that drive consumers' behavior when viewing online content (Britt, Rouet, Blaum, & Millis, 2018). Some have argued that solely literacy-based approaches are doomed to fail because they do not take into account partisan motivations in media consumption, and differences between the scrutiny consumers apply to media messages that support pre-existing ideology and those that oppose it (boyd, 2017; Tully & Vraga, 2017).
Correction studies have employed a variety of methods for informing consumers that information they have already consumed is inaccurate or deliberately misleading (for a review, see Cook, Ecker, & Lewandowsky, 2015). These corrections can take the form of replies or comments on social media posts (Vraga & Bode, 2017). Studies of correction approaches have shown a persistent effect of misinformation consumption: despite exposure to correction, some portion of message consumers will continue to believe and rely upon misinformation in decision-making (Lewandowsky, Ecker, & Cook, 2017).
Inoculation approaches have instead gauged the effects of reminding online news readers of the existence or threat of misinformation, either prior to general browsing or prior to clicking a specific disputed article. There are several advantages to warning-based approaches over corrective approaches for reducing the influence of misinformation. First, scholars have shown that as users encode information, they build mental models of the situation (Bower & Morrow, 1990; Swire & Ecker, 2018). Encoding information from a fake news story and then being informed that the information is not true requires users to wholly discard that mental model. If users are given a complete alternative explanation for the information, they may still discard the model and let the new information supplant it. However, if users cannot replace the information with a complete model, they show persistent reliance on misinformation to complete it (e.g., Ecker, Lewandowsky, & Apai, 2011). As a result, effects of misinformation persist after correction, especially in attitudes and judgments formed on the basis of the misinformation.
Because a warning should heighten users' defensive processing of content they are exposed to, it should increase the likelihood of users applying greater scrutiny to the content and/or design of fake news pages, which should lead to greater recognition of fake news articles. Based on the literature in this area, we sought to test the following research question: RQ1: For readers of online articles from unfamiliar sources, what is the relationship between a prior warning about the existence of science misinformation and participants' ability to correctly identify science misinformation as true or false?

News design and information processing
Despite the recent rise in research on ways to combat the spread and influence of misinformation, little work has been done to identify the factors in the format and appearance of fake news articles on which consumers base their initial determination that an article is "real" or "fake." While fake news articles generally borrow a number of conventions of online news sites, including page layout, typographic salience cues, headlines, images, and links to other articles, the extent to which these design elements shape consumers' classification of these articles as true or false has not been studied in detail. One study asked news readers to read one article from a fictitious news source and one from a familiar, established source, and to report what factors they believed influenced their perceptions of its credibility (Keib & Wojdynski, 2017). The most frequent answers to an open-ended measure named spelling and grammar issues, the appearance and relevance of images on the page, and the quality and type of the advertisements on the page as indicators of reputability.
Much like native advertising (e.g., Wojdynski & Golan, 2016), false news publishers seek to lend credibility to their message by borrowing design elements from real news stories, with the goal of lowering consumers' defensive processing. By doing so, they rely on users' existing mental models, or schemata, of what news information looks like. Schema theory, first outlined by Bartlett (1932), focuses on the existence, and persistence, of mental models of information in the minds of individuals, which shape the way they process new information and interact with the world. Individuals often process information using conceptual processing (Rumelhart, 1984), which involves retrieving a schema, or mental model, from past experience if it has the potential to make a new situation or stimulus easier to interpret. These schemata can represent concrete or abstract knowledge (Rumelhart & Ortony, 1977). It is this kind of schema that false news purveyors are trying to leverage by presenting their information with the hallmarks of a news story: headline, byline, single-column text formatting, use of news structures such as inverted pyramid style and interpolation of quotes from sources, and presentation of links to other "articles" alongside the main message. False news publishers hope that by triggering news schema, and news processing patterns on the part of the reader, the reader will be more likely to approach the article with the assumptions she brings to reading news, rather than those she typically brings when exposed to a persuasive attempt.
Recent scholarship has shown that cloaking advertising content in the guise of news leads consumers to deactivate advertising schema, and decreases their likelihood of counterarguing the message (Kim & Hancock, 2017). While less work has been done on studying audiences' recognition or application of specific schemata for news, van Dijk (1983) outlined several aspects of news article organization as part of textual news schema, including the hierarchical organization of articles. Moreover, news design elements such as headlines, copy text, and the absence of a sponsor logo (Armstrong, Gurol, & Russ, 1980; Cameron & Ju-Pak, 2000) can serve as important identifiers of content as journalism rather than persuasive content, which can lead readers to apply their mental models of the former, not the latter, to the content. Based on existing research on news article viewing (Gibbs & Bernas, 2009), the present research operationalized elements that may signify that an article is news as the presence of a byline, datestamp, and headline, and sought to examine the role attention to these elements played in the evaluation and classification of news articles from unfamiliar sources. Our study of the role of these elements was driven by the following research question: RQ2: For readers of online articles from unfamiliar sources, to what extent do participants pay attention to article identifier elements (byline, datestamp, headline) when viewing news articles from an unfamiliar source?

Overview
In a 2 (warning: present/absent) x 2 (article veracity: true/false) mixed-factorial design, participants were asked to view and evaluate four science articles online. Warning was manipulated as a between-subjects variable: participants were randomly assigned to either receive or not receive a forewarning reminding them that fake science news is prevalent on the internet before browsing the four articles. Article veracity was a within-subjects variable, with each participant being exposed to two true stories and two fake news stories, presented in a random order. Each participant was asked to read the four articles as they would on their own and allowed to progress to the next article at their own pace, while the location and duration of their gaze was recorded unobtrusively by eye-tracking hardware. Once participants had finished reading the articles, they completed dependent measures and demographic information, were debriefed about the purposes of the study, and were dismissed.

Participants
Forty-nine university students (undergraduate and graduate) at a large public U.S. university were recruited from a student research participation pool and given class credit in exchange for participating in this study. Mean participant age was 20.5 years. Roughly two-thirds of participants were women (69.4 percent), and the racial breakdown of participants was 75.5 percent White, 14.3 percent Black or African American, and 10.2 percent Asian. This is relatively consistent with the U.S. Census (2018) racial demographic breakdown of the United States (White 76.6 percent, Black or African American 13.4 percent, Asian 5.8 percent). Participants varied in their self-reported typical online news consumption habits: 30.2 percent reported consuming online news daily, 24.5 percent three or four times per week, 16.3 percent about twice a week, 20.4 percent once a week, and 8.2 percent never.

Procedure
Participants arrived at their scheduled appointment time to a campus research laboratory, which featured a desk with a desktop monitor, keyboard, and mouse, and a separate researcher desk. The monitor was equipped with a Tobii X2-60 eye tracker mounted below the screen. After providing written informed consent, participants took part in a brief exercise to calibrate the eye tracker to their eyes. All data were collected in December 2017.
After the eye tracker was successfully calibrated, participants received the between-subjects manipulation in the form of oral instructions from the researcher. Participants in the no-warning condition were told: "Now, we would like you to view four short online articles. Please read them as you think you would if you encountered them on your own. Once you've finished an article, you may use the F10 button on your keyboard to advance to the next article. When you have finished all four articles, please let the researcher know, and we will take you to some questions about them." Participants in the warning condition received the same instructions, but were also read the following statement before the researcher loaded the first article on screen: "Although online political misinformation has received a lot of coverage in the past year, 'fake news' about science-related topics reaches far more readers each year." The order in which participants viewed the four articles was randomly generated for each participant to minimize study-wide order effects.

Stimulus Materials
Each participant viewed the same four articles, which were presented in a randomly generated order in each session. All four were published articles, shown live from their original source URLs. The articles ranged in length from 412 to 748 words. Two articles were chosen to represent false content from disreputable publishers, and two articles were chosen to represent true content from real publishers that would likely not be familiar to participants.
The first false science news article was entitled "Amateur Divers Find Long-Lost Nuclear Warhead," published by the site WorldNewsDailyReport.com. The 582-word article, which included two images, told a story of a vacationing couple who allegedly found a suspicious object on vacation while diving that turned out to be a U.S. hydrogen bomb lost during the 1950s.
The other false science news article was entitled, "Devastating Snowfall Predicted for NW Montana in Early 2018," published by the site React365.com. The article purported to describe predictions from a NASA climate research lab that the Northwest Corridor, and in particular the state of Montana, would receive anomalously high snowfall in 2018. The article included a quote from a NASA analyst, and included information about emergency evacuation procedures.
The first true science news article was titled "The FDA Says MDMA is a 'breakthrough' drug for PTSD patients," republished by RawStory.com with permission from Popular Science. The article included a date (within four months of data collection) and author byline. The 748-word article summarized a recent scientific discovery about potential treatment uses of the drug commonly known as "ecstasy", including links to the scientific journal in which the research appeared and several links to articles about earlier, related research.
The second true science news article was titled "Skin pigment could power new implantable battery," and was published by New Scientist. The 485-word article described some ongoing research into the development of a battery powered by melanin.

Measures
Perceived credibility was measured using items from existing scales (Gaziano & McGrath, 1986; Appelman & Sundar, 2016). Participants rated their agreement with six statements about the credibility of the article (e.g., "This article told the whole story"; "I found this article to be believable") on a scale of one to seven (1 = strongly disagree, 7 = strongly agree). The items proved internally consistent, with Cronbach's alpha values for each of the four stories ranging between .899 and .930. The six items were then averaged to form one overall credibility measure for each story.
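The reliability check and index construction described above can be sketched as follows. This is a minimal illustration with hypothetical ratings, not the study's data; the Cronbach's alpha formula itself is the standard one based on item variances and the variance of summed scores.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-participant item-rating lists."""
    k = len(items[0])                                   # number of items
    item_vars = sum(variance(col) for col in zip(*items))  # sum of item variances
    total_var = variance([sum(row) for row in items])      # variance of sum scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical ratings: 5 participants x 6 credibility items (1-7 scale)
ratings = [
    [6, 6, 5, 6, 7, 6],
    [3, 2, 3, 3, 2, 3],
    [5, 5, 4, 5, 5, 5],
    [2, 3, 2, 2, 3, 2],
    [7, 6, 7, 7, 6, 7],
]
alpha = cronbach_alpha(ratings)
credibility = [sum(row) / len(row) for row in ratings]  # per-participant index
```

With consistent ratings like these, alpha comes out well above the .899 floor reported for the actual stories.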
Article classification was measured by asking the following question: "Out of the articles you read, did you think that any of them were completely untrue, or 'fake'?" Participants saw a short stem describing each of the articles (e.g., "the article about the implantable battery"), and chose "Yes", "No", or "Maybe" for each of the four articles. The raw distributions of these judgments, per story, by warning condition, can be seen in Table 1. In order to gauge the impact of warnings and of visual attention to display characteristics on the ability to correctly classify these stories, the raw detection variables were recoded into four new variables. Fake news detection was computed by summing the "Yes" responses to the two false news stories; real news detection was computed by summing the "No" responses to the two true news stories; fake news misclassification was computed by summing the "No" responses to the two false news stories; and real news misclassification was computed by summing the "Yes" responses to the two true news stories. These variables were rescaled from their initial 0-to-2 scale to a 0-to-10 scale.
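The recoding described above can be sketched as follows. The story keys, function name, and dict format are illustrative; "Maybe" responses count toward neither detection nor misclassification.

```python
def classification_scores(judgments, is_fake):
    """Recode per-story judgments ("Yes" = judged fake, "No", "Maybe") into
    the four composite scores, rescaled from the 0-2 count to a 0-10 scale."""
    fake_detected = sum(1 for s, j in judgments.items() if is_fake[s] and j == "Yes")
    real_detected = sum(1 for s, j in judgments.items() if not is_fake[s] and j == "No")
    fake_missed = sum(1 for s, j in judgments.items() if is_fake[s] and j == "No")
    real_missed = sum(1 for s, j in judgments.items() if not is_fake[s] and j == "Yes")
    # "Maybe" responses are excluded from all four composites
    return tuple(5 * v for v in (fake_detected, real_detected, fake_missed, real_missed))

judgments = {"nuclear": "Yes", "snowfall": "Maybe", "mdma": "No", "battery": "No"}
is_fake = {"nuclear": True, "snowfall": True, "mdma": False, "battery": False}
scores = classification_scores(judgments, is_fake)  # (5, 10, 0, 0)
```

Note that because "Maybe" is dropped, detection and misclassification scores for the same story pair are not mirror images of each other.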
Visual attention variables. Several visual attention variables were calculated using a common process. Each was based on participants' total visit duration, or dwell time, within designated areas of interest. Total visit duration reflects the sum, in seconds (to the nearest hundredth), of participants' gaze "visits" within a given area. Each visit consists of the time from a participant's first new gaze fixation within an area to the first fixation outside of that area. Rectangular areas of interest were defined in the eye-tracking analysis software to encompass the selected content, with a padding area of an additional 5 pixels added to all four sides of the rectangle to account for peripheral attention and calibration errors.
For attention to news identifier elements, total visit durations for the areas of interest covering the headline, byline, and publication date were summed. For attention to internal links and external links, visit durations for the areas of interest containing those elements were summed.
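As a sketch, the dwell-time computation described above could be implemented as follows. The fixation record format and the convention of ending a visit at the last in-area fixation's offset are assumptions for illustration, not details of the Tobii analysis software.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float      # gaze position, pixels
    y: float
    start: float  # fixation onset, seconds
    end: float    # fixation offset, seconds

def total_visit_duration(fixations, aoi, pad=5):
    """Total visit duration in a rectangular AOI (left, top, right, bottom),
    expanded by `pad` pixels on each side. A visit is assumed to run from the
    first in-area fixation's onset to the last in-area fixation's offset."""
    left, top, right, bottom = aoi
    left, top, right, bottom = left - pad, top - pad, right + pad, bottom + pad
    total = 0.0
    visit_start = last_end = None
    for f in fixations:
        inside = left <= f.x <= right and top <= f.y <= bottom
        if inside:
            if visit_start is None:
                visit_start = f.start   # new visit begins
            last_end = f.end
        elif visit_start is not None:
            total += last_end - visit_start  # first fixation outside ends the visit
            visit_start = None
    if visit_start is not None:
        total += last_end - visit_start
    return round(total, 2)  # nearest hundredth of a second

fixations = [
    Fixation(150, 120, 0.00, 0.20),  # inside a headline AOI
    Fixation(200, 130, 0.25, 0.50),  # still inside: same visit
    Fixation(400, 400, 0.60, 0.90),  # outside: visit ends
    Fixation(150, 120, 1.00, 1.30),  # back inside: new visit
]
dwell = total_visit_duration(fixations, aoi=(100, 100, 300, 150))  # 0.8
```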

Results
RQ1 sought to examine the effect of forewarning on classification of fake news stories. A series of independent-samples t-tests (see Figure 1) showed that participants receiving the warning had significantly higher false news detection (M = 3.54, SD = 3.12) than those who did not (M = 0.63, SD = 1.69), t(35.41) = -4.03, p < .001. The warning also significantly decreased fake news misclassification, with warned participants classifying false stories as true less often (M = 3.30, SD = 3.81) than those who were not warned (M = 6.46, SD = 3.45), t(46) = 2.98, p < .01. With regard to real stories, there was no effect of the warning on participants' ability to correctly identify real news stories, t(41.04) = 1.41, ns. However, warned participants were more likely to misclassify true news stories as false (M = 3.30, SD = 3.19) than unwarned participants (M = 1.67, SD = 2.41), t(46) = -2.05, p < .05.
To further examine the effects of forewarning on story classification, we tested whether the forewarning had different effects on story-level credibility ratings for the fake news stories and for the real news stories. An independent-samples t-test showed that participants who received the fake news warning rated the snowfall story as less credible (M = 3.63, SD = 1.25) than those who were not warned (M = 4.79, SD = 0.95), t(46) = 3.64. The same effect held for the nuclear bomb story, with warned readers rating it as less credible (M = 4.34, SD = 1.34) than unwarned participants (M = 5.05, SD = 1.04). There were no significant differences in participants' credibility ratings for the true news stories. Thus, H1 was supported.
RQ2 sought to examine the extent to which readers paid attention to several categories of design cues on the page: identifier cues (byline and timestamps), internal links to other articles on the site, and sponsored advertising widgets containing links to external content. As shown in Table 2a, a majority of participants spent some time viewing identifier cues, internal links, and external links on each of the four story pages. Across all four articles, attention to identifier elements was less likely (in terms of percentage of respondents viewing) and shorter in duration than attention to external article links or internal article links (see Tables 1a and 1b).

Figure 1. Effects of Warning on Accurate Classification of Article Veracity. Note: All variables are on a 0-10 scale, in which 0 represents zero articles in the category, 5 represents one article, and 10 represents both articles. The responses to corresponding categories (e.g., detected fake and misclassified fake) are not inverses of each other, due to "Maybe" responses being excluded from both categories.

RQ3 sought to examine the relationship between visual attention to design elements and the ability to detect fake news articles. To examine this, two binary logistic regressions were conducted to predict correct recognition of each of the false news stories as fake. Prior to analysis, recognition of these stories was recoded as a dichotomous variable, with "Maybe" responses coded as representing lack of recognition of the story as false. Overall, 12.2 percent (n = 6) of participants correctly identified the nuclear story as false, and 28.6 percent (n = 14) recognized the Montana story as false.
To examine the role of visual attention to design elements in recognition of the nuclear story, a binary logistic regression was conducted with dichotomous recognition of the story as false as the outcome variable, and attention to identifier elements, internal links, and external links as predictors. The overall model was not significant, χ²(3) = 5.47, ns, Nagelkerke R² = .201. Additionally, none of the three variables was a significant predictor of article classification. Attention to internal links approached statistical significance, b = .371, Wald = 2.85, Exp(B) = 1.45, p < .10.
A similar binary logistic regression measured the influence of visual attention to design elements on recognition of the snowfall story as false. The overall model was significant, χ²(3) = 23.7, p < .001, Nagelkerke R² = .550. The results showed that both visual attention to identifier elements, b = -4.37, Wald = 4.39, p < .05, and visual attention to internal links, b = 2.03, Wald = 5.50, p < .05, were significant predictors of correct identification of the article as false news. Specifically, participants were more likely to correctly identify the article as false news if they paid less attention to article identifier elements (Exp(B) = .013) and more attention to internal links to other articles on the site (Exp(B) = 7.60).
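As a quick arithmetic check, the odds ratios reported for this model follow from the regression coefficients via Exp(B) = e^b:

```python
import math

# Converting the reported logistic regression coefficients to odds ratios.
identifier_odds = math.exp(-4.37)  # attention to identifier elements
internal_odds = math.exp(2.03)     # attention to internal links

# identifier_odds ≈ 0.013; internal_odds ≈ 7.6
```

That is, each unit of additional attention to identifier elements multiplied the odds of correct identification by roughly .013, while each unit of additional attention to internal links multiplied them by roughly 7.6.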

Discussion
The findings of this study shed some light on the processes through which news consumers view and evaluate news, real and false, from unfamiliar sources, although they also raise several questions that should be addressed by ongoing research in this critical area of scholarship. Of greatest import, of course, is that a short warning that simply makes the existence of science misinformation more salient can have strong effects on users' ability to identify "fake" news stories. Across the two fake news stories, readers who received the warning were more than three times as likely to classify the stories as fake. This effect extended beyond the simple dichotomous classification measure to participants' ratings of article credibility: while receipt of the warning led to significantly lower perceptions of credibility for the two false news articles, it did not have the same effect on the real news articles.
While the study also sought to provide insight into the processes by which consumers read and evaluate the veracity of article-based stories, the results are somewhat less clear. Detailed analysis of the relationship between attention to specific visual elements in the two false news stories and participants' classification of their veracity showed differences between the two stories. These findings may be explained, in part, by the low number of participants categorizing the "nuclear" story as fake; with only 12 percent of participants successfully classifying the story, it was harder for specific factors to explain variance in the likelihood of correct identification. On the other hand, for the "snowfall" story, which had a more symmetric distribution of classifications, attention to article identifier elements significantly predicted correct classification, as did attention to internal story links.
The visual attention findings suggest that participants who apply greater scrutiny to evaluating design features of articles may be more likely to correctly classify them. This has implications for the design of interventions to curb the influence of misinformation; while the idea that improving digital media literacy will be effective against "fake news" has recently seen some backlash, these findings suggest that there may be specific evaluation skills that, if taught to consumers, will help them avoid being duped by misinformation. Future studies in this area would do well to consider the role that non-article-specific elements on the page, such as headline links to other articles published on the same site, play in readers' evaluations of the site's credibility.
The overarching knowledge of, and fluency in, the use of information encountered in digital environments is known as metaliteracy, a concept that combines media literacy, digital literacy, visual literacy, cyber literacy, and information fluency (Mackey & Jacobson, 2011). Metaliteracy is conceived as a self-referential framework that can be applied to the way people process information in digital environments. It was developed as a way to unify the many kinds of literacies one must possess in order to effectively produce and share information in participatory environments (Mackey & Jacobson, 2011), such as social media platforms.
Metaliteracy does not focus specifically on misinformation, and it does not explain the findings of the present study; rather, it provides a lens through which scholarship regarding contemporary fake news can be viewed. The bodies of knowledge combined under its umbrella, concerning how message elements are produced and used to persuade, relate directly to the technological, cognitive, and communication-based aspects of misinformation in the digital age. Metaliteracy provides several opportunities to conceptually organize the study of the spread and subsequent effects of misinformation, specifically as it relates to the visual elements of digital news content.
The present research has several limitations that govern its generalizability, and which should be addressed in future research on the detection of article-style misinformation. First, the present sample represents only a narrow slice of online news consumers in terms of age, educational background, and gender. It may be the case that, in the online realm, university students respond to different visual cues than would an older sample with more diverse educational backgrounds. That said, if the students' educational status enhanced their correct classification of stories, the results are rather discouraging, given the low recognition rates observed here. Future research would be well served not only to examine these findings with an older population, but also to identify and measure specific individual differences in knowledge and news consumption habits that may influence accuracy in identifying misinformation.
A second category of limitations comes from the stimuli used in the study. While we attempted to provide some stimulus sampling, constraints in terms of demands on our research participants limited us to four articles, with only two representing each of the false/true categories. As such, specific characteristics of these articles may have influenced the results to a greater degree than they would in a study that utilized a larger sample of stimuli. Likewise, all four of the articles in the present study avoided controversial science issues such as the safety of GMO foods, or the role of human actions in shaping climate change. These topics, on which many people hold strong opinions, are a ripe source of clicks for misinformation purveyors, and bring to bear additional issues, like the role of motivated reasoning in false news classification.
In conclusion, we hope that this study can serve as a building block for future work evaluating the role of design in the recognition and evaluation of misinformation. Given the ease with which news design conventions and distribution channels can be weaponized, much additional work is needed to equip consumers and platform designers with the tools they need to minimize the chances that consumers will be deceived by online misinformation.