Introduction

The emergence of social networking sites (SNSs; e.g., Facebook) and the wide availability of user-provided content online have influenced the way people become informed about sociopolitical and health-related issues and, in turn, form their opinions, world views, and narratives. This disintermediated environment has revolutionized the availability of information and facilitated the rapid and effective spread of misinformation (Brennen et al., 2020), which diffuses faster and reaches broader audiences than correct information and fact-checks (del Vicario et al., 2016; Vosoughi et al., 2018). Facebook’s fact-checking efforts, for example, did little to prevent COVID-19 conspiracies from being shared widely in private groups on the platform (Scott, 2020). Unfortunately, COVID-19 misinformation has proliferated, especially online (e.g., on mainstream social media), with examples ranging from damaging health advice, such as claims that ingesting bleach or coconut oil kills the virus, to false conspiracy theories that the virus was bioengineered.

Amid a global pandemic, COVID-19 misinformation spreading on SNSs has already led to adverse consequences, including decreased adherence to preventative health behaviors and vaccine hesitancy driven by false information and quickly spreading conspiracy theories (e.g., that 5G cellular technology causes COVID-19 or that the vaccine contains a government-controlled microchip; Roozenbeek et al., 2020). Misinformation about the virus has also been linked to mass poisonings, mob attacks (Depoux et al., 2020), and vandalism (Spring, 2020). For example, false conspiracy theories that 5G masts cause or exacerbate COVID-19 symptoms led people to set fire to more than 50 phone masts in the UK (BBC News, 2020), which aligns with findings that belief in the 5G conspiracy is linked to violent intentions (Jolley & Paterson, 2020).

Importantly, older adults are susceptible to misinformation (Allen et al., 2020; Grinberg et al., 2019; Guess et al., 2019) and at increased risk for COVID-19 complications (World Health Organization, 2020). For example, Guess et al. (2019) found that being older than 65 was the strongest predictor of sharing fake political news online. However, recent studies and reports focusing on COVID-19 misinformation find the opposite pattern of results. For example, Roozenbeek et al. (2020) examined susceptibility to COVID-19 misinformation in five countries: the UK, Ireland, Spain, the US, and Mexico. The authors found that being older was significantly associated with lower susceptibility to misinformation in all countries except Mexico. They reasoned that older adults might allocate more cognitive resources to evaluating the truthfulness of COVID-19-related information because of their vulnerability to disease-related complications. However, even if older individuals are less susceptible to COVID-19 misinformation, they may still share more fake news for motivations and reasons other than accuracy (e.g., inattention, political gain, and social consensus), and no studies to date have examined online sharing behaviors for COVID-19 content in younger versus older adults.

COVID-19 misinformation may cause the public to turn to harmful remedies and possibly either overreact (e.g., hoarding toilet paper and other goods) or, more dangerously, underreact (e.g., engage in risky behavior and inadvertently spread the virus to vulnerable populations; Jolley & Paterson, 2020). Consequently, it is crucial to understand what factors may serve as antecedents to people’s belief in false information and, in turn, people’s willingness to disseminate misinformation through the internet (e.g., SNSs).

Several studies have begun to explore systematic and individual factors involved in susceptibility to, and the proliferation of, COVID-19 misinformation online. One line of work focuses on the “inattention account,” which suggests that attention to accuracy may be overshadowed by other (often social) motives in the context of social media sharing (Brady et al., 2020; Kümpel et al., 2015). In other words, external motives such as the desire to attract and please followers/friends (Marwick & Boyd, 2011), signal one’s group membership (Donath & Boyd, 2004), or engage with emotionally or morally evocative content (Brady et al., 2017) may distract people from attending to headlines’ veracity when deciding what to share. Thus, even people with high regard for the truth may share inaccurate headlines because they fail to consider accuracy when making sharing decisions (Pennycook et al., 2018, 2020, 2021; Van Bavel et al., 2020).

In addition to age, demographic and psychosocial factors associated with higher susceptibility to misinformation include gender, ethnicity, education, cognition, and health literacy (e.g., Freeman et al., 2020; Goertzel, 1994; Roozenbeek et al., 2020; Schaeffer, 2020; Van Prooijen et al., 2018). For example, a growing literature reports that education (e.g., Georgiou et al., 2020; van Prooijen, 2017) and both basic (e.g., numeracy skills; Roozenbeek et al., 2020) and higher-order (e.g., analytical thinking) aspects of health literacy play important roles in the processing of misinformation (e.g., Bago et al., 2020; Bronstein et al., 2019; De Keersmaecker et al., 2017; Guess et al., 2019; Kahan et al., 2012). Research also suggests that health literacy relates to COVID-19 knowledge (e.g., symptoms, risks), information-seeking skills, prevention intentions, and prevention behaviors (Babicz et al., 2021).

Likewise, cognition may be an especially relevant antecedent for susceptibility and sharing COVID-19 misinformation. For example, false memories induced by a misinformation paradigm have been negatively correlated with measures of intelligence, perception, memory, and face judgment (Zhu et al., 2010). Further, previous cognition research has theorized that misinformation susceptibility is associated with memory retrieval failure (Ecker et al., 2010; Swire et al., 2017). For example, misinformation effects can result from source confusion or misattribution (Johnson et al., 1993). Additionally, misinformation effects can stem from a failure of strategic monitoring processes, such as recollecting the information’s contextual details (e.g., Brown, 2006; Yonelinas & Jacoby, 2012; Zimmer & Ecker, 2010). Thus, it is reasonable to assume that neurocognition may play an important role in the likelihood of sharing COVID-19 misinformation online. Neurocognitive processes may be especially important given recent findings of associations between neurocognitive ability and COVID-19 knowledge (i.e., knowledge of the symptoms, preventative measures, and associated health factors related to COVID-19) after controlling for education, estimated verbal IQ, and health literacy (Babicz et al., 2021).

The growing body of literature on the proliferation of COVID-19 misinformation online has begun to address important questions, including individual differences in misinformation susceptibility, how misinformation spreads in online social networks, and which interventions can help boost psychological immunity to misinformation (e.g., see van der Linden, 2022 for a review). However, although both scientific and public interest in misinformation about COVID-19 is at a peak, no study has systematically examined these questions in vulnerable populations, such as older adults, who are at much greater risk of hospitalization and death after a COVID-19 diagnosis (World Health Organization, 2020) and may also be more susceptible to misinformation (Allen et al., 2020; Grinberg et al., 2019; Guess et al., 2019). The present study aimed to investigate possible age differences in COVID-19 headline accuracy discernment and in the online sharing of COVID-19 misinformation in older and younger adults. Moreover, we aimed to explore potential antecedents of the likelihood of sharing misinformation online, including global cognition, health literacy, numeracy, and verbal IQ. Lastly, we aimed to examine whether the likelihood of sharing COVID-19 misinformation on social media would be associated with COVID-19 knowledge.

Method

Participants

Recruitment and eligibility

The study was conducted in compliance with the Institutional Review Board of the University of Houston, and the data were gathered between August 9, 2021, and September 17, 2021. A total of 185 persons from 31 states across the U.S. were recruited via word of mouth and postings on social media, 102 of whom (77.5%) completed the study procedures (see Fig. 1). Characteristics of the final study sample (N = 102) are displayed in Table 1. Among eligible participants, completion rates by race/ethnicity were: Asian = 78%, Black = 50%, Hispanic = 25%, and White = 57%. Completion rates were higher among Asian and White participants compared to Hispanic participants, but there were no significant differences in age, sex, education, or number of medical comorbidities (ps > .05). Interested participants completed an online screening survey, providing digital informed consent and confirming that they: 1) were aged 18 to 35 years or 50 years or older; 2) were minimally proficient in English; 3) resided in the United States; 4) used at least one social media platform at least once per week and for at least 1.5 hours per week; and 5) had not been diagnosed with any major neurological (e.g., seizure disorder) or psychiatric (e.g., psychosis) conditions. The study posed no more than minimal risk of harm to participants; the potential risks were mild fatigue and frustration with the standard clinical tests of cognition. Participants were informed during the consent process that they could discontinue at any time without penalty and were provided contact information for the investigators, the university IRB, and a mental health hotline in case any part of the study was distressing.

Fig. 1
figure 1

Study flow diagram

Table 1 Sociodemographic information, psychological factors and primary outcome measures for younger and older adults

Study design

Given the limited resources for a dissertation project, we used a cross-sectional discrepant age-group design to maximize our power to detect age-related differences. Such designs are commonly used in studies of cognitive aging. Participants were recruited into either a younger (age 18–35 years) or older (age 50+ years) study group. The inclusion of younger individuals up to age 35 years in the younger group allowed us to reach beyond college-aged adults and diversify the educational attainment of the sample. The inclusion of older adults as young as age 50 was in recognition of emerging data showing that cognitive aging may be present during middle age (Lindenberger, 2014) and aligns with initiatives focusing on brain health in mid-life (e.g., Cognitive Health and Older Adults n.d.).

Materials and procedure

Social media headline-sharing experiment

The current study is a planned secondary analysis of data previously reported by Matchanova (2023). The original study included an experiment that was modeled after the “News-sharing task” by Pennycook et al. (2020) and involved a simple attention manipulation at the start of the task (i.e., judging the accuracy of a non-COVID-19-related headline). Randomization was stratified by age group, with all participants randomly assigned to a control condition (n = 50) or an experimental condition (n = 52). As reported in Matchanova (2023), there was no effect of the study condition on sharing intentions for either younger or older adults (ps > .05) and these null findings were accompanied by small effect sizes. As such, the current study collapsed participants across study condition groups and focused on age effects.

Participants completed the social media headline-sharing task, in which they viewed 15 false and 15 true news headlines relating to COVID-19 in random order. The headlines were presented in the format of Facebook posts: a picture accompanied by a headline and a lead sentence. Each participant was asked about their likelihood of sharing each headline on social media: “If you were to see the above on social media, how likely would you be to share it? (i.e., through a status update, direct messaging a friend, Facebook group, text, tweet, etc.)”, with answer choices provided on a 6-point scale from 1 (extremely unlikely) to 6 (extremely likely). The primary outcomes were the continuous summed sharing-likelihood scores for accurate (Cronbach’s α = .95) and inaccurate (Cronbach’s α = .96) headlines, computed separately; possible scores for each ranged from 15 to 90, with higher scores indicating a greater likelihood of sharing.
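For illustration, the following sketch shows how such summed sharing-likelihood scores and their internal consistency (Cronbach’s α) could be computed from item-level responses; the simulated data, array shapes, and variable names are hypothetical and do not correspond to the study data.

```python
import numpy as np

def sum_score_and_alpha(items):
    """items: (n_participants, n_items) array of 1-6 sharing-likelihood ratings
    for one headline type (e.g., the 15 false headlines).
    Returns each participant's summed score and Cronbach's alpha."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    summed = items.sum(axis=1)                      # possible range: n_items*1 to n_items*6
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = summed.var(ddof=1)                  # variance of the summed score
    alpha = (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)
    return summed, alpha

# Hypothetical example with simulated (random, hence unrealistically low-alpha) ratings
rng = np.random.default_rng(0)
false_items = rng.integers(1, 7, size=(102, 15))    # 102 participants x 15 false headlines
false_scores, false_alpha = sum_score_and_alpha(false_items)
```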

As described by Pennycook et al. (2020), some evidence supporting the validity of this self-reported sharing-intentions measure comes from Mosleh et al. (2020). The false headlines were deemed false by authoritative sources (e.g., fact-checking sites such as snopes.com and factcheck.org, health-expert sources such as mayoclinic.com, and credible science websites such as www.livescience.com). The true headlines were drawn from reliable, politically neutral mainstream media sources as ranked by a media bias chart (e.g., AP, Reuters, UPI; Otero, 2018). Two research assistants independently fact-checked the headlines for accuracy. The false and true headlines were matched on reading level and word count so that they were as linguistically comparable as possible (e.g., “Coconut oil’s history in destroying viruses, including Coronaviruses” and “Coronavirus poses a tough challenge for economic policymakers.”).

Headline accuracy post-task

After completing the main task and questionnaires, all participants were shown the same 15 false and 15 true news headlines relating to COVID-19 as in the main task and asked to rate the accuracy of each headline. Participants were asked: “To the best of your knowledge, is the above headline accurate?” and were given the following response options: ‘yes’ or ‘no’. Possible scores ranged from 0 to 30; scores in the current sample ranged from 11 to 27 (Cronbach’s α = .60).

Attitudes on the importance of accuracy in sharing decisions

In line with the methodology used by Pennycook et al. (2020) in Study 2, participants were asked the following question after completing the main task: “When deciding whether to share a piece of content on social media, how important is it to you that the content is...”. They were provided with a response grid in which the columns were labeled ‘not at all’, ‘slightly’, ‘moderately’, ‘very’, and ‘extremely’, and the rows were labeled ‘accurate’, ‘surprising’, ‘interesting’, ‘aligned with your politics’, and ‘funny’ (see Pennycook et al., 2021). Items in this block of questions were treated as individual responses (see Fig. 2). Of note, accuracy was rated as the most important factor in sharing decisions, with 93% of the sample reporting that, when deciding whether to share a piece of content on social media, it is very or extremely important that the content is accurate. Only two participants rated the importance of accuracy as ‘slightly important’, and no participants rated it as ‘not at all’ important.

Fig. 2
figure 2

Attitudes Toward Sharing COVID-19 Information on SNSs

Neuropsychological assessment

The participants completed the telephone-based neuropsychological test battery detailed in Matchanova et al. (2021), which has shown evidence of reliability and validity (e.g., Babicz et al., 2021; Thompson et al., in press). Memory was assessed with the Delayed Recall and Recognition Discrimination Index from the Hopkins Verbal Learning Test-Revised (HVLT-R; Brandt & Benedict, 2001) and a four-target, embedded, focal event-based prospective memory task (Beaver & Schmitter-Edgecombe, 2017). Attention was assessed with the Digit Span subtests of the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS-IV; Wechsler, 2008) and Trial 1 of the HVLT-R. Executive functions were assessed with action (verb) fluency (Piatt et al., 1999; Woods et al., 2005), Category Switching from the Delis-Kaplan Executive Functions Scale (D-KEFS; Delis et al., 2001), and Part B of the Oral Trail Making Test (Mrazik et al., 2010; Ricker & Axelrod, 1994). A global composite was constructed from the normatively adjusted z-scores derived from these measures (α = .732). Participants also completed the Information subtest of the WAIS-IV, which assesses general fund of knowledge (Wechsler, 2008).

Health literacy assessment

Numeracy

Participants completed two measures of numeracy. The seven-item Expanded Numeracy Scale (ENS; Lipkus et al., 2001) queried participants about health-related percentages and proportions (sample range = 0–7). The Arithmetic subtest of the WAIS-IV assessed general mental arithmetic (sample range = 8–22). The Arithmetic and ENS raw scores were strongly correlated (r = .55) and were composited using an average sample-based z-score.
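The same sample-based z-score compositing approach is used for the health literacy and COVID-19 knowledge composites described below. A minimal sketch, with hypothetical variable names and simulated raw scores, is:

```python
import numpy as np

def sample_zscore(x):
    """Standardize raw scores against the sample mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def composite(*raw_scores):
    """Average of sample-based z-scores across measures (e.g., ENS and WAIS-IV Arithmetic)."""
    return np.mean([sample_zscore(s) for s in raw_scores], axis=0)

# Hypothetical usage with simulated raw scores for 102 participants
rng = np.random.default_rng(0)
ens_raw = rng.integers(0, 8, 102)          # ENS scores, 0-7
arithmetic_raw = rng.integers(8, 23, 102)  # Arithmetic raw scores, 8-22
numeracy_composite = composite(ens_raw, arithmetic_raw)
```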

Self-efficacy

Participants also completed two questionnaires on health literacy. On the Brief Health Literacy Screening Tool (Chew et al., 2008), participants rated the extent to which they agreed with three statements about their health literacy on a five-point scale. Higher scores (sample range 7–15) indicate higher perceived health literacy (α = .739). On the eight-item Electronic Health Literacy Scale (Norman & Skinner, 2006), participants rated their knowledge and perceived skills at finding, evaluating, and applying electronic health information on a five-point scale. Higher scores (range = 8–40) indicate greater perceived efficacy using the Internet for health-related purposes (α = .884). The total scores were adequately correlated (r = .43) and were composited using an average sample-based z-score.

COVID-19 assessments

Participants completed several measures of COVID-19 knowledge and prevention, which were drawn from Babicz et al. (2021). First, participants completed a 16-item yes/no measure assessing the extent to which they used scientific quality-based strategies (e.g., “Check that the domain name includes ‘.gov,’ ‘.org,’ or ‘.edu’”) when seeking out information on COVID-19 on the internet (α = .805). Second, we calculated a COVID-19 knowledge composite from sample-based z-scores derived from participants’ performance on a 12-item general COVID-19 knowledge questionnaire and two measures of participants’ free recall of current CDC information related to COVID-19 symptoms and prevention behaviors. Prior work supports a single-factor structure for these COVID-19 knowledge measures (Babicz et al., 2021). Third, participants completed eight items indicating their intention to adhere to current CDC-recommended COVID-19 prevention measures (e.g., “You intend to follow the preventative guidelines in the next few weeks”). Finally, participants completed three items indicating their adherence to CDC-recommended COVID-19 prevention behaviors (e.g., “You avoid crowds and poorly ventilated spaces”).

Other study assessments

Participants completed a demographic questionnaire gathering information about sex, age, race/ethnicity, and education level. They also completed two measures of current affect. The 5-item Geriatric Anxiety Inventory-Short Form (Byrne & Pachana, 2011) assessed anxiety symptoms on a 5-point scale (α = .730), and a yes/no version of the 7-item Geriatric Depression Scale (Broekman et al., 2011) assessed current symptoms of depression (α = .706). Social media/networking site use was measured with a modified version of the Social Network Sites (SNSs) Usage Questionnaire (Shi et al., 2014), which includes 13 questions about frequency of use and sharing behaviors on SNSs. General internet use was measured using an approach outlined and supported by Baggio et al. (2017): participants were asked three questions about how often they used the internet in the previous 30 days, how much time they spent on the internet on an average weekday, and how much time they spent on the internet on an average weekend day.

Data analyses

Prior to conducting analyses, the data were visually inspected and screened to ensure accuracy and to identify outliers and other abnormal data points (van den Broeck et al., 2005). All missing-value, correlation, and MANOVA analyses were conducted using JMP (version 16). Determination of the appropriate sample size for each proposed analysis was performed using G*Power (Faul et al., 2009). Because of the relatively small sample size and the use of multiple outcome measures and covariates in the model, the critical alpha was set to .01 for all statistical analyses to control for Type I error. The normality assumption for the MANOVA was tested using the Shapiro-Wilk test, and the homogeneity of variance assumption was tested using the Brown-Forsythe test (Brown & Forsythe, 1974).
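For readers who wish to reproduce these assumption checks, the sketch below (with simulated data and hypothetical variable names) applies the Shapiro-Wilk test within each age group and the Brown-Forsythe test, i.e., Levene’s test centered on the median, across groups using SciPy.

```python
import numpy as np
from scipy import stats

def check_assumptions(scores_younger, scores_older):
    """Shapiro-Wilk normality tests within each age group and the
    Brown-Forsythe homogeneity-of-variance test across groups."""
    return {
        "shapiro_younger": stats.shapiro(scores_younger),
        "shapiro_older": stats.shapiro(scores_older),
        "brown_forsythe": stats.levene(scores_younger, scores_older, center="median"),
    }

# Hypothetical example with simulated summed sharing scores for two groups of 51
rng = np.random.default_rng(0)
print(check_assumptions(rng.normal(35, 12, 51), rng.normal(33, 12, 51)))
```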

A repeated-measures multivariate analysis of variance (MANOVA) was used to evaluate the main and interactive effects of age and headline accuracy on sharing intentions for accurate and false COVID-related information. Given the small sample size and the large number of potential covariates, a data-driven confound model guided covariate selection to avoid over-fitting the final model (e.g., Field-Fote, 2019). Specifically, only variables in Tables 1 and 2 that were significantly and independently related to each of the variables in the model (i.e., age, headline accuracy, and sharing intentions for accurate and inaccurate COVID-related information) at a critical alpha of .05 were included. We then conducted several planned analyses of the associations between the likelihood of sharing COVID-19 misinformation online and foundational factors (i.e., estimated verbal IQ, global cognition, and numeracy), health literacy, and COVID-19-related prevention knowledge and behaviors.
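Although the study analyses were run in JMP, an analysis of this general form can be sketched in Python with statsmodels. The example below uses simulated stand-in data and hypothetical column names, treats the two sharing scores as a multivariate outcome with age group, headline accuracy, and the selected covariates as predictors, and omits the explicit within-subjects contrast of the full repeated-measures model.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated stand-in data; in the study, each row would be one participant with
# summed sharing scores, age group, headline accuracy, and the selected covariates.
rng = np.random.default_rng(1)
n = 102
df = pd.DataFrame({
    "share_true": rng.normal(45, 12, n),
    "share_false": rng.normal(30, 12, n),
    "age_group": rng.choice(["younger", "older"], n),
    "accuracy_pct": rng.uniform(40, 95, n),
    "gender": rng.choice(["woman", "man"], n),
    "race_ethnicity": rng.choice(["White", "Black", "Hispanic", "Asian"], n),
})

# Multivariate tests (Wilks' lambda, Pillai's trace, etc.) of each predictor
# on the pair of sharing scores
formula = "share_true + share_false ~ age_group + accuracy_pct + gender + race_ethnicity"
print(MANOVA.from_formula(formula, data=df).mv_test())
```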

Table 2 Internet and social media use for younger and older adults

Results

Main and interactive effects of age and accuracy on group differences in sharing intentions for accurate and false COVID-related information

Descriptive statistics for participant performance on the main task are presented in Table 1. Table 3 presents the results of a repeated-measures MANOVA examining sharing intentions for false and accurate information (i.e., two within-subject levels, quantified by continuous summed sharing-likelihood scores for accurate and for inaccurate headlines) as a function of age group (i.e., younger versus older adults; between subjects) and headline accuracy (i.e., continuous percent-accuracy score; between subjects). Gender and ethnicity/race were the only variables in Tables 1 and 2 that met the covariate selection criteria detailed above and were therefore included as covariates in the model. There was no main effect of age (p = .099), but there was a significant interaction between actual COVID-19 headline accuracy and the likelihood of sharing (p < .001; see Table 3), such that accuracy discernment was more strongly related to sharing intentions for false headlines (r = −.64) than for true headlines (r = −.43).

Table 3 Repeated measures MANOVA results for age, headline accuracy and sharing intentions for accurate and false COVID related information

Correlates of sharing intentions for false COVID-related information

The full correlation matrix is displayed in Table 4. Among younger adults, a higher likelihood of sharing false COVID-19 headlines was associated with lower verbal IQ, global cognition, self-reported health literacy, numeracy skills, and the COVID-19 knowledge composite (rs = −.372 to −.643; ps < .01). Among older adults, a higher likelihood of sharing false COVID-19 headlines was associated with lower verbal IQ and numeracy skills (rs = −.391 to −.494; ps < .01). The magnitudes of the correlations between sharing false COVID-19 headlines and the other study variables were compared across the younger and older study groups using Fisher’s r-to-z transformation. The relationship between sharing false COVID-19 headlines and global cognition was significantly stronger (z = −2.75, p < .01) in the younger (rs = −.643) than in the older (rs = −.200) group. No other between-group differences were observed (all ps > .05).
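The between-group comparison reported above uses the standard Fisher r-to-z test for two independent correlations. A short sketch follows; the group sizes of 51 are an assumption (roughly half of the N = 102 sample per age group), and with the reported correlations it recovers a z value close to −2.75.

```python
import numpy as np
from scipy import stats

def compare_independent_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transformation of each correlation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                  # two-tailed p value
    return z, p

# Global cognition vs. sharing false headlines: younger r = -.643, older r = -.200
# (group sizes of 51 assumed for illustration)
z, p = compare_independent_correlations(-0.643, 51, -0.200, 51)  # z ~ -2.75, p ~ .006
```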

Table 4 Correlates of Sharing Intentions for False COVID-19 Related Information in Younger and Older Adults

Discussion

In the setting of a global pandemic, COVID-19 misinformation proliferating online has led to profound health-related and societal consequences. Debunking misinformation has been largely ineffective because corrections may actually increase the belief in the original misinformation (i.e., familiarity backfire effect; Swire et al., 2017; Lewandowsky et al., 2012). Moreover, fact-checking on SNSs has failed to keep up with the amount of misinformation proliferating online, especially during the pandemic (del Vicario et al., 2016; Vosoughi et al., 2018; Scott, 2020). Thus, other approaches beyond debunking have been explored. One of these approaches has been driven by the “inattention account” and involves subtle prompts that nudge people to consider accuracy. For example, Van Bavel et al. (2020) suggested that SNSs use this preventative approach by periodically asking users to rate the accuracy of randomly selected posts. The current study examined whether headline accuracy judgments are important contributors to sharing false versus accurate COVID-19 headlines on SNSs in both older and younger adults.

In line with our hypotheses, we observed that headline accuracy discernment was negatively associated with sharing likelihood for both accurate and false information, with medium to large effect sizes in the full sample. In other words, across both age groups, the participants who were most accurate in discerning true COVID-19-related headlines from false ones were also the most hesitant to share COVID-19 information on SNSs. Notably, accuracy discernment was more strongly related to sharing false headlines than accurate headlines, suggesting that accuracy is even more important for the spread of misinformation online. This research further highlights an important avenue by which social media fosters the spread of misinformation. In addition to the phenomena of echo chambers and filter bubbles (Bakshy et al., 2015; Stewart et al., 2019), social media platforms may discourage people from reflecting on accuracy (Goldhaber, 1997). For example, the ‘share’ feature on SNSs such as Facebook requires little more from the individual wanting to spread information than the motivation to share it (e.g., Acerbi, 2016). Encouragingly, in line with findings from Pennycook et al. (2021), the current sample rated accuracy as the most important content dimension (i.e., more important than whether the content is surprising, interesting, politically aligned, or funny) when considering whether to share a COVID-19 headline (see Fig. 2). This finding did not differ by age and provides further support for the inattention-based account over the preference-based account of sharing information online (D’Ancona, 2017; Davies, 2016; Hochschild & Einstein, 2016; Keyes, 2004; McIntyre, 2015; Petersen et al., 2018).

The current study also aimed to examine whether older age is an important factor in the sharing of COVID-19 misinformation online. Older adults comprise a particularly vulnerable population due to their increased risk for COVID-19-related complications and their higher susceptibility to misinformation and fake news dissemination on SNSs (Allen et al., 2020; Grinberg et al., 2019). For example, Guess et al. (2019) found that being older than 65 was the strongest predictor of sharing fake political news online. In contrast, a recent report focusing specifically on susceptibility to COVID-19 misinformation in five countries showed that being older was significantly associated with lower susceptibility to misinformation in four of the five countries, including the U.S. (see Roozenbeek et al., 2020). The results of the current study, however, did not show a significant effect of age on the likelihood of sharing false or accurate information online.

There are several possible reasons why the results of this study diverge from the literature to date. First, the current study may not have been optimally powered to detect age effects. Although the relatively small sample size may have increased Type II error, the study was powered to detect the medium-to-large effect sizes that are consistent with the literature showing age effects in misinformation sharing (e.g., Allen et al., 2020; Grinberg et al., 2019; Roozenbeek et al., 2020). Second, the older adult cut-off in the present study was set at 50 years, whereas much of the literature showing significant age effects in misinformation susceptibility and sharing has focused on adults aged 65 and older. Third, prior studies may not have considered the potentially confounding sociodemographic, cognitive, or health literacy factors that were included in this study. Finally, the current sample consisted largely of white, fairly well-educated women with a preference for the Democratic party, which differs from the samples included in prior studies of misinformation sharing and may have affected the findings. As such, future studies should examine these age effects in a larger, more nationally representative lifespan sample that is quota-matched to the U.S. population on gender, ethnicity, and years of education.

A second aim of the present study was to explore potential antecedents (i.e., global cognition, health literacy, numeracy, and verbal IQ) of the likelihood of misinformation sharing on SNSs in younger versus older adults. For both older and younger adults, verbal IQ (i.e., general fund of knowledge) and numeracy skills were individually associated with the likelihood of sharing false headlines online. These correlational associations were accompanied by medium to large effect sizes and align with literature showing consistent negative correlations between numeracy skills and misinformation susceptibility (e.g., Roozenbeek et al., 2020). Conceptually, these results suggest that individuals with a lower fund of knowledge and weaker basic numeracy skills may be less accurate in their judgments of headline veracity and, in turn, more likely to share misinformation on SNSs. In other words, verbal IQ and numeracy skills may operate, at least in part, through accuracy discernment. Thus, future studies might examine potential mediating effects of accuracy in the relationships between verbal fund of knowledge and numeracy skills and the likelihood of sharing misinformation on SNSs. Moreover, future studies may focus on interventions aimed at improving numeracy skills in older adults to help control the spread of misinformation online.

For younger adults, global cognition and self-reported health literacy were also significantly associated with the likelihood of sharing false information on SNSs. In fact, the association between global cognition and sharing false information among younger adults was large in magnitude and significantly stronger than the small-to-medium correlation evident in the older sample. These are important preliminary findings because both variables represent potentially modifiable factors that can be addressed with appropriate interventions. Further research should explore how cognition and health literacy interventions affect how (mis)information is received, processed, and shared, and how they can be leveraged to improve resilience against misinformation at a societal level. Lastly, among younger adults, a higher likelihood of sharing COVID-19 misinformation on social media was also associated with lower COVID-19 knowledge at a medium effect size (Spearman’s rho = −.37). The importance of this finding is two-fold. First, this preliminary univariable association suggests that persons with lower knowledge about COVID-19 symptoms and prevention recommendations are also more likely to share inaccurate or misleading COVID-19 information on SNSs, which is consistent with the primary accuracy findings of the present study. Second, within the framework of veridicality, this association in the expected direction provides some support for the ecological validity of the original experiment (see Matchanova, 2023 for details).

The various limitations of the current design and sample have been noted throughout the discussion, and several additional limitations are important to consider. First, the current sample was relatively small and consisted largely of white, fairly well-educated women with a preference for the Democratic party, which limits the external validity and generalizability of the findings to persons with different sociodemographic characteristics. Second, the results are also tempered by the use of a cross-sectional, discrepant age-group design and a convenience sampling approach that did not include a middle-aged group. Therefore, causal inferences cannot be drawn, and these results should be considered exploratory and correlational. Future studies using a longitudinal design that includes a larger sample of individuals across the lifespan are needed to clarify the interplay among age, accuracy, and functional outcomes, such as misinformation susceptibility and sharing on SNSs. Lastly, the current study administered this experiment as part of a larger 1.5-hour telephone-based neurocognitive battery. As discussed in Pennycook et al. (2020, 2021), an advantage of this experimental design is that the manipulation is subtle and not explicitly linked to the main task, which makes demand characteristics or social desirability bias an unlikely driver of any treatment effect. Although numerous steps were taken to minimize these external influences (i.e., having participants complete this task first and online, rather than having the examiner stay on the phone with them), it is still possible that having the examiner call the participant at the start and end of the task increased the risk of demand characteristics or social desirability bias. In other words, the current study may underestimate the rate of sharing misinformation because of the observer effect built into the study design.

Despite these limitations, the findings of the present study have practical relevance. Taken together, these data suggest that knowledge may be a powerful tool in the fight against misinformation. Across these results, the sharing of false information was lowest among individuals with a greater general fund of knowledge, stronger numeracy, and greater health knowledge. Therefore, a broad range of efforts to boost health and science education may be valuable in helping to prevent the spread of misinformation. Future studies may also look specifically at the role of science literacy (e.g., science knowledge) as a buffer against spreading misinformation. These data also suggest that studies are needed to understand the psychology of misinformation acceptance and sharing among younger individuals with lower levels of health knowledge. What factors can facilitate accuracy discernment among persons at the highest risk for sharing misinformation? For example, efforts to simplify health information in educational materials, eHealth interventions (i.e., PCs and tablets with videos and interactive self-help tools), and efforts to improve underlying health literacy, such as numeracy skills and science knowledge, may all help to reduce the spread of COVID-19 misinformation online (Berkman et al., 2011; Jacobs et al., 2016).