Unpacking Gender, Age, and Education Knowledge Inequalities: A Systematic Comparison

Objective. We scrutinize how the three main sources of knowledge inequalities, namely, gender, age, and education, relate to the content, format, and object of the survey items used to measure knowledge. Methods. Using a pooled data set encompassing 106 postelection surveys in 47 countries from the CSES, we perform analyses by stacking the data at the question level. Results. Questions probing familiarity with electoral and partisan politics yield knowledge gaps of greater magnitude. However, our balanced comparison of the three gaps also confirms the peculiarities of the gender gap in knowledge previously portrayed by the bulk of the literature. Conclusion. Surveys aspiring to measure citizens' knowledge about the political world in a valid manner should include items inquiring about different substantive contents, and not only elections or partisan politics as the available postelection surveys around the world currently do. They should also use a closed-ended format with at least four response options, and should diversify the objects of inquiry, so that the cognitive abilities required to answer the questions correctly are diverse and the measurement does not favor one over the others.

Although political knowledge is a crucial resource shaping citizens' ability to participate effectively in politics, the average citizen portrayed by scholars is poorly informed about political institutions, processes, and policies. Substantiating evidence in this direction now extends beyond the United States to Europe and other advanced industrial democracies (Clark, 2017; Curran, Iyengar, and Lund, 2009; Delli Carpini and Keeter, 1996; Howe, 2006; Fraile and Gómez, 2017). What is more, and central to this article, existing research also concludes that political knowledge is unevenly distributed among citizens: men, middle-aged, and highly educated individuals display greater levels of political knowledge than their female, younger, and less educated counterparts in all countries where such questions have been investigated (Burns, Schlozman, and Verba, 2001; Dassonneville and McAllister, 2018; Delli Carpini and Keeter, 1996; Fortin-Rittberger, 2020; Marinova and Anduiza Perea, 2020).
While the gender gap in political knowledge has received the lion's share of scholarly attention, research has been shallower surrounding both age and educational knowledge gaps. Furthermore, because scholarship has treated these different knowledge gaps in relative isolation from one another, we have not yet been able to establish whether there are commonalities between them. This article seeks to redress this imbalance by investigating how different features of survey items designed to measure political knowledge shape the size of the informational gaps typically found around gender, education, and age. More specifically, the article addresses the following research question: Are the three gaps (gender, education, and age) evenly affected by features of survey items, or do certain features only influence some of them? For one, if the way knowledge is measured solely impacts gender, this would provide the most direct empirical evidence corroborating feminist scholarship's claims that conventional measurements of political knowledge reflect a (too) narrow view of the key features of the political realm, which serves to systematically underestimate the level of knowledge of women in comparison to men (Stolle and Gidengil, 2010). Alternatively, the effects of survey items might not be unique to gender and may also affect other groups, such as young citizens with limited political experience and/or those with low levels of education.
To calibrate the extent to which the gender gap in knowledge is distinctive in comparison to the other two sources of knowledge inequalities, we make use of a vast breadth of factual knowledge survey items, encompassing very different domains of the world of politics, by pooling three modules of the Comparative Study of Electoral Systems (CSES): 106 postelection surveys between 1996 and 2011. We perform our analyses at the question level (n = 330 knowledge items) instead of relying on additive indices of factual questions. With this comparative approach, we overcome some of the key limitations of current studies, which mainly draw on just a handful of political knowledge items collapsed into indexes that obscure the most interesting variation pertaining to the subjects they probe, their format, and the objects that are queried.
Our analyses reveal that the gender gap in political knowledge shares commonalities with the other sources of knowledge inequality, but also some traits that set it apart. Questions probing familiarity with electoral and partisan politics and questions with an open format affect all knowledge gaps in the same direction. The gender gap in knowledge, however, presents two peculiarities: first, women provide a larger number of correct answers to items asking about gender-specific matters; and second, there are no gender differences in the likelihood of providing an incorrect answer, independently of the three features of survey items studied here. This finding not only yields concrete substantiation of scholars' claims that traditional measurements of political knowledge are anchored in a narrow idea of politics (Dolan, 2011), but, more importantly, shows that the underestimation of women's levels of knowledge has a distinctive component. Our study is the first to offer such a fine-grained examination of different features of survey items in order to precisely delimit the area that uniquely pertains to gender versus other structural sources of inequality. We discuss the implications of these findings for the study of political knowledge and its measurement in the concluding section.

How Content, Format, and Object of the Knowledge Items Are Related to the Typical Knowledge Gaps
There is little agreement among scholars about the best way to measure what people know about politics, more specifically on the measurement of "the range of factual information about politics that is stored in the long-term memory" (Delli Carpini and Keeter, 1996:10). This information usually entails knowledge about rules, actors, and the relevant political issues facing countries. The conventional way to assess this sort of knowledge asks survey respondents to answer a handful of questions about the political system. The dominant approach in cross-national surveys relies on factual questions asking respondents to identify political officeholders, the size of assemblies, or the electoral rules. These questions differ not only in their content but also in their format and the cognitive abilities needed to provide a correct answer.
Despite the hints existing research has put forward, no study to date has systematically scrutinized the extent to which question content, format, and object matter for the main sources of knowledge inequalities: gender, education, and age. 1 We first briefly discuss how the content, format, and object of the knowledge questions shape the probability of providing a correct answer, drawing on the suggestions advanced by previous contributions. In a second stage, we examine how these three features can also shape the size of the typical gaps in knowledge (gender, education, and age).
Starting with the content of the knowledge items, scholarship has identified certain domains, such as international politics, as more difficult to understand and retain in long-term memory. Other spheres, such as domestic politics, are easier to grasp and can be quickly invoked, which in turn reduces the need to draw on memory to formulate an answer (Delli Carpini and Keeter, 1996). In general, the odds that respondents will provide a correct answer to a political knowledge item are contingent on the topic of the question. Items pertaining to electoral politics, such as parties and the main political actors in government, will be easier for citizens, considering that they are salient and covered by the mass media in the context of electoral campaigns. On the flip side, topics that are the object of less attention will be more difficult to retain in long-term memory and to recall. This is evidenced by a study of panel data in the United Kingdom showing that mean scores for political knowledge are highest immediately following elections and lowest at mid-term (Andersen, Tilley, and Heath, 2005).
With respect to format, there are two main variants: open and closed. Open-ended questions ask the respondent about a given political issue without imposing any limits on the response. The main advantage of open-ended questions is that they minimize the risk of guessing (Luskin and Bullock, 2011). Despite the lower frequency of lucky guessing, these types of questions are more demanding for the respondent. By contrast, in closed-ended items, respondents must choose the answer they believe is correct from a limited number of responses (ranging from two, such as true versus false, to four or five different options). This format is less challenging and therefore elicits more correct responses (Mondak, 2000). Yet it also increases the propensity of respondents to guess (Luskin and Bullock, 2011). This latter tendency is more pronounced if the number of response options is small (Rodriguez, 2005). In short, the chances of providing a correct answer to a political knowledge question are higher when closed-ended questions are used (above all those in true/false [T/F] format) in comparison to the open-ended format, but the probability of guessing is also higher.
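The guessing mechanics described above can be made concrete with a small arithmetic sketch: under blind guessing, the expected share of lucky correct answers is 1/k for a closed-ended item with k options, and effectively zero for an open-ended item. The function below is purely illustrative, not part of the estimation strategy used in this article.

```python
from typing import Optional

def guess_rate(n_options: Optional[int]) -> float:
    """Expected share of correct answers from blind guessing alone.

    n_options: number of response options in a closed-ended item;
    None stands for an open-ended item, where a lucky guess is
    practically impossible.
    """
    if n_options is None:
        return 0.0
    return 1.0 / n_options

print(guess_rate(2))     # T/F item -> 0.5
print(guess_rate(4))     # four-option MC item -> 0.25
print(guess_rate(None))  # open-ended item -> 0.0
```

This is why the abstract recommends closed-ended items with at least four options: the chance component of a correct answer shrinks from one half (T/F) to one quarter or less.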
The third and last distinction we address here concerns the cognitive abilities needed to provide a correct answer to a factual knowledge item. The scarce existing research shows that questions requiring the respondent to remember specific numerical information entail a higher level of difficulty than questions probing for verbal information (Elff, 2009). When asked to recall a fact or an event in surveys, respondents draw upon their declarative memory, referring to those memories that can be consciously recalled or declared (Kandel, Schwartz, and Jessell, 1995). The aptitude of respondents to rely on the declarative memory might depend on the object of the specific question. While some people are advantaged by verbal cognitive styles, others favor visual or numerical ones (Paivio, 1986). In fact, Prior (2014) shows that using visual formats to ask questions increases levels of political knowledge in the United States. The amount of effort required to answer a political knowledge question is higher for items related to specific numerical information than for items related to verbal information, in particular when related to the names of famous political actors who often appear on television and in other mass and social media sources.
To recapitulate, topics and objects abundantly covered by the mass media, as well as the T/F format, require (on average) a smaller amount of effort from respondents to provide a correct answer. For items with these characteristics, respondents have higher incentives to comb their memory for considerations pertinent to the questions, which in turn increases the prospects of correctly answering these items.
In addition to influencing the probabilities of providing a correct answer in the electorate at large, we conjecture that all three features (content, format, and object) are also related to the size of the typical knowledge gaps a mounting number of researchers have identified. Groups presenting lower levels of knowledge, motivation, and/or experience, such as women, the young, or those with low levels of education, might be particularly affected by these question features (Fortin-Rittberger, 2016; Prior, 2014; Ferrin, Fraile, and García-Albacete, 2018).
While we know that the content, format, and object of the knowledge items influence the gender gap (Fortin-Rittberger, 2016; Prior, 2014; Ferrin, Fraile, and García-Albacete, 2017, 2018), we know much less about how these features affect other sources of knowledge inequalities such as education or age. If the way knowledge is measured is solely related to one of the three main sources of knowledge inequalities (gender), then we have solid empirical grounds on which to argue that the size of the gender gap has been systematically overestimated by the conventional survey items used by mainstream empirical scholarship. 2 With respect to the content of the knowledge items, research has shown that at least part of the documented gender gap in knowledge is a product of the traditional measures typically used in surveys, which focus on electoral and partisan politics, tipping the balance in favor of "male" knowledge (Dolan, 2011; Ferrin, Fraile, and García-Albacete, 2018; Fortin-Rittberger, 2016; Stolle and Gidengil, 2010). Following this literature, we expect that the size of the gender gap might be greater when knowledge is measured through items asking about the traditional arenas of electoral and partisan politics, a sphere that is often perceived as a man's game. The gap should be narrower when questions focus on topics of direct relevance to women as a group (for instance, female politicians or officeholders, policies that concern women, local politics, etc.; see Barabas, Jerit, and Pollock, 2014; Delli Carpini and Keeter, 1996; Dolan, 2011; Ferrin, Fraile, and García-Albacete, 2018; Kenski and Jamieson, 2000; Rapeli, 2014; Stolle and Gidengil, 2010).
The association between education, age, and political knowledge has been extensively documented (Berggren, 2001; Clark, 2017; Curran, Iyengar, and Lund, 2009; Dassonneville and McAllister, 2018; Delli Carpini and Keeter, 1996; Fraile, 2013; Gordon and Segura, 1997; Grönlund and Milner, 2006). Yet only one study has tested the extent to which the size of the education gap is related to the content of the knowledge items (Barabas, Jerit, and Pollock, 2014). According to Barabas, Jerit, and Pollock (2014), the education gap in knowledge is wider when the content of the questions refers to institutions and political actors rather than to more policy-specific knowledge. Based on these insights, we conjecture that education and age gaps in knowledge might be larger for questions related to electoral politics such as parties and the main political actors in government. The rationale is that knowing about these issues hinges on whether citizens are motivated enough to access varied sources of information. Communication scholars have shown that media do not reach all citizens equally; younger and less educated citizens are less likely to be exposed to news across all media sources: television, radio, and press (Aalberg, Blekesaune, and Elvestad, 2013; Shehata and Strömbäck, 2011). Consequently, as the chances of citizens being exposed to various sources decrease, so do the chances of knowing about political actors and parties.

2 We acknowledge other sources of knowledge inequalities documented in prior studies, such as race or income. We focus on gender, education, and age because these are the three variables included in the CSES modules, lending themselves to cross-national comparisons. Although income is included in the CSES, nonresponse varies widely across countries. Besides, income is strongly related to the level of education. Finally, racial inequalities are context dependent across participating countries and do not lend themselves to cross-country comparisons. Additionally, many countries do not field questions pertaining to race.
Concerning the format of the knowledge items, existing research reveals that women are more risk-averse and less prone to guess. This creates an advantage in favor of men when responding to political knowledge items in conventional surveys (Fortin-Rittberger, 2016; Frazer and Macdonald, 2003; Kenski and Jamieson, 2000; Lizotte and Sidman, 2009; Mondak and Anderson, 2004; Ferrin, Fraile, and García-Albacete, 2018). Given the different propensities of women and men to guess, and considering that closed-ended items, predominantly those using the T/F option, are more likely to elicit guessing, the size of the gender gap should be larger in closed-ended items using the T/F option. The same argument can be extended to both the less educated and the young. Although overlooked by previous contributions, we argue that people with fewer resources when it comes to the political realm (i.e., the young and/or very old, and those with little formal education) might be more risk-averse and less willing to guess, and thus more likely to resort to the "don't know" option than their highly educated and older counterparts. Consequently, we expect the size of both the age and education knowledge gaps to be greater for items using the closed-ended T/F option, where the chances of guessing are at their peak.
Finally, with respect to the object addressed by each question, the only study that indirectly speaks to its differential association with education and gender is Prior (2014), from which we derive our last expectation. Using an experimental design, Prior (2014) shows that the percentage of correct answers increases for women and those with lower education when the easiest format (a visual format) is used to measure knowledge. Elff (2009) shows that questions requiring the respondent to remember specific numerical information pose a greater level of difficulty than questions probing for verbal information. From these two studies, we deduce the expectation that the size of the three gaps will be larger for the most difficult questions, in this case those asking for numerical information. In the ensuing section, we describe the research design used to test the extent to which the content, format, and object of the knowledge items are related to the size of the typical gaps.

Data and Estimation Strategy
We make use of a vast breadth of factual knowledge survey items, encompassing very different domains of the world of politics, with broad country coverage by pooling three modules of the Comparative Study of Electoral Systems (CSES): 106 postelection surveys in 51 countries between 1996 and 2011 (The Comparative Study of Electoral Systems, 2003, 2007, 2013). The CSES contains three questions per election study measuring political knowledge, each displaying different degrees of difficulty. Given that these items are not standardized across participating election studies, our research design allows us to draw from over 330 different knowledge items, providing enough empirical variation across the categories of our three main independent variables: content, format, and object. We perform analyses by stacking the data at the question level (n = 330 knowledge items). Given that each respondent has answered three questions, this yields a data set of over 600,000 observations. We do not include CSES Module 4 (from 2011 to 2016) because this module fielded the same standardized knowledge items in all participating countries, which removes all variation at the question level. This strategy allows us to group knowledge items within each individual respondent. We offer a comprehensive approach spanning a large number of countries and time points that goes well beyond existing studies drawing on single-country research designs and a narrow understanding of the domain of politics.
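The stacking step described above can be illustrated with a small pandas sketch. The variable names (`know1`–`know3`, `resp_id`) are hypothetical; the actual CSES variable names differ. The point is only the reshape from one row per respondent to one row per respondent-item pair, onto which question-level features can then be merged.

```python
import pandas as pd

# Hypothetical respondent-level extract: one row per respondent,
# three knowledge items in wide format (know1..know3).
wide = pd.DataFrame({
    "resp_id": [1, 2],
    "country": ["NO", "NO"],
    "know1": ["correct", "dk"],
    "know2": ["incorrect", "correct"],
    "know3": ["correct", "correct"],
})

# Stack at the question level: one row per respondent-item pair,
# so question features (content, format, object) can be merged on.
long = wide.melt(
    id_vars=["resp_id", "country"],
    value_vars=["know1", "know2", "know3"],
    var_name="item",
    value_name="response",
)

print(len(long))  # 2 respondents x 3 items = 6 rows
```

Applied to the pooled CSES file, the same reshape turns roughly 200,000 respondents with three items each into the 600,000-plus observations analyzed here.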

The Dependent Variable: Political Knowledge
We use a trichotomous operationalization of political knowledge. For each question, we have coded correct answers (1), incorrect answers (2), and "don't know" (DK) (3). "Refused" responses were coded as incorrect answers. We know that, in comparison to men, women are less willing to guess when responding to survey questions and are more likely to select the DK option (Kenski and Jamieson, 2000; Lizotte and Sidman, 2009; Mondak and Anderson, 2004; Ferrin, Fraile, and García-Albacete, 2017; Fortin-Rittberger, 2016). This coding scheme, isolating DK, allows us to address the issue of "response bias" around the DK category head on. What is more, we can establish whether this response bias is exclusive to women, or whether it also applies to other groups of respondents, such as those with fewer resources (i.e., the young and/or very old and the less educated). To the best of our knowledge, no existing study has sought to tackle this issue as directly as we propose in this article.

Figure 1 presents the distribution of the responses to all political knowledge items from the three pooled CSES modules. About 55 percent of total responses are correct, 26 percent are incorrect, and 19 percent are DK. Table 1 summarizes the three main question features at the heart of this study (content, format, and object), along with the distribution of each in the pooled data we use (in rounded percentages of the total observations). We have relied on both the CSES codebooks and the original questionnaires from each election study to manually code content, format, and object. We discuss the operationalization of each below.
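The trichotomous coding can be sketched as follows. The raw answer labels used here are hypothetical placeholders for the election-study-specific response codes; what the sketch shows is the mapping rule itself, including folding "refused" into the incorrect category as described above.

```python
import pandas as pd

# Coding scheme: 1 = correct, 2 = incorrect, 3 = DK.
# "refused" is folded into incorrect, as in the text.
coding = {"correct": 1, "incorrect": 2, "refused": 2, "dk": 3}

# Hypothetical raw responses for five respondent-item pairs.
raw = pd.Series(["correct", "dk", "refused", "incorrect", "correct"])
coded = raw.map(coding)

# Distribution of the trichotomous outcome (shares of each category).
shares = coded.value_counts(normalize=True).sort_index()
print(shares)
```

In the pooled CSES data, the analogous distribution is roughly 55 percent correct, 26 percent incorrect, and 19 percent DK, as reported for Figure 1.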

Question-Level Covariates
Figure 1. Frequency of Correct, Incorrect, and DK Answers to CSES Knowledge Items. SOURCE: Our elaboration on the first three modules of the Comparative Study of Electoral Systems (2003, 2007, 2013).

Starting with content, the CSES contains three items per election that are not standardized across countries, meaning that each national election study team is left free to ask these questions according to its own standards. Although the lack of standardization makes direct comparison across countries challenging, the sheer variance in question content yields valuable insights for our research purpose. The items involved in most surveys, including the CSES, mainly tap into the traditional understanding of political knowledge: identifying political officeholders or officials, and verifying the extent of knowledge of key institutions, such as the size of assemblies, details concerning electoral rules, and the presence of thresholds. We have coded questions into seven categories to capture the different dimensions of factual knowledge included in the diverse items, but also to make more fine-grained distinctions than what has been achieved to date, given the richness of our pooled data. The categorization we employ is based on the contributions of Delli Carpini and Keeter (1996) as well as Rapeli (2013), albeit with a higher degree of differentiation. With this approach, we seek to disentangle the various domains of politics. The features we have sought to unpack are whether knowledge is about individuals, parties, policies, institutions, domestic issues, international issues, or topics women would relate to.
We have used the following classification to code each item, according to whether the question pertains to: (1) the identification of persons (within institutions, i.e., who is the minister of foreign affairs, or the leader of a certain party) or of their positions (which position a given politician holds); (2) the identification of political parties (i.e., the largest, or second largest, party in government in each country); (3) the rules of the democratic game, such as electoral rules, term length, and history; (4) policy-specific questions at the national level (i.e., what is the largest public spending area, the size of the defense budget); (5) political parties' positions on diverse issues; (6) gendered content, for example, identifying a female politician or a policy domain that could be traditionally classified as a women's issue (e.g., social services, local politics, and the like); and (7) international events, other countries, and international organizations.
In most cases, this classification is exhaustive and exclusive, so that each item is included in only one of the seven categories. Yet there are a few cases where the same question could be classified either as 1 (identification of a person) or 6 (gendered content). Given the scarce number of questions including gendered content, we classified all questions implying the identification of a woman politician as 6, even if the question refers to an individual politician (for example, the identification of the minister of transport in Norway in 2009). The second feature of the knowledge items our study addresses is their format: T/F items, multiple-choice (MC) items, and open-ended questions. We code them accordingly. Finally, the third feature refers to the object of the questions. Following the sole existing study (Elff, 2009), we have coded each question into three categories, according to whether the response elicited requires a number or quantity, a proper name, or another word.

Table 1 summarizes the distributions of content, format, and object of the knowledge items included in the first three CSES modules between 1996 and 2011. Regarding content, priority has been given to items asking about the rules of the democratic game (40 percent of the total items) and to the identification of specific political actors (31 percent). Policy-specific and gendered topics are less frequently used in CSES surveys (2 percent and 3 percent of the total items, respectively). Additionally, the most common format is open-ended (72 percent of the total items), while 19 percent are T/F and almost 9 percent MC. Finally, the majority of items probe for verbal information (41 percent ask for a name and 29 percent for another word), whereas 30 percent ask for specific numeric information.

Individual-Level Covariates
We draw on the standard predictors of political knowledge outlined in existing research (abilities, motivation, and resources), such as age, education, income, media attentiveness, political interest, and other equivalent indicators. The following analyses only include gender (a dummy variable coded 1 for men and 0 for women), age (in years, specified as quadratic, as prior contributions suggest), and education (an ordinal variable that goes from no education to university degree) as the typical sources of knowledge inequalities, to maximize the number of observations. Unfortunately, the CSES does not contain questions about declared political interest or media exposure common to all three modules. Since we place the focus at the question level, we believe that concentrating on these three main antecedents of knowledge is adequate. We have replicated the same estimations including the following additional independent variables: income, external political efficacy, and self-reported vote (voted versus not) in the preceding elections. Since the results for our key variables of interest at the question level show the same pattern with or without these controls (see Figure 1 in the Supplementary Appendix for the findings of this replication), we opt for the most parsimonious estimations. 3
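The three individual-level covariates can be constructed as in the brief sketch below. Column names are hypothetical stand-ins for the CSES variables; the coding conventions (men = 1, a quadratic age term, ordinal education) follow the description above.

```python
import pandas as pd

# Hypothetical respondent-level extract with raw demographics.
df = pd.DataFrame({
    "sex": ["male", "female", "male"],
    "age": [20, 50, 35],
    "education": [0, 3, 5],  # ordinal: 0 = no education ... 5 = university degree
})

# Gender dummy: men coded 1, women 0, as in the text.
df["male"] = (df["sex"] == "male").astype(int)

# Quadratic specification of age: include both age and its square.
df["age_sq"] = df["age"] ** 2
```

Both `age` and `age_sq` would then enter the model together, letting knowledge rise and later flatten or decline over the life cycle rather than forcing a linear age effect.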

Findings
Given that our dependent variable classifies responses as correct (value 1), incorrect (value 2), or DK (value 3), we have estimated a multinomial logit regression with fixed effects for country and year. 4 The question-level features elaborated in the previous section constitute the main independent variables. We also control for the standard sources of knowledge inequalities according to the current state of the art: gender, age, and education. In a first step, we establish whether the content, format, and object of the survey items are systematically related to the probabilities of providing a correct, incorrect, or DK answer to a given knowledge question. After having confirmed this first set of empirical associations, we then move to the most crucial estimation of gender, education, and age differences.

Figure 2 summarizes the main findings of our first estimation (calculated on the basis of Table 1 in the Supplementary Appendix). It shows the predicted probabilities of providing a correct, incorrect, or DK answer to a political knowledge item across question features. The upper-left corner of Figure 2 focuses on the contents of the knowledge items. As expected, the estimated probability of giving a correct response is 0.69 for items asking respondents about their knowledge of topics related to the main parties of their respective political systems. This probability decreases by 14 points (down to 0.55) for items inquiring about key political actors, and even more in the case of items quizzing about rules and institutions of the democratic system or items related to party positions (0.52). Questions about international politics show the lowest predicted probabilities (0.47). Conversely, the chances of giving an incorrect response are higher for items related to party positions (0.47) and questions about national policies (0.37).
Finally, the highest probabilities of responding DK are for items quizzing about rules and institutions of the democratic system (0.23) and about international politics (0.21), suggesting that these two are the most difficult types of questions for respondents in this pool of countries, since they prefer to choose the more sincere option of DK rather than guessing. While international politics is often considered a distant sphere for the average citizen, facts about the rules of the game need to be learned through education, discussions, or exposure to the mass media, and also remembered (Delli Carpini and Keeter, 1996). The upper-right-hand side of Figure 2 displays question formats. As expected, the estimated probabilities of giving a correct response are highest (0.63) for items presenting a closed format with only two options (T/F). These probabilities decrease to 0.57 for items using an MC format and to 0.51 for items displaying an open-ended format. Open-ended items show the highest predicted probabilities of answering incorrectly (0.29), followed very closely by MC items (0.28). By contrast, we find the highest propensity to give a DK answer (0.20) for the T/F items, followed by the open-ended items (0.19). In the case of MC items, respondents tend to provide more incorrect responses than DK, substantiating the claim that respondents have a higher propensity to guess when closed formats are used to measure knowledge (Luskin and Bullock, 2011).
Finally, the bottom-left side of Figure 2 provides prima facie evidence suggesting that items asking for the names of political actors (who often appear on television and in other mass or social media sources) are associated with larger chances of giving a correct answer (0.62) than items calling for specific numerical information (0.49) or other words (0.48). Respondents are more likely to answer incorrectly than to pick DK for these last two types of items. The implication is that they might be guessing to a greater extent than in the case of names.
We now turn to the main contribution of this article: the association between these survey characteristics and the typical knowledge gaps. We have replicated the estimations presented in Table 1 in the Supplementary Appendix, adding interaction terms between each of the three variables at the question level (content, format, and object) and gender, education, and age. The results of these additional estimations are provided in Tables 2-4, respectively, in the Supplementary Appendix. Table 2 summarizes the substantive findings for the contents of the knowledge items (*significant differences between the two predicted probabilities at p < 0.001). The table displays the size of the gaps for gender, education, and age by supplying the difference in predicted probabilities of giving a correct, incorrect, or DK answer between (i) men and women (the gender gap), (ii) highly and less educated individuals, university versus incomplete secondary (the education gap), and (iii) 50- and 20-year-old individuals (the age gap) across the different contents of the knowledge items. The educational gaps are broadest: there are educational gaps in all outcome options, that is, correct, incorrect, and DK answers. The sizes of the gender and age gaps are roughly similar and mostly affect the correct and DK answers. We rarely observe gender and age gaps in incorrect answers. There seems to be a qualitative difference between these three sources of knowledge inequality, setting education apart from the rest. We find significant gender gaps in favor of men across all types of content: the difference between men and women is smallest (0.07) for items involving gender-specific knowledge and roughly double that for the remaining items.
Supporting prior scholarship (Fraile, 2014; Frazer and Macdonald, 2003; Kenski and Jamieson, 2000), we find trivial gender differences in the predicted probabilities of giving an incorrect answer, save for items asking about party positions (as previously discussed, partisan politics is perceived as a man's domain) and policy content. The gender gap vanishes for incorrect and DK answers to questions involving gender-specific knowledge, as others have shown (Dolan, 2011; Ferrin, Fraile, and García-Albacete, 2018; Fortin-Rittberger, 2016; Stolle and Gidengil, 2010).
Moving to the next columns of Table 2, we find larger gaps in knowledge between low and highly educated respondents (university versus not completed secondary) for all content types, and across all answer types. As expected, and following the same pattern as gender, the largest differences in the likelihood of providing a correct answer appear for questions related to electoral or partisan politics. The size is smaller but still noteworthy (0.15, twice the gender gap) for items involving gender-specific knowledge. The finding that most clearly sets education apart from the age and gender gaps is the significant education gap in incorrect responses across all topics, especially those pertaining to the identification of parties' positions (0.22) and knowledge of policies (0.16), confirming that the patterns Barabas, Jerit, and Pollock (2014) uncovered in the United States also appear in other democracies. Education differences in DK answers are generally smaller, and only relevant for items asking about international politics (0.19) and gendered topics (0.18).
The last three columns of Table 2 suggest that the age gap in knowledge between a mature and a young respondent (50 vs. 20 years old) is considerable, albeit smaller in magnitude than the education gaps. The sharpest age disparities in the predicted probability of giving a correct answer occur for questions probing familiarity with national policies (0.17). As with the education gap, the age gap for items involving gender-specific knowledge is equally sizeable: 0.15. By contrast, age gaps are only relevant in the chances of giving a DK answer, not an incorrect one. As is the case with women, younger individuals are more likely to answer "DK."

Table 3 focuses on the format of the knowledge questions and mirrors the pattern found for content: education gaps in correct answers are wider than gender or age gaps. While we find education gaps in both incorrect and DK answers, we seldom notice gender or age gaps in incorrect answers. Starting with the gender gap, the difference in the chances of providing a correct answer is smaller for MC (0.07) than for open-ended (0.11) or T/F (0.09) formats, as expected. Again, gender differences in the likelihood of providing an incorrect answer are trivial, and moderate in the probability of offering a DK response. Contrary to our expectations, the education gap in the probability of giving both a correct and an incorrect answer is largest for open-ended items (0.23 and 0.11, respectively). As anticipated, age differences in the probability of giving a correct answer are higher (0.13) for questions presenting a T/F format and trivial in the chances of answering incorrectly. Finally, we find age differences in the probability of providing a DK answer for closed-ended items (both T/F: 0.10 and MC: 0.09).
This last finding reinforces the tendency detected for question content: 20-year-olds tend to choose the DK option when they do not know, rather than risk answering erroneously.

Table 4 focuses on the object addressed by each question. We find a significant gender gap in the chances of providing a correct answer for all objects, albeit without much differentiation among them. Put simply, none of the objects seems to pose an additional difficulty for women. There is no gender gap in the probability of giving a wrong answer, while we once more notice a moderate gender gap in the propensity to choose the DK response option, a gap that is similar across all three objects (between 0.06 and 0.08). We find the largest education and age gaps in correct answers for items asking for the names of political actors (0.27 and 0.17, respectively). Again underscoring the peculiarity of education, we find significant education gaps in incorrect answers across all objects (0.09), gaps that are marginal in the cases of gender and age. Names are the most difficult object for less educated and younger citizens, but do not seem to affect women differently than other objects.
Taken together, these findings indicate that the specific types of questions used to measure knowledge systematically affect the size of the knowledge gaps we uncover. Are the three gaps equally affected by features of survey items? Our findings are ambivalent. On the one hand, there is a clear pattern across all gaps studied here: we observe the largest knowledge gaps for questions probing familiarity with electoral and partisan politics and using an open-ended format. But there are interesting variations across the three gaps. First, the education gap is largest, and appears across all survey features and responses (correct, incorrect, DK). Second, the object of the questions used to measure knowledge only affects the education and age gaps, not the gender gap. Finally, and most importantly, the division between the incorrect and DK options reveals the distinctiveness of the gender and age gaps relative to the education gap. No matter the content, format, or object addressed by the political knowledge questions, the gender differences in the likelihood of giving an incorrect answer are always trivial, with one exception: items inquiring about party positions and policies, precisely the topics often identified as "male" knowledge. This suggests that the underestimation of women's levels of knowledge has a distinctive component. This gendered pattern of nonresponse is also reproduced among 20-year-old respondents. The absence of both a gender and an age gap in incorrect answers implies that it was perhaps premature or simplistic to conclude that women or young people know less about politics than men or older respondents, respectively. By contrast, the interpretation of the education gap is crystal clear: highly educated respondents know more about politics than the less educated. We discuss the implications of these findings in the last section.

Conclusion
This article engages the ongoing debate about the appropriate measurement of a theoretically complex concept, political knowledge, by testing whether the content, format, and object of survey items are related to the size of the three typical sources of knowledge inequality documented in prior scholarship: gender, education, and age. Our comparative findings largely serve to confirm and unify the islands of knowledge stemming from individual country studies of Canada (Stolle and Gidengil, 2010), the Netherlands (Howe, 2006), Spain (Ferrin, Fraile, and García-Albacete, 2018), the United Kingdom (Andersen, Tilley, and Heath, 2005; Frazer and Macdonald, 2003), and the United States (Barabas, Jerit, and Pollock, 2014; Delli Carpini and Keeter, 1996). Our first contribution is to expand existing work by showing that levels of political knowledge are highest when surveys field questions that ask about party politics and political actors, use a T/F format, and take a name as their object, not only in those countries that have been studied to date, but across 51 countries between 1996 and 2011. Questions about specific policies or international politics, with open-ended formats and numbers as their objects, yield (on average) lower levels of political knowledge.
Our second contribution is to provide a comparison of the typical knowledge gaps reported in existing studies: education, gender, and age. On the one hand, we confirm the existence of substantial gender, age, and education differences in the probability of giving a correct answer. On the other hand, we cast some doubt on the mainstream interpretation scholarship has advanced for these patterns. The interpretation of the education gaps is unambiguous: citizens with low levels of education know less about politics than those with high education, since they display lower probabilities of giving correct answers and higher probabilities of providing both incorrect and DK answers. The same is not true of women and young individuals, where we find a more intricate pattern. We observe negligible gender and age differences in the chances of answering incorrectly, and only moderate gender differences in the likelihood of providing a DK response. The fact that women and younger citizens provide, on average, fewer correct answers does not automatically mean that they know less about politics than men or the middle-aged. Rather, it suggests that their propensity to respond to a survey item depends on factors other than those considered here (the content, format, or object of the questions), and that the level of knowledge of men estimated from the percentage of correct answers to factual survey questions might be artificially inflated by men's lucky guesses, confirming findings from recent scholarship (Fortin-Rittberger, 2020).
We also identify two specific commonalities across the three different gaps: gaps in the propensity to provide a correct answer are largest for questions asking about partisan/electoral politics and using an open-ended and/or T/F format. The gender gap stands out insofar as the difference in the odds of answering correctly between men and women is narrower than the education or age gaps. Perhaps even more distinctive about the gender gap in knowledge is its reduction when items asking about gender-specific topics are used, a feature not shared by the other two sources of knowledge inequality analyzed here (age and education). The third feature setting the gender gap apart is the absence of significant differences in the likelihood of giving an incorrect answer, save for items inquiring about party positions and policies. In a nutshell, we demonstrate a systematic gendered pattern of nonresponse: women are more likely than men to choose the DK option on a survey knowledge question, while men's scores might be inflated by the lucky correct guesses they are more likely to attempt. This finding is the most direct empirical demonstration that conventional survey items underestimate the knowledge of women in comparison to men. A similar pattern emerged among 20-year-old respondents: to the best of our knowledge, no prior study has uncovered such age differences, and this pattern warrants further investigation.
One of our study's key corollaries is that at least part of existing research's findings regarding gaps in political knowledge is driven by methodological choices. This carries a series of implications for future survey design: limiting the measurement of knowledge to electoral and partisan politics, as the CSES and several other postelectoral surveys around the world currently do (for instance, the European Elections Study [EES]), might systematically overestimate the size of the typical knowledge gaps, and not only the gender gap as many studies show. Surveys aspiring to measure citizens' knowledge about the political world in a valid manner should include items inquiring about different substantive content, not only elections or partisan politics. Measuring political knowledge is by no means an easy task, but it is important to develop innovative paths in the elaboration of measures of factual knowledge. Including more knowledge items in surveys, covering a wider range of content, might allow the construction of more accurate knowledge indexes than those typically used in existing scholarship (which contain three or at most four knowledge items privileging a particular, partisan and electoral, definition of knowledge). Combining questions across diverse content types might yield a more accurate sense of what citizens know about the political realm.
Another recommendation informing future survey design regards the format. Given the findings reported here, we recommend the use of a closed-ended format with at least four possible options. This format not only minimizes the risk of lucky guessing, as some scholars have proposed (see, for example, Robison, 2015), but also avoids overestimating the size of gender and age knowledge differences. Since open-ended items have clearly predominated in the CSES (around 72 percent of the total items analyzed here are of this format, see Table 1), and T/F formats in the EES, this recommendation appears especially pertinent.
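The guessing logic behind this recommendation is simple arithmetic, sketched below. This is a back-of-the-envelope illustration, not a calculation from the article: if a respondent truly knows the answer with probability p_know and otherwise guesses uniformly among k response options, the observed share of correct answers is p_know + (1 - p_know) / k, so T/F items (k = 2) inflate measured knowledge far more than four-option items (k = 4).

```python
# Hypothetical illustration of lucky guessing: observed correct-answer
# rate as a function of true knowledge (p_know) and the number of
# response options (k), assuming uniform guessing when the answer is
# not known.
def observed_correct(p_know, k):
    return p_know + (1 - p_know) / k

p_know = 0.40  # assumed true knowledge level, for illustration only
print(f"T/F (k=2):          {observed_correct(p_know, 2):.2f}")  # 0.70
print(f"4-option MC (k=4):  {observed_correct(p_know, 4):.2f}")  # 0.55
```

With only two options, a respondent who truly knows the answer 40 percent of the time scores 70 percent correct; with four options the inflation shrinks by half, which is why more options reduce the distortion that differential guessing propensities introduce into group comparisons.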
Our analyses also reveal that when knowledge items focus on recognizing the names of political actors (as is often the case in most current surveys), they might overestimate the size of the education and age gaps, though not of the gender gap. These findings suggest that surveys measuring knowledge should include items inquiring not only about names, but also about numbers, words, and images, so that the cognitive abilities required to answer correctly are diverse and the measurement does not favor one ability over the others. In any case, we offer compelling evidence that numbers do not involve an additional level of difficulty for women or young people.
Traditional knowledge batteries have mainly focused on the accretion of facts. The emphasis on citizens' capacity to remember facts gives priority to their declarative memory while leaving aside their procedural memory. Yet political competence is not only the ability to know specific political facts but also the ability to understand them and employ them in one's own interests. Recent studies have suggested that we should avoid focusing on what people do not know, and instead try to understand what they really know and how they use the information they receive in the context they live in and according to their own personal experience. In this respect, traditional knowledge batteries are being questioned as valid measures of what people really know and understand about the political realm (see, for instance, Cramer and Toff, 2017). Our findings add to this line of criticism by calling for more openness and creativity in the design of survey questions that try to measure what people know about politics in a balanced and realistic manner.