1 Introduction

Knowledge and understanding of the opinions and attitudes of political elites (e.g., members of parliament, party leaders, local politicians), and of how they process information and take decisions, is key for political scientists. Obtaining this information may require conducting research with political elites as participants. Such studies need to be quantitative in nature if they have one or both of the following aims: (a) to systematically test theories, such as motivated reasoning (Baekgaard et al. 2019), or to assess whether the effects that hold among “the rest of us” also hold for political elites (like the reflection effect, see e.g., Linde and Vis 2017; Sheffer et al. 2018); and (b) to identify general patterns, such as politicians’ role orientations (e.g., Thomassen and Esaiasson 2006) or their attitudes towards European integration (e.g., Freire et al. 2014). To meet such aims, the number of observations needs to be sufficiently high. Since the population of political elites is usually small by definition, a certain response rate is required to achieve a sample size that allows for meaningful statistical analysis. Against this backdrop, one of the key challenges in conducting quantitative studies with the participation of political elites is obtaining a sufficiently high response rate. The population of political elites is not only small; getting its members to participate may also be difficult because they have busy schedules and are regularly shielded by gatekeepers (Druckman and Lupia 2012; Hoffmann-Lange 2008). Of course, the latter issue is faced by quantitative and qualitative researchers alike. In contrast to the relatively abundant and coherent literature on, for example, qualitative elite interviews (Aberbach and Rockman 2002; Efrat 2015; Goldstein 2002; Harvey 2011; Lilleker 2003), the literature on conducting quantitative studies with the participation of political elites is less integrated and scarcer (e.g., Dahlberg 2007; Hunt et al. 1964; Maestas et al. 2003; Montgomery et al. 2008; Robinson 1960; Walgrave and Joly 2018). Consequently, we lack systematic knowledge of best practices for: (1) designing the study, such as which survey mode to use (e.g., face-to-face or online), which type of questions to use (closed vs. open) and when to field the study; and (2) soliciting the participation of political elites.Footnote 1 Our study provides such knowledge, which is indispensable for addressing the challenge of obtaining a sufficiently high response rate.

By collecting and discussing experiences, lessons and recommendations on conducting quantitative studies with political elites as participants, this paper makes three contributions to the existing literature. Our first contribution is to systematically investigate the drivers of response rates. We lack systematic knowledge about these drivers, as noted by Bailer (2014) and Goldstein (2002), and such knowledge will provide further insights into how best to design a study. Therefore, we compiled a new dataset of 342 political elite surveys from eight large-scale, multi-wave survey projects, covering 30 years and 58 countries. Our second and third contributions are to integrate the scattered existing findings and to present and discuss the results of an original expert survey among researchers with experience in conducting quantitative studies with political elites (n = 23). This allows us to identify best practices for designing the study and for soliciting the participation of political elites.

By compiling a comprehensive list of best practices, systematically testing some widely held beliefs about response rates, and providing benchmarks for response rates by country, survey mode and elite type, we aim to facilitate future studies with the participation of political elites. This will contribute to our knowledge and understanding of political elites’ opinions, information processing and decision-making, and thereby of the functioning of representative democracies.

2 Existing quantitative research with political elites as participants

There is a substantial body of quantitative literature with political elites as participants. We focus on those quantitative studies in which political elites knowingly and voluntarily participate.Footnote 2 Some of these studies are largely descriptive, examining political elites’ opinions, attitudes, and self-reported behavior, typically by means of surveys (e.g. Deschouwer et al. 2014; Wessels 2005). Other studies are more explanatory, examining how political elites process information, reason and decide, oftentimes by means of (survey) experiments (e.g. Baekgaard et al. 2019; Linde and Vis 2017; Sheffer et al. 2018). Table 3 in “Appendix 1” provides an overview of examples of quantitative studies with political elites as participants, describing the study’s aims and/or research question(s), the country or countries in which the study was conducted, the type of data that were collected (e.g., survey or interview), how the politicians were contacted and the response rate. This overview shows that a large number of studies examine political elites’ attitudes and role orientations, focusing mostly on politicians from western Europe and the United States and typically using surveys (e.g. Carey et al. 2006; Deschouwer and Depauw 2014; Esaiasson and Heidar 2000; Herrick 2010; Katz and Wessels 1999; Loewenberg and Mans 1998; Thomassen and Esaiasson 2006; Wessels 2005). An early example of a study on political elites’ opinions is the first wave of the Dutch Parliamentary Study (DPS) from 1968, in which all Dutch MPs were interviewed about their attitudes and political behavior. Five waves of the DPS followed, the latest completed in 2006 (e.g., Andeweg and Thomassen 2010; Thomassen and Andeweg 2004; Thomassen and Esaiasson 2006). Another example is the so-called PartiRep study, which ran between spring 2009 and winter 2012 among members of parliament in 15 countriesFootnote 3 and in 73 statewide and sub-state parliaments (Deschouwer and Depauw 2014).

Another strand of work focuses on the behavior of political elites, including their information processing, again studying mostly politicians from western Europe and the United States (e.g. Baekgaard et al. 2019; Helfer 2016; Linde and Vis 2017; Sheffer et al. 2018; Walgrave and Dejaeghere 2017) but also from China (e.g. Meng et al. 2017). These studies often use non-incentivized survey experiments (Baekgaard et al. 2019; Butler and Dynes 2016; Butler et al. 2017; Fatas et al. 2007; Harden 2013; Helfer 2016; Meng et al. 2017; Walgrave et al. 2018). In a non-incentivized experiment, participants do not receive a financial payoff based on their decision; in an incentivized experiment, they do (Ponte et al. 2020).Footnote 4 An example of an incentivized survey experiment with politician participants is Linde and Vis (2017).

Both for quantitative studies with political elites that aim to systematically test a theory and for studies that aim to identify general patterns, achieving a large enough sample size—and thus a sufficiently high response rate—is a challenge (Bailer 2014; Maestas et al. 2003). Response rates varied substantially between the studies we reviewed (see Table 3 in “Appendix 1”): from 10% (for mail or online surveys of US state legislators or members of Congress) (e.g., Fisher III and Herrick 2012) to 90% (for a few MP studies, e.g. Thomassen and Andeweg 2004; Thomassen and Esaiasson 2006). As also noted in previous overviews (Maestas et al. 2003; Bailer 2014), studies that collected data by means of interviews (e.g. Loewenberg and Mans 1998) typically have higher response rates than online surveys (e.g. Butler and Dynes 2016; Harden 2013). The variation in response rates across countries is also large, with response rates being substantially higher in Belgium than in, for instance, Canada or France.

Note that a low response rate is not necessarily problematic if participants do not differ from non-participants (Montgomery et al. 2008).Footnote 5 Groholt and Higley (1972), for instance, found that an extra effort to increase response rates did not result in bias in the outcome of interest, indicating that participants and non-participants were similar. Hoffmann-Lange (2006) added that while getting elites to participate may be difficult, once they do participate they are generally cooperative, which leads to survey responses with fewer missing values and higher data quality than in general population surveys. Somewhat relatedly, Fisher and Herrick’s (2012) experiment on the comparative efficiency of mail versus Internet surveys among US state legislators shows that while the response rates of mail surveys are substantially higher than those of Internet surveys (approximately 30% vs. 10%), both survey modes yield samples that are representative of the full population. Still, to be able to conduct meaningful statistical analyses, the number of observations should be sufficiently high.

Most of the current methodological advice on conducting quantitative research with political elites as participants, including how to obtain a sufficiently high response rate, comes from researchers who draw on their own experience (Dahlberg 2007; Groholt and Higley 1972; Hunt et al. 1964; Robinson 1960; Walgrave and Joly 2018) or who analyze a single study (Montgomery et al. 2008). There are a few exceptions to this general observation. Maestas et al. (2003), for example, analyzed 73 studies with political elites as participants published in seven top political science journals between 1975 and 2000—34 of which primarily used surveys and five a combination of surveys and interviews—to make recommendations for future research. A more recent example is Bailer (2014), who provides an overview of the larger survey projects and briefly summarizes some key methodological points, also drawing on Maestas et al. (2003). A final example of a more methodologically oriented study is Fisher and Herrick’s (2012) field experiment mentioned above. We return to Bailer’s (2014) and Maestas et al.’s (2003) recommendations, as well as Fisher and Herrick’s (2012) findings, later in this article. Since many issues faced by researchers conducting quantitative studies with the participation of political elites are similar to those faced by researchers working in a qualitative tradition, we also make use of the literature on conducting (qualitative) interviews with political elites (e.g., Aberbach and Rockman 2002; Bailer 2014; Efrat 2015; Goldstein 2002; Harvey 2011; Lilleker 2003). In the following, we first discuss our newly compiled dataset on large-scale survey projects (Sect. 3) and present findings from its analysis (Sect. 4). Subsequently, we discuss additional findings from our expert survey in relation to the existing literature and our statistical analysis (Sects. 5 and 6).

3 New dataset on large-scale survey projects

While there are ample hunches about what influences the response rate—see the next section, “How to account for differences in response rates?”—there is hardly any systematic evidence on the effect of study characteristics (like country, survey mode, year, and elite type) on response rates (cf. Bailer 2014). Such evidence is important because it provides insights into how to design the study. Therefore, we provide some first evidence based on a statistical analysis of a newly compiled dataset with 342 survey samples of political elites. To avoid issues with publication bias, we selected samples from eight larger, multi-wave survey projects that publish methodological information regardless of whether the data are subsequently used in a published article. Our analysis focuses on four core variables that emerged from the existing literature and our expert survey and on which we could collect data: (1) survey mode (face-to-face vs. paper vs. online), (2) elite level (national vs. local; candidate vs. elected official), (3) the country in which the survey was fielded, and (4) the year of the survey. While the latter variable is not about the study’s design, it provides insights into the development of response rates over time, allowing us to test some widely held assumptions. To analyze the influence of these four variables separately from the idiosyncratic effect of the specific survey project, we needed survey projects that were large enough to show sufficient variation on these four variables. In this way, our fixed effects model can control for the survey project in question.Footnote 6 We selected all major survey projects that met this criterion and for which we could find information.
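For concreteness, the sketch below shows one way such a dataset could be laid out in Python with pandas. The column names, category labels and values are our own hypothetical stand-ins (randomly generated), not the actual data or coding scheme; the project abbreviations follow Fig. 1.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the survey-sample dataset: one row per fielded
# survey, with the four core variables plus the survey project and the
# response rate. All values are randomly generated for illustration only.
rng = np.random.default_rng(7)
n = 120  # toy size; the actual dataset holds 342 survey samples

surveys = pd.DataFrame({
    "project": rng.choice(["ATES", "BLS", "CCS", "EES", "EPRG", "GLES", "PELA"], n),
    "country": rng.choice(["BE", "DE", "FR", "UK", "SE", "NL", "BR", "JP"], n),
    "mode": rng.choice(["face_to_face", "mail", "online",
                        "mail_and_online", "mixed"], n),
    "elite": rng.choice(["mep", "mep_candidate", "national_mp",
                         "national_candidate", "subnational"], n),
    "year": rng.integers(1987, 2018, n),        # 1987..2017
    "response_rate": rng.uniform(10, 90, n).round(1),  # in percent
})
```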

To compile the dataset, we first consulted the publicly available information. Here, we frequently encountered ambiguities, which we tried to resolve first by looking for additional information in published articles using the dataset and, subsequently, by contacting the researchers involved.Footnote 7 We also extended our database with precursors of these projects for which information was available (Allen and Birch 2012; Evans and Norris 1999; Giebler et al. 2009, 2013; Lovenduski and Norris 2003; McAllister et al. 2018; Norris and Lovenduski 1989, 1995; Schmitt et al. 2002),Footnote 8 especially since these older waves allowed us to evaluate the dynamics of response rates over time. The resulting dataset spans a period of 30 years (1987–2017), covers 58 countries, and includes studies of members of the European Parliament (MEPs), parliamentary candidates, elected national parliamentarians and local politicians, surveyed via face-to-face, paper and online modes.

In our analyses, we used two fixed effects models: (1) one clustered by country with survey project dummies, which allows us to compare survey projects to each other, and (2) one clustered by survey project with country dummies, which allows us to compare countries to each other. One variable on which we cluster (e.g., survey project) might correlate strongly with another variable (e.g., survey mode), making it harder to determine either variable’s influence. In such cases, using a fixed effects model is a conservative choice: controlling for country and survey project diminishes our chance of finding relations between response rate and the variables of interest (survey mode, elite type, year of study). Fortunately, our dataset contains many cases in which different surveys of the same project were fielded in different countries, studied different elites, in different years and using varying survey modes, allowing us to estimate our models with these strong controls.
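A minimal sketch of these two specifications, assuming the illustrative `surveys` frame above and using statsmodels (the actual analysis may well have been run in other software):

```python
import statsmodels.formula.api as smf

# Model 1: survey-project dummies; standard errors clustered by country.
# This version compares survey projects to each other.
m1 = smf.ols("response_rate ~ C(mode) + C(elite) + year + C(project)",
             data=surveys).fit(cov_type="cluster",
                               cov_kwds={"groups": surveys["country"]})

# Model 2: country dummies; standard errors clustered by survey project.
# This version compares countries to each other.
m2 = smf.ols("response_rate ~ C(mode) + C(elite) + year + C(country)",
             data=surveys).fit(cov_type="cluster",
                               cov_kwds={"groups": surveys["project"]})

print(m1.summary())
```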

4 How to account for differences in response rates?

What is the effect of the four key variables emerging from the existing literature—survey mode, elite level, country and year—on differences in response rates across elite surveys? We first briefly summarize the expectations regarding these variables and discuss their operationalization before turning to the results. Regarding survey mode, Bailer (2014) indicates that paper and online surveys typically have lower response rates than face-to-face ones (see also Maestas et al. 2003). What is more, in their overview of studies, Maestas et al. (2003) found no systematic difference between mail and telephone surveys. Fisher and Herrick (2012), conversely, did find a higher response rate for a mail survey compared to an online survey in their experiment on the effect of survey mode. This indicates that the evidence on the effect of survey mode is inconclusive.

In our empirical analysis, we only explore the main differences between survey modes: was the survey mainly distributed by mail (n = 84 surveys), online (n = 104), both mail and online (n = 47)Footnote 9 or face-to-face (n = 98)? We created a “mixed” category for the 10 surveys that did not fit these categories. These latter studies often used phone calls and face-to-face interviews to boost what was otherwise a disappointingly low response rate. A survey was coded as “mixed” if over 10% of the respondents were contacted through a method that did not fit into the other four categories; a sketch of this coding rule is given below. The more intricate differences (for example, whether an introduction letter was sent before the survey request, or exactly how many phone calls were made) are often not easily comparable between the different projects, because they are not consistently reported for each individual survey. The main differences in the way the political elites are approached—other than survey mode—tend to vary with the survey project involved. They are therefore absorbed by the fixed effects survey project dummies, which allows the effect of survey mode to come out more clearly.
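A sketch of how this coding rule could be implemented; the method labels and the handling of the combined mail-and-online category are our own assumptions, not the authors’ documented procedure:

```python
def code_survey_mode(share_by_method: dict) -> str:
    """Code the dominant survey mode for one survey sample.

    `share_by_method` maps contact methods to the share of respondents
    reached that way, e.g. {"mail": 0.6, "online": 0.3, "phone": 0.1}.
    """
    main = {"face_to_face", "mail", "online"}
    # Rule from the text: "mixed" if over 10% of respondents were reached
    # through a method outside the main categories.
    other = sum(s for method, s in share_by_method.items() if method not in main)
    if other > 0.10:
        return "mixed"
    if share_by_method.get("mail", 0) > 0 and share_by_method.get("online", 0) > 0:
        return "mail_and_online"
    # Otherwise: the single main method that reached most respondents.
    return max(share_by_method, key=share_by_method.get)

assert code_survey_mode({"mail": 0.55, "phone": 0.45}) == "mixed"
assert code_survey_mode({"mail": 0.5, "online": 0.5}) == "mail_and_online"
```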

Elite level is the second independent variable we focus on. The literature suggests that more senior politicians respond to survey requests less often than more junior politicians do and that regional politicians respond more often than national ones (Bailer 2014; Deschouwer and Depauw 2014; Walgrave and Joly 2018). Candidates are also believed to respond more often than parliamentarians (Lovenduski and Norris 2003). As far as we know, none of these intuitions has been subjected to a systematic empirical analysis yet.Footnote 10 As with survey mode, we focus on the main differences in elite level, differentiating between national elected politicians (n = 109 surveys),Footnote 11 candidates for a national legislature (n = 63), sub-national (regional or local) elected politicians (n = 13), elected members of the European Parliament (n = 122) and candidates for the European Parliament (n = 38).

The third variable is the country in which the survey was fielded. Cross-national differences in response rates were emphasized by several of our expert-respondents as well as mentioned in the literature (Deschouwer and Depauw 2014; Efrat 2015; de Heer 1999; Walgrave and Joly 2018). Hunt et al. (1964), for example, indicate that institutional differences in legislators’ accessibility may drive differences in response rates across countries. Our dataset includes surveys from 56 different countries, with the number of surveys per country ranging from 1 (Albania, Serbia, Croatia, Israel and Iceland) to 18 (UK); the mean is 6.1 surveys per country.

The fourth and final variable is the year of the survey. The findings on the influence of the timing (year) of the survey on response rates are conflicting. Brick and Williams (2013) found that for household surveys, non-response appears to be steadily increasing over time, suggesting that this might also be the case for elite surveys (see also de Heer 1999). However, Maestas et al. (2003) report that response rates of political elite surveys remained stable over the 1970s, 1980s and 1990s.

We coded each survey for the year in which its data were collected. When data collection spanned multiple years, we coded the year in which it was finished. The earliest survey is from 1987; the most recent one is from 2017. Most surveys (n = 176) were collected between 2000 and 2010. In addition, 16 surveys are from the 1980s, 87 from the 1990s and 66 from 2011 or later. Figure 1 shows the response rates per survey over time, grouped per survey project, revealing considerable variation in response rates between surveys within and across survey projects.
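In code, the year-coding rule and the decade breakdown amount to something like the following sketch (`surveys` is the illustrative frame from Sect. 3; the helper name is ours):

```python
# A survey spanning multiple collection years is coded by its final year.
def code_survey_year(fieldwork_years: list) -> int:
    return max(fieldwork_years)

assert code_survey_year([2009, 2010, 2011]) == 2011

# Tabulate the (illustrative) surveys per decade, cf. the counts above.
print(((surveys["year"] // 10) * 10).value_counts().sort_index())
```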

Fig. 1 Response rates of political elite surveys over time per survey project. ATES Asahi-Todai Elite Survey, BLS Brazilian Legislative Survey (Power and Zucco 2011), CCS Comparative Candidates Survey (2018), EES European Election Studies (2018), EPRG European Parliament Research Group survey (Hix et al. 2016), GLES German Longitudinal Election Study, PELA Parlamentarias de América Latina survey (2018)
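A rough sketch of how a Fig. 1-style plot can be produced from the illustrative data above (matplotlib; the real figure is based on the actual 342-sample dataset):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
for project, grp in surveys.groupby("project"):
    # One marker series per survey project, plotted against the survey year.
    ax.scatter(grp["year"], grp["response_rate"], label=project, s=18)
ax.set_xlabel("Year of survey")
ax.set_ylabel("Response rate (%)")
ax.legend(title="Survey project", fontsize=8)
plt.show()
```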

What explains the variation in response rates between surveys within and between survey projects? Table 1 displays the results of our regression analysis with fixed effects for country and survey project. Response rates are expressed as percentages, allowing the coefficients to be interpreted as estimated percentage points above or below the reference category. For survey mode, face-to-face surveys (= reference category) obtained the highest response rates, followed by surveys sending out both paper and online questionnaires (14 percentage points lower). Surveys that mainly relied on sending paper questionnaires by mail achieved about the same response rates as those that relied mainly on online surveys (21 vs. 23 percentage points lower than face-to-face, p < 0.001, but not differing significantly from each other). The category reported here as “mixed” comprises 10 surveys; as a group, they achieved the lowest response rates. With respect to elite type, surveys of national candidates and sub-national elected officials result in the highest response rates: 18 and 16 percentage points higher, respectively, than a survey of elected members of the European Parliament (= reference category) (both significant at p < 0.05). There is no structural difference between candidates for office and elected officials, as illustrated by the lack of a significant difference in response rates between candidates for the European Parliament and elected MEPs (= reference category). Similarly, national-level elected politicians are neither more nor less likely to respond to a survey than their MEP counterparts (= reference category).
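Continuing the statsmodels sketch from Sect. 3, reading off such contrasts amounts to inspecting the dummy coefficients, each of which is the estimated percentage-point difference from the reference category:

```python
# Mode and elite-type contrasts relative to the reference categories.
# By default the formula interface picks the alphabetically first level;
# use C(mode, Treatment("face_to_face")) to fix the reference explicitly.
print(m1.params.filter(like="C(mode)"))
print(m1.params.filter(like="C(elite)"))
```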

Table 1 Response rates by elite type, survey mode and year of survey

Overall, the results for the year of the survey, presented in Table 1, reveal no clear time trend: after controlling for country, survey project, survey mode and elite type, response rates neither increase nor decrease over time. Figures 2 and 3 in “Appendix 4” visualize the development over time by depicting the response rates for survey projects with multiple waves in the same country sampling the same elite type. Overall, Figs. 2 and 3 confirm that there is neither a general upward nor a downward trend in response rates over time. In addition, these figures suggest that survey fatigue resulting from invitations to successive scientific survey waves does not influence response rates in general.

Finally, “Appendix 5” discusses the differences in response rates between countries controlling for elite type, survey mode, survey project and year. The results indicate that there are structural differences between countries, although there is no straightforward pattern that might explain these differences. We come back to this finding in the discussion.

5 Best practices for soliciting the participation of political elites

In addition to compiling a quantitative dataset on large-scale survey projects, we also collected original qualitative data to identify best practices for quantitative research with the participation of political elites. Specifically, between mid-June 2017 and mid-September 2017, we fielded an expert survey among researchers with experience in this type of research; see “Appendix 2” for the text of the survey. We identified relevant researchers through published work (see Table 3 in “Appendix 1”), through our networks and through suggestions from already contacted researchers. In total, we invited 42 researchers to participate in our survey. After one round of reminders, we had received responses from 23 experts (response rate of approximately 55%). All expert-respondents indicated that they conducted quantitative research with the participation of political elites because they want to improve our understanding of the functioning of representative democracy. In terms of data collected, many respondents have used surveys—sometimes including an experimental component—or have conducted face-to-face interviews. A wide variety of political elites have participated in the studies of the expert-respondents, including cabinet ministers, members of Congress or of (European) parliaments, party group leaders, regional politicians and local politicians. Prospective candidates for elected office have also been included. The range of countries covered is broad, although there is a Western bias (mostly North America, western European countries and Israel).

Since our expert-respondents do not comprise the full population of researchers with experience in conducting quantitative studies with political elites as participants, our sample may not be representative of this full population. However, obtaining representativeness was not our intention. Instead, the expert survey aimed to gather information, largely qualitative in nature, on issues that are typically not discussed in publications but that are relevant for the research community. Below we present our findings, where possible in relation to the existing literature.

5.1 Designing the study

What are best practices regarding the design of the study? Some of the literature advises conducting face-to-face interviews. Such interviews facilitate communication and might help to get the attention of political elites (Bailer 2014; Walgrave and Joly 2018), and political elites seem more willing to accept an interview than to fill in a survey.Footnote 12 This is reflected in findings on response rates across types of study: response rates for interviews are typically substantially higher than for surveys (Bailer 2014; Maestas et al. 2003)—which is also what we find in our own statistical analysis. Going against this recommendation to meet in person, Harvey (2011) indicates that in his experience many political elites preferred the flexibility of an interview via telephone. This suggests that when opting for interviews, a mixed approach may be optimal.Footnote 13 When instead using an online method of data collection, Dahlberg (2007) warns that elites may not regularly use their official e-mail. Asking them by telephone first for their preferred contact mode might be a safer option.

Regarding when to field the study: it should be well-timed. Busy periods of the year (e.g., December) should be avoided, as should election campaign periods (also see Robinson 1960; Maestas et al. 2003).Footnote 14

In terms of closed versus open questions, the literature argues that elites typically do not like closed-ended questions but prefer open questions that allow them to articulate their views in more detail (Aberbach and Rockman 2002). The type of questions may, however, be less important for non-response: Walgrave and Joly (2018) tested for item non-response in their survey of parliamentarians in Belgium, Canada and Israel and did not find any differences between open, closed and experimental questions. Adding an open question at the end of the interview or survey may offer intriguing insights that can inform the current study or prove useful for future research.

5.2 Soliciting the participation of political elites

What are best practices for soliciting the participation of political elites? A common approach is to make first contact by mail (these days usually email, but see “Appendix 3”), followed by phone (cf. Lilleker 2003). Dahlberg (2007) and Goldstein (2002) warn against providing the survey link right away and advise sending out an invitation first. The first contact is key, many expert-respondents stressed. Positive experiences were reported when the principal investigator—preferably a professor from a well-known university and ideally someone whom the political elites know (of)—makes the first contact and also does the follow-ups by phone (see also Aberbach and Rockman 2002). Efrat (2015) added that, for junior researchers, an introduction letter from a more senior researcher might help. One expert-respondent, as well as several methodological articles, advised using Dillman’s tailored design: that is, sending a postcard first and then contacting again (Dillman 2007; Fisher III and Herrick 2012; Maestas et al. 2003).

Several expert-respondents recommended first approaching the speaker of parliament, party leader or local chairperson to solicit the participation of the political elites (also see Groholt and Higley 1972). The looming risk here is that if this person says “no”, you lose a whole party group (one of the respondents indicated that this may be a risk with newer populist parties, but we ourselves have experienced it with other parties, too). To gain access, several expert-respondents stressed, knowing someone who knows someone, et cetera, helps a lot. The same holds for a colleague’s recommendation. What many expert-respondents agreed on is that persistence is key and that following up is crucial. Several of the expert-respondents indicated that they send out 2–3 reminders;Footnote 15 others send one reminder first and then call. Going against the perhaps intuitive idea not to be too pushy—which is supported by many expert-respondents mentioning that they usually follow up 2–3 times—some respondents stressed that you can, and actually should, follow up until you have received a “hard” no (cf. Dahlberg 2007 vs. Walgrave and Joly 2018). One respondent indicated that it may take up to six contact moments before scheduling an appointment; Walgrave and Joly (2018: 2227) even mention up to nine contact moments. To get political elites to participate, it is especially the study’s social and political relevance that should be emphasized, according to the expert-respondents. Groholt and Higley (1972) emphasize that the study’s explanation should be simple and general, since political elites may be more likely to defer the survey to a staff specialist if the explanation is too detailed. Other issues researchers need to address in their invitation letter are anonymity and the duration of the study (Efrat 2015; Fisher III and Herrick 2012; Goldstein 2002; Maestas et al. 2003). Quite a few of the lessons and recommendations of our expert-respondents were rather obvious; for example, researchers must make sure that the invitation letter looks professional (e.g., is printed on official letterhead) and is personalized. “Appendix 3” presents some further practical suggestions from our expert-respondents.

6 Pitfalls, opportunities or other issues and new developments

In addition to asking our expert-respondents how to design the study and solicit the participation of political elites, we also asked them to reflect on pitfalls, opportunities and other issues in conducting quantitative research with the participation of political elites. There were no clear patterns in their responses, but some stood out. One pitfall mentioned was that for surveys filled in without a researcher present (such as most Internet surveys), it might not be the political elites themselves who answer. The easy-to-implement recommendation here is to add a question at the end of the survey asking whether the respondent is a politician or a staff member. The fact that political elites talk to each other was seen both as an opportunity and as a pitfall, since it might help to obtain higher response rates but may also result in spillover effects. Several respondents mentioned the potential of a new initiative to build a global sample of political elites. These elites could be surveyed once or twice per year with a survey combining questions from multiple researchers, avoiding survey fatigue (see also Maestas et al. 2003). The survey’s global nature would enable asking questions that hitherto could not be addressed and would likely generate sufficiently high response rates, since a global partnership could signal quality and status.

Finally, we asked our respondents about new developments in studying political elites that they were working on, or that they had heard of and were enthusiastic about. Here, several of the expert-respondents mentioned the opportunities arising from conducting survey experiments or lab experiments with political elites. It is important here, some mentioned, that researchers do not “pollute” their respondents with too many studies—the global approach mentioned earlier would be a way to accomplish this. Another exciting development mentioned is trying to get politicians to participate in group tasks, despite the obvious logistical hurdles. Several respondents also mentioned the opportunities arising from working with political elites and getting them actively involved in the study.

7 Discussion and conclusion

For a better understanding of the functioning of representative democracy, it is important to conduct research with the participation of political elites. To identify best practices for (1) designing such studies and (2) soliciting the participation of political elites, we compiled a new dataset of large-scale survey projects (n = 342), conducted an expert survey to obtain original qualitative data (n = 23) and integrated scattered existing findings. Table 2 summarizes the key findings on designing the study from our statistical analysis, together with best practices from our expert-respondents on designing the study (top of the table) and on soliciting the participation of political elites (bottom of the table). While the type and number of responses to the expert survey do not allow us to test the recommendations systematically, the expert survey did provide much information that is usually not reported in published work. It also illustrated the potential of this kind of research, as our expert-respondents were unanimously positive about their experience with conducting quantitative research with political elites. While the lessons the researchers learned and what they recommended varied, there was also substantial overlap across the responses and with the existing literature.

Table 2 Best practices on designing the study and soliciting political elites’ participation

Our statistical analysis allowed us to systematically test several variables that were hypothesized and/or found to influence response rates. Interestingly, the findings contradicted some widely held intuitive beliefs about response rates. For example, we did not find that national-level parliamentarians were less likely to respond to surveys than members of the European Parliament, nor that elected officials were less likely to respond than political candidates. Instead, for the variable elite type, we found the highest response rates for surveys of national candidates and sub-national politicians.Footnote 16 For survey mode, we did not find a significant difference between paper surveys and online ones, but we did confirm the expectation that face-to-face surveys have higher response rates than these other two categories. While there were structural differences in response rates across countries, the pattern here was not fully clear. The differences across countries may reflect differences in countries’ survey traditions (Couper and De Leeuw 2003; De Leeuw 2005). Mediterranean countries had somewhat lower response rates, which is in line with cross-national surveys on screening (e.g. O’Neill et al. 1995) and smoking (Huisman et al. 2005). Explaining why this variation across countries exists is an interesting avenue for further research.

Finally, and contrary to what is generally observed in the survey literature (Couper 2017; Stedman et al. 2019; but see Maestas et al. 2003), we did not find a decreasing trend in response rates over time. What might explain this remarkable finding? The surveys in our sample were all drawn from large-scale survey projects, since these allowed us to compare the effects of survey mode, elite level, country and year while also controlling for the specific survey project. Our finding that response rates did not systematically decline over time supports the suggestion by some of our expert-respondents that the reputation of a large-scale survey project helps boost response rates compared to single-shot studies. With the increasing number of surveys with the participation of political elites currently being conducted, it might soon be possible to investigate this reputation effect more directly. A sample based on a very large number of single-shot studies would allow for the large-n assumption that variation in survey project approach is random. Under that assumption, no control dummies per survey project are necessary, allowing the model to assess the influence of the size of the survey project. Such follow-up studies could also further investigate what drives the differences in response rates across countries and elite types.
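As a sketch of such a follow-up design, under the stated large-n assumption one could drop the project dummies and include a (hypothetical) measure of project size instead, continuing the illustrative setup from Sect. 3:

```python
# Hypothetical project-size measure: number of samples fielded per project.
surveys["project_size"] = surveys.groupby("project")["project"].transform("size")

# No project dummies: their variation is assumed random across many
# single-shot studies, so project size can enter as a predictor.
m3 = smf.ols("response_rate ~ C(mode) + C(elite) + year + project_size",
             data=surveys).fit(cov_type="cluster",
                               cov_kwds={"groups": surveys["country"]})
print(m3.params["project_size"])
```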

In this article, we integrated the existing literature and presented findings from an original expert survey to identify best practices for designing quantitative studies and soliciting the participation of political elites. Moreover, we tested some widely held assumptions about what explains the variation in response rates within and between large-scale survey projects, which can also help to successfully launch a quantitative study with the participation of political elites. It is our hope that these contributions will facilitate and encourage the already increasing trend of conducting such studies, thereby providing indispensable knowledge and understanding of the functioning of representative democracies.