Fieldwork experiences and monitoring techniques based on round 9 of the European Social Survey (ESS)

The paper outlines fieldwork monitoring techniques used in the fieldwork phase of the European Social Survey (ESS) in Slovenia in round 9 (2018), in previous rounds, and in other national surveys in Slovenia. The transition to computer-assisted personal interviewing, which we adopted a decade ago at the Public Opinion and Mass Communication Research Centre (POMCRC), University of Ljubljana, has been applied consistently throughout the national research program. The ESS was one of the first surveys through which we established higher standards by using digitized survey tools. Over the years these tools have been continuously developed and the digitization of contact data has been introduced, which has opened up additional possibilities for monitoring and maintaining survey data quality. Aware of unwanted interviewer behaviour and the human factor in recording people's opinions, we anticipate various anomalies that might occur and could be observed and dealt with during the fieldwork and afterwards, drawing on indicators such as the response rate, interviewers' week-by-week work dynamics, the result of the last visit, logical and consistency checks, and the duration of an interview. In the report we present some of these indicators in a comparative context.


Methodological requirements and quality of the survey
Before we illustrate some of the most significant characteristics of the ESS fieldwork dynamics in Slovenia, let us outline some of the initial goals of the project. Firstly, it involves survey data collection with the most sophisticated and verified methodology, comparative analysis of attitudes, values and life practices at the cross-national and longitudinal level, and survey data dissemination. Secondly, the ESS provides open access to its comparative cross-national data for academic and non-academic users. Thirdly, it develops and refines quantitative research methods in a comparative framework (Jowell, Roberts, Fitzgerald, & Gillian, 2007; Malnar & Kurdija, 2010). These starting points provide the basis for the implementation of specific methodological rules and guidelines which aim at maximizing the equivalence of the survey conditions to avoid potential national implementation disparities that may affect data differences. This applies to procedures during the preparation phase as well as the procedures and approaches in the fieldwork phase itself. Interviewer preparation and training is becoming increasingly important, as the quality of the data is strongly linked to the quality of the interviewer's work. More and more effort is being devoted to a harmonized training approach; a clearly defined set of procedures must be respected by all participating countries. From the very beginning of the project, one of the key methodological requirements was the 70% response rate. While ESS specifications still suggest this response rate, it is becoming increasingly obvious that this goal is extremely difficult to achieve in today's general opinion surveys. Many national teams, even in countries with a strong tradition of empirical social research, find such an expectation to be too high.
For many participating countries the goal has therefore shifted from strictly meeting this requirement towards an attempt to improve the response rate as much as possible in each subsequent wave. This, under the present conditions and survey climate, may still seem an extremely challenging task, and further consideration should be given to the overall decline in the level of willingness to participate in such surveys (Hafner, Kurdija, & Uhan, 2017). In this respect, the performance of the Slovenian NC team in the last two rounds (R8 and R9) seems more than satisfactory. Given the circumstances we have just outlined, it was not realistic to expect a significant increase in the responsiveness in round 9, yet the activities and efforts of the Slovenian team continue to go in this direction, regardless of the complexity of the task.
The Slovenian NC team began the round 9 preparations with the now habitual enthusiasm. Guided by national coordinators' meetings, dedicated workshops and supporting materials, our aim was to improve on, or at least achieve, the same result as in the previous round. Given that the response rate does not depend solely on interviewers' work, we also focused on increasing control over the quality of the collected survey and contact data. Through our awareness of unwanted interviewer behaviour and consideration of the human factor in recording people's opinions, we anticipated some anomalies that might occur and could be observed and dealt with during the fieldwork and later on. In spite of the high level of confidence in our interviewers, we nevertheless ran the obligatory checks on any unwanted behaviour, e.g. attempting to 'simplify' their work, interviewing easily accessible individuals, skipping difficult questions or not accurately entering the data into the main survey interview interface or the contact form interface tab. We performed a number of quality control procedures, most of them during the fieldwork, while some were carried out at the end of the fieldwork. This reflects our years of experience in many national and cross-national surveys conducted within the Centre's research program.
As soon as the fieldwork begins, we promptly start quality control, i.e. checking the implementation of the rules regarding survey requirements. Prompt quality control is essential for clarifying various issues while interviewers can still recall specific respondent situations. The fieldwork monitoring consists of the following tasks: following the dynamics of data collection, back-check control, logical and consistency checking, comparison of databases, weekly verification of all collected data, and searching for outliers.
Through fieldwork monitoring we can observe the dynamics of visiting respondents and conducting interviews, as well as detect whether an interviewer with a high percentage of refusals may be inclined to interview only easily accessible respondents. During the entire fieldwork we monitor the number of completed interviews (also per interviewer) at least weekly. In addition, we compare the entries of completed interviews in the main ESS data entry program with the contact form data entries. Besides the daily monitoring of fieldwork progress (comparing it to the projection) we also monitor the dynamics of the sample (also by PSU) and provide a graphic presentation to the whole Slovenian team (i.e. the fieldwork department and national coordination team). The back-check control is performed regularly every second week, with at least 10 % verification by phone (the numbers are obtained independently via the phone book, not from the telephone number given by the interviewer). An additional 25 % of the conducted interviews are verified by paper mail. The respondents are sent a paper questionnaire with a few additional questions regarding the length and difficulty of the survey questionnaire (e.g. which question in the survey they found the most difficult and whether the incentives had any impact on their participation). Respondents send us feedback using a pre-paid, stamped envelope. On average 50 % of the letters are returned. This kind of back checking is also used in other surveys conducted at the Centre, and has been a 'house rule' for many years.
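The weekly comparison between the main survey entries and the contact form entries can be sketched as a simple record-matching routine. The following is only an illustrative sketch: the field names (`idno`, `gender`, `yrbrn`, `intnum`) and the in-memory records are assumptions standing in for the actual entry-system exports, whose structure is not described here.

```python
# Sketch: cross-checking main survey entries against contact form entries.
# Field names (idno, gender, yrbrn, intnum) are illustrative assumptions.

def find_mismatches(main_rows, contact_rows, keys=("gender", "yrbrn", "intnum")):
    """Return a list of (idno, field, main_value, contact_value) discrepancies."""
    contact_by_id = {r["idno"]: r for r in contact_rows}
    mismatches = []
    for row in main_rows:
        contact = contact_by_id.get(row["idno"])
        if contact is None:
            # Completed interview with no matching contact form record at all.
            mismatches.append((row["idno"], "missing_contact_record", None, None))
            continue
        for key in keys:
            if row[key] != contact[key]:
                mismatches.append((row["idno"], key, row[key], contact[key]))
    return mismatches

main = [{"idno": 1, "gender": "F", "yrbrn": 1980, "intnum": 7},
        {"idno": 2, "gender": "M", "yrbrn": 1955, "intnum": 7}]
contact = [{"idno": 1, "gender": "F", "yrbrn": 1980, "intnum": 7},
           {"idno": 2, "gender": "M", "yrbrn": 1956, "intnum": 7}]

print(find_mismatches(main, contact))  # flags the yrbrn discrepancy for idno 2
```

Each flagged discrepancy would then be clarified with the interviewer while the visit is still fresh, in line with the prompt-checking principle described above.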
Due to the considerable duration of the ESS questionnaire, we also anticipate the possibility of data entry errors or typos. In order to deliver clean and credible data, the data are thoroughly checked before depositing (logical and consistency checking). We inspect the average length of the whole interview and the duration of the 'Core module B' per interviewer; we look for outliers and analyse them. By the same principle we analyse missing values (per interview and per interviewer). For a clearer overview, we check interview length, the time duration between timestamps, and the number of missing values per interview on one spreadsheet. Contextualization is very important: we take into account the interviewer's added comments to help us interpret any potential outliers. As we do not want to draw any wrong conclusions in evaluating an interviewer's work, we take great care in every such situation.
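Screening interview durations for outliers can be sketched with a simple standard-deviation rule. This is a minimal sketch under assumed inputs: the threshold (two standard deviations) and the example durations in minutes are illustrative choices, not the Centre's actual criteria, and any flagged interview would still be interpreted together with the interviewer's comments, as described above.

```python
import statistics

def flag_outliers(durations, k=2.0):
    """Return indices of interviews whose duration deviates more than
    k standard deviations from the mean duration."""
    mean = statistics.mean(durations)
    sd = statistics.stdev(durations)
    return [i for i, d in enumerate(durations) if abs(d - mean) > k * sd]

# Assumed interview lengths in minutes; one is suspiciously short.
durations = [58, 61, 55, 60, 62, 57, 25, 59]
print(flag_outliers(durations))  # flags the 25-minute interview (index 6)
```

The same routine can be applied per interviewer or to the gaps between timestamps (e.g. the 'Core module B' section), so that a consistently fast interviewer stands out rather than a single rushed interview.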
The following consistency checks are also applied during the fieldwork:
- Checking the year and age entries in modules D and F (timing of life and demography) for wild codes (interviewer exhaustion can sometimes lead to wrong entries)
- Logical inconsistencies in the relationship of each household member to the respondent
- Logical inconsistencies regarding education and years of full-time education completed
- Checking minimum and maximum values where applicable
- Checking the range of discrepancies (for each interviewer) between B27 and C1 (life satisfaction and happiness) or other values variables
- Straight-lining (module B, Schwartz)

Finally, in order to verify that the right respondents were selected for interviewing, we compare all databases (main survey data, contact form data, the initial sample list and even paper notes from interviewers) on a daily basis. We compare entries for gender, age, year of birth, date of the conducted interview, interviewer ID, and the final outcome, i.e. the result of the last visit to each respondent. It should be mentioned that when we print out the PSU sample list (for the interviewer's purposes only) with individual names, addresses and years of birth, we omit the last digit of the year of birth. While this is in compliance with the GDPR, it is also another point of control over interviewers' accuracy and veracity.
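The straight-lining check in the list above can be sketched as follows. The respondent IDs and answer vectors are invented for illustration, and the rule shown (an identical answer to every item in a battery) is only the simplest possible operationalization of straight-lining; a real screening might also flag near-constant patterns.

```python
# Sketch of a straight-lining check on an item battery (e.g. the Schwartz
# values items in module B). Respondent IDs and answers are made up.

def straight_lined(answers, min_items=5):
    """True if a respondent gave the identical answer to every item
    in a battery of at least min_items items."""
    return len(answers) >= min_items and len(set(answers)) == 1

respondents = {
    101: [3, 3, 3, 3, 3, 3],   # suspicious: identical answer throughout
    102: [2, 4, 3, 5, 1, 2],   # normal variation across items
}
flagged = [rid for rid, answers in respondents.items() if straight_lined(answers)]
print(flagged)  # [101]
```

As with duration outliers, a flagged respondent or interviewer is a prompt for closer inspection (and contextualization via interviewer comments), not automatic grounds for discarding an interview.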
During the fieldwork period each record was checked, and if any errors or anomalies occurred we strove to correct them as promptly as possible. The verification and correction procedure was based on the following priority scheme: the sample list is taken as the most reliable source of information; any illogical or inconsistent data is first checked with the interviewer and, finally, with the respondent.

R8 and R9 fieldwork dynamic
As noted, we carefully monitor the dynamics of the fieldwork in each round, and in the final section we present a chart indicating the number of completed interviews, along with the response rate line, for the last three rounds. Experience from previous rounds helps us make the projection for the next round as accurate as possible.
The sign-off procedure for the projection of the fieldwork dynamics was introduced in round 9. Following the ESS rules, the projection was planned based on R8, with a completion date set for mid-January 2019. In round 8, however, we recruited for the CRONOS web panel and, accordingly, sped up the entire fieldwork period in order to recruit panellists as fast as possible for the Welcome survey. The fieldwork period set at the beginning was therefore shorter than in R9. The gap between the projection and the actual response rate can be seen in the next chart.
Graph 2: Fieldwork response rate R8 and R9 (projection and actual)

Bearing in mind the specifics of R8, if we compare the fieldwork dynamics of R9 (red line) with R7 (blue line), the response rate in R9 was comparable, increasing slightly with each round from R7 through R8 to R9. This could be due to the incentives introduced in Slovenia in round 8. In round 8 the incentive for respondents was a 7 € gift card from one of the major retail chains in Slovenia. The interviewers got an extra payment of 5 € per recruited respondent; 60 % of ESS main survey respondents were recruited. In round 9 the incentive was even higher: an 8 € gift card. A major change in round 9 was that we had access to the complete sample and no longer needed to deal with the opt-out list in the sample.
Graph 3: Response rate R7, R8 (sample with and without opt-out list) and R9

To sum up, we try to assure data quality as much as possible by engaging all quality control resources before, during and after the fieldwork phase. Based on our experience, the key points for conducting fieldwork as efficiently as possible (with a high response rate and data quality) are:

External factors:
1. Proper and stable funding of the ESS survey, including the NC team and fieldwork.
2. An individual sample from the national central population register.
3. Funding which enables the implementation of incentives.
4. The fact that the survey is carried out by a known and acclaimed institution in the national context.

Internal factors:
5. Regular and detailed fieldwork monitoring.
6. Close cooperation between the NC team and the fieldwork part of the team.
7. Years of experience in conducting national and cross-national surveys.
8. Loyal and satisfied professional interviewers.
9. Knowledge and understanding of interviewing work (also based on our own experience).
In addition, it must be pointed out that the quality of contemporary survey data collection depends largely on the use of new technological solutions. In the last decade the conduct of social science research has been strongly linked to the computerization of all stages of the survey process. The methods and tools used in the Centre's research program follow this trend in their entirety. This provides the conditions for conducting national and cross-national surveys at the appropriate level. However, we are aware that every country has its own specific conditions. In Slovenia, for example, we have, to a certain extent, a friendly environment for conducting social surveys, at least as far as the ESS is concerned. In that regard we keep our fingers crossed for such favourable conditions in future rounds in Slovenia. Nevertheless, we are aware that all our procedures and approaches may not be applicable everywhere.