The dynamics of political and affective polarisation: Datasets for Spain, Portugal, Italy, Argentina, and Chile (2019-2022)

The TRI-POL project explores the triangle of interactive relationships between affective and ideological polarisation, political distrust, and the politics of party competition. The project comprises two complementary groups of datasets with individual-level survey data and digital trace data collected in five countries: Argentina, Chile, Italy, Portugal, and Spain. The datasets comprise three waves carried out over a six-month period between late September 2021 and April 2022. In addition, the survey datasets include a series of experiments embedded in the different waves that examine social exposure, polarisation framing, and social sorting. The digital trace datasets include variables on individuals' behaviours and exposure to information received via digital media and social media. These data were collected using a combination of tracking technologies that the interviewees installed on their different devices, and are matched with the individual-level survey data. These datasets are especially useful for researchers who wish to explore dynamics of polarisation, political attitudes, and political communication.

• The survey data can be easily matched with the digital trace data. In the materials on the project website, there is a document entitled "FAQs - TRIPOL digital trace datasets" with information on how to merge the datasets. Merging these datasets results in a unique comparative dataset.
• To the best of our knowledge, the online survey data contain the most exhaustive list of indicators of affective and ideological polarisation to date, allowing for the construction of many individual-level indicators of polarisation.
• Those researching electoral behaviour can benefit from these data to understand the micro-foundations and contextual factors that could alter affective political polarisation in today's democracies, especially regarding the influence of digital media consumption and social media activity.

Objective
The objective of this dataset is to provide researchers with high-quality data on political polarisation, political trust, party competition, and political communication. To our knowledge, the datasets contained in this project include the most exhaustive batteries of survey questions on polarisation, both ideological and affective, as well as trace data on individuals' online behaviours, which reflect online activities more accurately than self-reported measures. Both the trace and the survey data in the TRI-POL project are available for five countries over three waves, providing researchers with much greater leverage in analysing variation over time and across contexts.

Data Description
Within the TRI-POL project there are two groups of datasets. The first group consists of survey data collected from the panel participants in each of the five countries, available in Excel (.xlsx) format. The survey data files are named "TRI_POL_XX", where the XX corresponds to the country code (AR for Argentina, CL for Chile, ES for Spain, IT for Italy, and PT for Portugal). In the second group are datasets for each of the five countries with the digital trace data collected from participants' online activities (available in .csv format). Each of these datasets is named "Passive_meter_data_XX_WX", where the first XX corresponds to the country code and the second X corresponds to the wave number. There are fifteen digital tracking datasets in all: three datasets, one for each wave, for each of the five countries. Both groups of datasets as well as additional supplementary files such as questionnaires and data protocols are available (in .pdf format) at the project's Open Science Framework (OSF) site (DOI 10.17605/OSF.IO/3T7JZ).
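As a minimal sketch of how the two groups of datasets could be combined, assuming a shared panellist identifier column (the actual merge key is documented in the project's FAQ file on the OSF site), a left merge keeps all survey respondents while flagging those without trace data. All column names and values below are hypothetical stand-ins, not the dataset's real variables:

```python
import pandas as pd

# Hypothetical stand-ins for TRI_POL_ES.xlsx and Passive_meter_data_ES_W1.csv;
# "panelist_id" is an assumed name for the shared identifier.
survey = pd.DataFrame({
    "panelist_id": [101, 102, 103],
    "left_right_self": [3, 7, 5],
})
trace = pd.DataFrame({
    "panelist_id": [101, 103],
    "ALL_seconds_twitter": [540.0, 0.0],
})

# A left merge keeps every survey respondent; tracked-only variables are
# NaN for respondents without the meter installed.
merged = survey.merge(trace, on="panelist_id", how="left", indicator=True)
```

The `indicator=True` flag adds a `_merge` column, making it easy to check how many respondents have matching trace data.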
The survey datasets contain data from micro-level online panel surveys of the voting age population in five countries, with three waves carried out over a six-month period between 2021 and 2022. The survey datasets include three embedded experiments (one per wave). Table 1 shows a list of the main variables in the survey data organised by topic. The surveys include questions on socio-demographic characteristics; self-reported voting behaviour and intentions; non-electoral political participation; membership and involvement in a broad array of social and political associations and organisations; an equally ample catalogue on media consumption and internet and social network usage; a rich battery of measures of political and affective polarisation; a wealth of variables tapping into political opinions, attitudes, and orientations; a series of indicators of trust in political parties and institutions, as well as in people of several relevant social groups; and a profuse set of evaluations of the economic and political situation. Most of these questions are asked in all waves, enabling both the study of their evolution and of the impact of their changes at the individual level on other variables.
The battery of political and affective polarisation indicators is especially rich in that it includes sentiments towards candidates, voters, and relevant social groups, as is the battery of opinions on salient policy issues and questions on the placement of respondents and the most important political parties on the left-right ideological dimension. To easily distinguish between the different groups of variables, prefixes are attached to the variable names: the prefix 'g' indicates global variables, which apply to all waves, such as the panellists' unique identification numbers; 's' identifies sociodemographic variables; and 'esmP' indicates experimental variables.
[Table 1 (excerpt): political knowledge (waves 1 and 3; p38 battery); main problems facing the country (waves 1 and 3; p3_); opinions on the current situation on different issues (waves 1 and 3; p10a unemployment, p10b education, p10c health, p10d immigration, p10e pensions, p10f corruption, p10g social inequality, p10h the COVID-19 pandemic; p11 satisfaction with the current national government); media consumption (self-reported) and trust in different media (waves 1 and 3; p21 battery).]
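The prefix convention can be used to slice the survey data by variable group. A small illustration, with column names invented for the example (the real names follow the documented 'g', 's', and 'esmP' prefixes but are listed in the codebooks):

```python
import pandas as pd

# Illustrative column names following the documented prefixes:
# 'g' = global, 's' = sociodemographic, 'esmP' = experimental.
df = pd.DataFrame(columns=["gID", "sGender", "sAge", "esmPgroup", "p11"])

global_vars = [c for c in df.columns if c.startswith("g")]
socio_vars = [c for c in df.columns if c.startswith("s")]
experim_vars = [c for c in df.columns if c.startswith("esmP")]
```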
In each of the three survey waves, a different embedded experiment provides additional information on the causal mechanisms behind affective polarisation: (1) an experiment on media effects and social media exposure; (2) a trust game exploring the effect of priming political polarisation or populist political frames; and (3) a conjoint experiment assessing the effect of partisan and ideological polarisation on social and political tolerance.
The web tracking datasets include variables measuring the daily number of visits or seconds spent on a set of given webpages or groups of webpages, as well as on specific content (e.g., political articles) within those webpages. Specifically, they include measures of the average number of visits or time spent on the top 50 most popular news media outlets in each country, as well as on the most used social media networks globally. They also include the visits and seconds spent on URLs defined as opinion articles, news in general, and national, regional, international, and political news. In addition, the datasets include variables on the visits and seconds spent on specific Twitter profiles (the ones used for the experimental design). All variables included in the datasets are repeated three times: one variable documenting the average behaviour during the 15 days before starting the survey, one documenting the average during the 15 days after starting the survey, and a general one documenting the average behaviour during the entire month of tracking. The prefixes "PRE", "POST" and "ALL" are used to distinguish the tracking period used to compute the averages.
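The PRE/POST pairing lends itself to within-person comparisons around the survey start. A sketch with hypothetical variable names that merely follow the documented prefix convention:

```python
import pandas as pd

# Hypothetical variable names following the PRE/POST prefix convention;
# values are average daily visits, invented for the example.
df = pd.DataFrame({
    "PRE_visits_facebook":  [4.0, 0.0, 2.5],
    "POST_visits_facebook": [6.0, 1.0, 2.0],
})

# Within-person change in average daily visits around the survey start.
df["delta_visits_facebook"] = (
    df["POST_visits_facebook"] - df["PRE_visits_facebook"]
)
```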

The Survey Design
In the sections below, we explain in more detail the methodological strategy for both the survey and the digital web tracking data collection. In Section 2.1, we begin by discussing the timing of the waves in each country, summarised in Table 2, followed by the recruitment, participation, and retention of respondents across waves. Table 3 provides information on the number of respondents in each sample and the proportion of those respondents who were tracked. We then explain recruitment, acceptance, rejection, and overall participation rates, including why certain participants were rejected. These figures are presented in Table 4, followed by the attrition rates in Table 5. We conclude Section 2.1 with a brief description of the experiments embedded in the different waves of the survey. In Section 2.2, we move into more specific information on the collection of the digital tracking data. One of the issues we address is how we handled the undercoverage of devices, which occurs when one or more of a participant's devices are not covered by the tracking application; the proportions affected are found in Table 6. We also explain the issue of error-induced non-observations in the variables of the digital tracking dataset, with the proportion of respondents with these types of non-observations in Table 7.

The Survey Panel Design and the Timing of the Waves
The survey data are comprised of a three-wave online panel survey of the voting age population conducted between September 2021 and April 2022 in Spain, Italy, Portugal, Chile, and Argentina. Table 2 displays the timing of the waves for each country.

The Survey Administration and Data Collection
The survey was administered by Netquest using their large online non-probabilistic population-representative panel. Netquest ( https://www.netquest.com/es/home/encuestas-online-investigacion ) is an online people-based data collection company with over two decades of experience. Founded in Barcelona, Netquest currently conducts public opinion studies in 27 countries in Europe and the Americas. To do so, the company relies on online opt-in panels of people who are willing to participate in surveys and to share data about their online activity. Netquest currently works with various market research companies, public institutions, and universities worldwide. Specifically, data were collected using Netquest's metered panels, which provide a pool of individuals with digital tracking solutions installed on at least one of their devices, allowing us to complement their survey answers with information about their online behaviours. When panellists agree to join the metered panel, they must install the meter on at least one device (PC, tablet, or smartphone) and start sending information (passively) to Netquest to become part of the metered panel. Participants receive more incentives if they install the meter on more devices (up to three).

The Recruitment and Data Cleaning Process
For the recruitment of participants, a non-probability quota sampling method was applied, ensuring that the sample reflects the characteristics of the general population of each country in terms of region of residency, gender, and age. The quotas were derived from official statistics of each country. Respondents were selected from a population of panellists who already had the behavioural tracker installed at least six months before the study. Panellists from the Netquest metered panels have, knowingly and consensually, installed digital tracking solutions in at least one of their devices. Due to the difficulty in filling some of the specific cross-quotas with participants from the metered panel, in some cases the tracked participants were supplemented with non-metered panellists. Table 3 shows the number of participants in each country and wave that had the meter installed in at least one mobile (smartphone or tablet) or PC device compared to the full sample of survey respondents.
Interviews were discarded for the following reasons:
a. Declined participation: a small fraction of those who had initially accepted the invitation (overall, less than 1.9%) declined to participate after learning the goals of the questionnaire or the institution responsible for the study.
b. ISO unmet: some interviews (overall, 0.4% of those who had accepted to participate) were discarded because they failed to meet ISO quality standards. Interviews are labelled "ISO unmet" when they fail at least one of the following criteria: (1) the information on gender or age provided in the survey is not consistent with that previously available in the database; (2) the response time is considered fraudulent, i.e., the survey is completed in less than 20% of the estimated time; (3) the individual failed to pass an attention check or 'trick' question.
c. Uncompleted interview: a somewhat larger number of interviews (overall, 836, i.e., 7.2% of those who had accepted to participate) were discarded because they were not fully completed.
d. Invalidated interview: only 1 case across all waves of those who had accepted to participate was discarded due to software issues (i.e., the program did not save the answers to some questions).
e. Closed: one of the largest groups of discarded interviews (859, or 7.4% of those who had accepted to participate) comprised those who completed the interview but did so only after the field had been closed.
f. Quota full: finally, 947 interviews (8.2% of those who had accepted to participate) were discarded because the quota for a respondent's profile had already been filled.
The completion rate (i.e., the proportion of those who successfully completed the survey after accepting the invitation) ranges from 13.0% in the first wave to 97.0% in the third, with an average of 69%.
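Two of the ISO-style cleaning rules described above (the 20%-of-estimated-time threshold and the attention check) can be sketched as a simple filter. The data and column names are invented for illustration, not taken from the datasets:

```python
import pandas as pd

# Toy interview log; durations in seconds, 'estimated' is the estimated
# survey duration (invented values).
interviews = pd.DataFrame({
    "duration": [600, 90, 480],
    "estimated": [500, 500, 500],
    "passed_attention_check": [True, True, False],
})

# A response time below 20% of the estimated duration is treated as
# fraudulent; failing the attention check also discards the interview.
too_fast = interviews["duration"] < 0.2 * interviews["estimated"]
kept = interviews[~too_fast & interviews["passed_attention_check"]]
```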
The Data Protocols of the five country studies, found on the OSF project website, contain information on the main socio-demographic characteristics of respondents by panel wave. The sociodemographic features of the participants are remarkably stable across the surveys and very close to the official statistical records.
For each country wave, Table 4 shows the number of invited participants, those who accepted the invitations, and those who failed to complete the questionnaire for various reasons. The overall acceptance rate ranged from 34.8% in Argentina to 77.9% in Chile. Participation rates are lowest for the first wave in all countries. In accordance with Netquest's standard procedures, the original data retrieved from the participants were cleaned to conform to ISO procedures. In particular, some interviews were discarded either because the socio-demographic profiles did not match those in the database in terms of gender or age, because the time a respondent took to complete the whole survey was at least 20 percent lower than its estimated duration, or because individuals failed to pass an attention check or 'trick' question aimed at confirming that the participant was paying attention. The combined number of interviews dropped because any of these ISO criteria was unmet was remarkably low: 0.4% in Chile, Argentina, and Portugal; 0.5% in Spain; and 0.6% in Italy. A somewhat larger number of interviews was discarded because they were incomplete, i.e., they had been started but not finished, or were invalidated because the program did not save the answers to some questions (ranging from 24% in Italy and Chile to 35% in Spain). Finally, surveys were also removed because they had been completed after the data collection window had closed, or because the quota for a respondent's profile had already been filled (only wave 1). After taking these different situations into account, the effective number of completed surveys across countries oscillated between 13% and 34% in wave 1 and between 93% and 98% in waves 2 and 3.

Wave Attrition
To illustrate the rates of attrition across waves, Table 5 displays the number of respondents who completed questionnaires in each wave (row 1) and in each pair of consecutive waves (row 2), the immediate rate of permanence from one wave to the next (row 3), the number of respondents who completed a wave's questionnaire and all the former ones (row 4), and the corresponding cumulative rate of permanence (row 5). The second wave is nested in the first, in that respondents of the second wave are a subsample of the first wave. Hence, the second wave's figures in rows 1, 2, and 4 of each country are identical (in the case of Spain, n₂ = 1,162). Likewise, the third wave is nested in the second wave and therefore also in the first (n₃ = 1,080). The immediate rates of permanence in row 3 capture the proportion of panellists in each wave who completed the survey in the next one. These rates are considerably high, ranging from 81% to 93%. The cumulative rates of permanence in row 5 capture the percentage of first-wave panellists who completed each wave; hence, the higher they are, the lower the attrition in the panel, which is one of the main concerns with micro-panel survey data. The cumulative rate of permanence of the third wave indicates that the percentage of those who completed the first, second, and third waves varies between 69% and 84%.
The three waves in each country were designed to be successively nested. For example, in Spain, the 1,289 completed interviews in wave 1 are also the cumulative number of completed interviews at this stage. Wave 2 was effectively nested in wave 1; therefore, all those who completed wave 2 (1,162) had also completed wave 1. This means that 1,162 is also the number of consecutively completed interviews (i.e., of those who completed the current wave, in this case wave 2, and the immediately previous wave, in this case wave 1). Moreover, 1,162 is also the number of cumulatively completed interviews (i.e., of those who completed the current wave and all the previous ones). Likewise, wave 3 was nested in wave 2, meaning that the number of completed interviews in wave 3 (1,080) is also the number of consecutively completed interviews at this stage and, given that wave 2 was in turn nested in wave 1, it is also the number of cumulatively completed interviews. This nesting strategy was used in each of the five countries.
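Because each wave is nested in the previous one, the permanence rates of Table 5 reduce to simple ratios of completed-interview counts. A sketch using the Spanish figures quoted above:

```python
# Completed interviews per wave for Spain, as reported in the text; due
# to nesting, these counts are also the cumulative counts.
completed = {1: 1289, 2: 1162, 3: 1080}

# Immediate rate of permanence: share of wave-1 respondents who also
# completed wave 2 (Table 5, row 3).
immediate_w2 = completed[2] / completed[1]

# Cumulative rate of permanence: share of wave-1 respondents still
# present in wave 3 (Table 5, row 5).
cumulative_w3 = completed[3] / completed[1]
```

Both values fall inside the ranges reported in the text (81–93% immediate, 69–84% cumulative).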

Basic Strategy for DK/NA
In dealing with responses that indicate uncertainty (i.e., "don't know", "no option", or "decline to answer"), we follow recent literature on minimizing item nonresponse without adding additional error [1][2][3] or violating ethical norms of voluntary participation. In line with recommendations from Derouvray and Couper [4], we provide respondents with the option of skipping a question, while reminding them of the importance of their responses and requesting that they confirm they would like to skip the question and continue with the survey. We did not provide respondents with a "don't know" option except for questions that require some specific knowledge, where we expected such a response could be accurate and appropriate, such as questions on placing parties on the left-right scale or questions on political knowledge. More details on the strategy for dealing with nonresponse can be found in a previous Data in Brief article by the PI, Mariano Torcal, and his research team [5].

The Experiments Embedded in the Different Waves
A different survey experiment was embedded in each of the three waves of the survey in all five countries of the study, maintaining as much consistency in the content of the experiments as possible across cases. More information on all these experiments can be found at https://osf.io/3t7jz/?view_only=22e669dfd9a946d5b706e0efcd584d7c .
(a) Wave 1: An experiment to measure the influence of exposure to social media on political attitudes
The experiment embedded in the first wave of the TRI-POL survey was intended to test the effect of exposure to different Twitter accounts on a set of relevant political attitudes, such as political interest, affective and ideological polarisation, and political trust. Participation was restricted via invitation. Specifically, respondents were invited to follow one or two Twitter accounts from a list provided to them over a period of seven days. Two experimental groups were created with different lists of Twitter accounts. Using a computer algorithm, participants were randomly assigned either to the first group, with a list containing accounts of political leaders of the main parties, or to the second, with a list of institutional accounts. After seven days, respondents who participated in the experiment were re-contacted, answered some questions about their exposure to and the content of the selected Twitter accounts, and completed the survey questionnaire about their political attitudes and opinions. Information was collected with the behavioural tracker to verify respondents' activity on Twitter.
(b) Wave 2: An experiment on "polarisation, populism, and interpersonal trust in five countries"
This study examines the effects of priming political polarisation or populist political frames on political polarisation, as measured via interpersonal trust discrimination in behavioural games (i.e., trust games) and measures of political affect (feeling thermometers). Via simple randomisation, respondents were assigned to one of five groups: Control, Polarising Treatment, Unifying Treatment, Dispositional Issue Frame (populist), and Situational Issue Frame (non-populist).
(c) Wave 3: A conjoint experiment on social sorting
There is increasing attention in the comparative literature to the origins and consequences of affective polarisation, a phenomenon referring to citizens' growing sympathy towards co-partisans and antagonism towards supporters of other parties. Partisan animosity is reflected in a reluctance to engage with opposing partisans in non-political settings, lower levels of general social trust, and discriminatory behaviour. Affective polarisation tends to be fuelled by a strengthening of political and social identities and, in particular, by the increasing alignment of social identities along party lines, i.e., 'social sorting'. However, most of the comparative research exploring how social and political identities reinforce affective polarisation is based on observational studies, which do not allow testing the causal impact of these identities. This study tackles this limitation by using conjoint experiments in five multiparty systems which ask respondents to choose among hypothetical profiles of families moving in next door, whose attributes vary along a number of characteristics including partisan support, ideology, ethnicity, immigration status/country of birth, sexual orientation, and other country-specific relevant features. We also included a "placebo" attribute (factor), such as "pet owner/no pet owner", that can serve as a baseline for estimating the effects of the other factors in selecting a specific neighbour profile. Asking respondents to choose between neighbours aims to capture social intolerance, i.e., the unwillingness to accept, through co-existence, persons or groups with values and behaviours different from one's own, which constitutes a fundamental threat to social cohesion.

General Approach
Data about the participants' online behaviours were collected for the 15 days prior to and after the participants started each survey wave. The behaviour tracker captured each URL (or app for mobile devices) accessed by the panellists, with timestamps for when the panellists first visited the URL, and the number of seconds in which the URL remained active in the browser. A URL was considered active when it was the one being displayed in the browser, meaning that other URLs that may be open in other tabs were not considered to be active. The number of active seconds was measured as the time between the URL (or app) first becoming active in the browser (i.e., displayed to the respondent) and a different URL (or app) becoming active in the browser. A visit was defined as any opened URL lasting one second or more.
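The visit and active-seconds definitions above can be illustrated on a toy event log. Timestamps and URLs are invented; the last event's duration is unknown without a closing event, in line with the definition:

```python
from datetime import datetime

# Toy browsing log: (timestamp, url) events marking when each URL became
# active in the browser.
events = [
    (datetime(2021, 10, 1, 9, 0, 0), "elpais.com/politica"),
    (datetime(2021, 10, 1, 9, 0, 45), "twitter.com"),
    (datetime(2021, 10, 1, 9, 2, 0), "elpais.com"),
]

# Active seconds of each URL = time until the next URL becomes active.
durations = {}
for (t0, url), (t1, _) in zip(events, events[1:]):
    durations[url] = durations.get(url, 0) + (t1 - t0).total_seconds()

# A "visit" is any opened URL active for one second or more.
visits = sum(1 for seconds in durations.values() if seconds >= 1)
```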
TRI-POL researchers did not have access to the raw data with information about all URLs and apps visited by panellists and their respective timestamps, to minimise any potential ethical concern linked with this project. Instead, a list of variables and guidelines on how to compute them was developed and sent to Netquest to implement. The guidelines are available in a document on the OSF project website. Netquest created and delivered several anonymised structured datasets which complied with our specifications. Those databases were then processed by members of the TRI-POL project to create the datasets described here, i.e., three separate datasets for each country, one for each wave. The data collection, processing, and analysis approach was designed following the best practices recommended by Bosch and Revilla [6] in the Total Error framework for digital traces collected with Meters (TEM).

Tracking Technologies Used
Participants were tracked on iOS and Android mobile devices, and on Windows and Mac computers, using the tracking solutions provided by Wakoopa ( https://www.wakoopa.com/ ). Specifically, Windows and Mac devices were tracked with desktop apps and/or web browser plug-ins, Android devices through apps, and iOS devices through manually configured proxies. Information about which technologies were used to track each participant was requested from Netquest and is provided in the datasets. Table 1 of the passive meter data protocols of each country on the OSF project website provides more information about the characteristics, benefits, and limitations of the different technologies used.

Tracking Undercoverage
All the variables in the web tracking datasets measure behaviours at the individual level (e.g., how much time someone spends reading news articles). Nonetheless, to achieve a complete picture of what a participant does online, we would need to track all the devices that they use to go online. If this is not achieved, only a partial image of a participant's online behaviour is observed, which can lead to errors such as the underestimation of univariate estimates. This is known as tracking undercoverage [7].
Given that we were using a sample of participants who were already being tracked by Netquest, it was beyond our control to make sure that every participant was being fully covered. Following Bosch and Revilla's [7] recommendations in the Total Error framework for digital traces collected with Meters (TEM), we report the proportion of participants affected by tracking undercoverage.
To identify which participants were not tracked on all the devices they use to go online, we followed the best practices outlined by Bosch and Revilla [8]. We needed two pieces of information: which devices were tracked, and which devices participants used to go online. The first piece of information was obtained using paradata, from which we were able to identify the technology with which participants were being tracked, the type of device, the OS, and whether the device was a tablet or a smartphone. The second piece of information was obtained by asking participants which devices and browsers they had used to access the internet during the 15 days before the first survey wave (see the questionnaires on the OSF project website).
Combining both sources of information, we were able to identify when an individual was not fully covered in terms of devices. Table 6 shows the proportion of individuals with at least one untracked device for waves 1 and 3.
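A sketch of this combination step, with invented flag names standing in for the paradata (devices tracked) and self-report (devices used) information:

```python
import pandas as pd

# Toy combination of paradata and self-reports; column names are
# illustrative, not the dataset's.
df = pd.DataFrame({
    "uses_pc": [True, True, False],
    "uses_mobile": [True, True, True],
    "tracked_pc": [True, False, False],
    "tracked_mobile": [True, True, True],
})

# A participant is undercovered if any device they report using is not
# tracked by the meter.
df["undercovered"] = (
    (df["uses_pc"] & ~df["tracked_pc"])
    | (df["uses_mobile"] & ~df["tracked_mobile"])
)
```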

Dealing with Non-Observations
The variables in the web tracking datasets represent the time spent on, or the number of visits to, specifically defined behaviours that we observed. These variables were created by Netquest, querying information from the entire raw dataset of all the URLs and apps that participants visited during the tracking period. For instance, a query could search the number of times that a specific participant ID had entered the webpage "elpais.com". During this data extraction process, only available tracked traces could be used. For those cases in which no traces were observed (e.g., no visits to elpais.com), the only reportable information was that no observation was found containing the queried information. Nonetheless, given the non-reactive nature of metered data, clear and transparent guidelines are needed when deciding what value to attribute to non-observations. Specifically, a lack of behaviour can be due to a true absence of behaviour (i.e., the participant never visited the defined URLs/apps in the query) or to a failure to capture the data (e.g., because behaviours happened on non-tracked devices).
When tracking errors are present, especially tracking undercoverage, it is not possible to clearly discern whether a lack of observed behaviours should be treated as real (e.g., 0 seconds visiting Facebook) or as missing (i.e., a real behaviour happened, but we did not observe it) without auxiliary information. To account for this, and knowing that our dataset was affected by errors, we implemented an approach to identify when a specific observed lack of behaviour in the TRI-POL database was true or induced by errors, following the TEM's recommendations [7]. The approach followed this step-by-step process:
1. For participants with all devices and browsers tracked, which we identified thanks to the information presented in Section 3.2.3, we considered their non-observations as real and coded them as 0 (i.e., 0 visits / 0 seconds).
2. For participants identified as partially untracked, we considered their non-observations as dubious.
3. Dubious non-observations were then identified as true or not using auxiliary information. Specifically, we asked participants whether, during the 15 days before the survey, they had visited some URLs/apps of interest with a device other than the ones we knew they were being tracked with. The question was personalised for each participant depending on the devices tracked according to the paradata. If they said "yes" for any domain, we considered their lack of behaviour as error (coded as -2, labelled "Error-induced lack of behaviour"). If they said "no", we considered it a true lack of behaviour (hence, 0 seconds or visits). Asking this for all the variables in the dataset would have made the survey too long, so we asked this question only for the variables most important for most papers: those related to the consumption of Facebook, Twitter, and the 10 most popular news media outlets in each country.
4. For all the other variables in the dataset, when a participant was identified as partially untracked, we considered their non-observations as uncertain (coded as -1, labelled "Uncertain lack of behaviour"), given the impossibility of discerning whether the lack of behaviour was true or induced by errors.
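The decision rules above can be condensed into a small function. The argument names are ours, introduced for illustration, not dataset variables:

```python
def code_non_observation(fully_tracked, self_reported_use=None):
    """Return the code assigned to a non-observation.

    fully_tracked: all of the participant's devices were tracked.
    self_reported_use: answer to the auxiliary question ("did you visit
        this domain on an untracked device?"), or None if not asked.
    """
    if fully_tracked:
        return 0        # true lack of behaviour
    if self_reported_use is None:
        return -1       # "Uncertain lack of behaviour"
    if self_reported_use:
        return -2       # "Error-induced lack of behaviour"
    return 0            # self-report confirms a true zero
```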
For the specific variables with a clear distinction between real non-observations and error-induced ones (Facebook, Twitter, and the ten most-visited news outlets of each country), we propose the following:
• True lack of observed behaviours: leave them as 0, which is their current value.
• Error-induced non-observations: re-code them as missing and exclude those participants from your analyses. Nonresponse weighting approaches can be used to re-adjust the sample.
The information used to identify non-observations as error-induced comes from self-reported data, which can also be affected by errors. Therefore, some participants might be wrongly classified as missing when following our approach. Users can decide to consider those non-observations as real, being mindful that this decision will most likely inflate the measurement errors of their results. For all the other web tracking variables, which only distinguish between real and uncertain non-observations, researchers need to decide whether to treat these non-observations as real (i.e., 0 seconds/visits) or as induced by errors (i.e., missing). Although we cannot provide advice on what to do with these non-observations, all the research published so far has treated them as real (i.e., 0 seconds/visits). Regardless of the treatment, it is recommended to properly inform readers and reviewers about the proportion of participants who might have dubious non-observations treated as real. To give a better idea of the potential prevalence of this issue, Table 7 shows the proportion of participants with error-induced non-observations for the variables measuring the consumption of Facebook, Twitter, and the top 10 news outlets in each country.
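A sketch of the recommended recoding for these variables, assuming the documented codes (0 = true zero, -1 = uncertain, -2 = error-induced) and a hypothetical variable name:

```python
import numpy as np
import pandas as pd

# Hypothetical tracked-consumption variable using the documented codes.
df = pd.DataFrame({"ALL_seconds_facebook": [120.0, 0.0, -2.0, -1.0]})

# Error-induced non-observations (-2) become missing, as recommended;
# uncertain ones (-1) are here treated as true zeros, the choice made by
# the research published so far.
s = df["ALL_seconds_facebook"].replace({-2.0: np.nan, -1.0: 0.0})
```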

Ethics Statements
The TRI-POL study and all of its data collection and storage protocols were approved by the Institutional Committee for Ethical Review of Projects of the Universitat Pompeu Fabra (CIREP-UPF, application number 0181). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
Informed consent was obtained from all individual participants included in the study.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data Availability
TRI-POL (Original data) (Open Science Framework).