“Now that you mention it”: A Survey Experiment on Information, Inattention and Online Privacy

We investigate whether information affects consumers' privacy actions and attitudes.


Introduction
Agreeing with the terms and conditions and privacy policies of online service providers has become an almost daily task for billions of people worldwide. 1 By ticking the consent box, online consumers usually give permission to service providers to collect, share or trade their personal data in exchange for various online services. Indeed, personal data lie at the forefront of different business models and constitute an important source of revenue for several online companies, such as Google and its subsidiary DoubleClick, Facebook and Amazon (Taylor 2004, Casadesus-Masanell and Hervas-Drane 2015). Despite giving formal consent, consumers are often unaware of what these digital transactions involve (Acquisti et al. 2015b) and have incomplete information about the consequences of disclosing personal information - when, how and why their data are going to be collected and with whom these data are going to be traded (Acquisti and Grossklags 2005b, Vila et al. 2003).
A considerable number of studies (Acquisti 2004, Acquisti and Grossklags 2005a, Acquisti et al. 2015a, Brandimarte and Acquisti 2012, Chellappa and Sin 2005, Jensen et al. 2005, Norberg et al. 2007) and consumer surveys show that consumers are generally concerned about privacy, 2 while the issue of privacy regulation has entered the policy agenda, with important challenges being raised, for instance, regarding the scope of government surveillance and the legal framework surrounding data sharing. For instance, reforming data protection rules in the EU is currently a policy priority for the European Commission. 3 At the same time, some online companies (e.g. the search engine DuckDuckGo) use enhanced privacy as a way of differentiating their product (Tsai et al. 2011), or even build their business model around the protection of privacy (e.g. Disconnect.me). 4
The standard approach to privacy posits that consumers use all available information to make privacy decisions, weighing the benefits and costs associated with revealing personal information (e.g. Acquisti et al. 2015b, Posner 1981, Stigler 1980, Varian 1997). In other words, each time consumers face a request to disclose personal information to service providers, they process the available information and decide accordingly by evaluating the risks and benefits of this exchange (Chellappa and Sin 2005, Culnan 1993, Culnan and Armstrong 1999, Dinev and Hart 2006, Hann et al. 2008, Hui and Png 2006, Xu et al. 2010). Sharing personal information provides consumers with benefits that are tangible (e.g. free access to online services, personalized ads, discounts) and intangible (e.g. the possibility to connect with long-lost friends), but also gives rise to potential costs (e.g. risk of identity theft, shame of exposure of personal information, potential exposure to price discrimination, being bothered by an excessive volume of ads). 5 While consumers may be aware of the many benefits of disclosing personal information, the potential costs are not so clear. There is evidence that consumers tend to disclose their personal information most of the time (Acquisti and Grossklags 2012, Adjerid et al. 2013, 2014, Beresford et al. 2012, Goldfarb and Tucker 2012, Olivero and Lunt 2004); yet, it is questionable whether this is due to the benefits of disclosure generally being considered greater than the associated costs - that is, whether this is an informed and rational choice. To start with, consumers may fail to fully inform themselves, even if the relevant information is readily available. Indeed, although users mechanically accept the terms and conditions by ticking a box, few read the privacy policies (Jensen and Potts 2004, Privacy Leadership Initiative 2001, TRUSTe 2006), and those who do try to read them find them time-consuming and difficult to understand (McDonald and Cranor 2008, Turow et al. 2005).
2 For instance, 72% of US consumers revealed concerns with online tracking and behavioral profiling by companies (Consumer Union 2008; http://consumersunion.org/news/poll-consumers-concerned-about-internet-privacy/).
3 In January 2012, the European Commission proposed a comprehensive reform of data protection rules in the EU. The completion of this reform was a policy priority for 2015. On 15 December 2015, the European Parliament, the Council and the Commission reached agreement on the new data protection rules, establishing a modern and harmonized data protection framework across the EU. The General Data Protection Regulation (GDPR) will become law in the beginning of 2018. http://ec.europa.eu/justice/data-protection/reform/index_en.htm
4 The interaction between privacy protection regulation and market performance and structure is analyzed in Campbell et al. (2015), Goldfarb and Tucker (2011) and Shy and Stenbacka (2015).
Furthermore, there is growing evidence emerging from psychology and behavioral economics that bounded rationality and several behavioral biases and heuristics influence individuals' decision-making in this realm.
Examples are optimism bias (e.g. Baek et al. 2014), overconfidence (Jensen et al. 2005) and hyperbolic discounting (Acquisti and Grossklags 2003, 2005a). Consequently, individuals face incomplete information, bounded rationality and behavioral biases, which can affect their choices regarding sharing personal information online (Acquisti 2004, Acquisti and Grossklags 2005a, Baddeley 2011, Reidenberg et al. 2015).
In this paper, we experimentally investigate to what extent exposure to information about how online companies deal with personal information (trading or not trading personal data) influences privacy decisions. In particular, we investigate whether information about the degree of privacy protection has an impact on disclosure actions and on privacy attitudes, as well as on social actions. Becoming more aware of the threats associated with disclosure of personal information could lead consumers to change their own individual behavior - for instance, by withholding information - but it could also lead to increased pressure on policy makers to take action - for instance, by implementing more consumer-friendly regulations. In the language of Hirschman (1970), a consumer could react to information about threats to online privacy by "exit" (withholding their own information) or "voice" (asking for more protection for all users), or both. To the best of our knowledge, this is the first paper to investigate both these aspects. In light of the regulatory activism highlighted above, the effect of information on public opinion and on the willingness to engage in social actions is particularly relevant.
5 The three main benefits of the privacy trade-off identified in the privacy literature are financial rewards, such as discounts (Caudill and Murphy 2000, Hann et al. 2008, Phelps et al. 2000, Xu et al. 2010), personalization and customization of information content (Chellappa and Shivendu 2010, Chellappa and Sin 2005), and social interactions and network externalities (Lin and Lu 2011). See also Acquisti (2015b) for an overview of the costs and benefits of sharing information for both data holders and data subjects.
As privacy-related stories attract more headlines in mainstream media, 6 an interesting question is to explore how people react to information regarding privacy reported in the news. Thus, we investigate whether news coverage of actual privacy practices by companies affects users' privacy preferences. To address this question, we conducted an online survey experiment, with around 500 respondents, involving an informational intervention. In particular, we use extracts from newspaper articles related to privacy practices of companies like Facebook and Dropbox and ask whether exposure to these shifts users' privacy concerns. 7 Our experimental design involves three treatments. Participants are randomly presented with a newspaper article extract highlighting a positive, neutral or negative privacy practice. We then collect three measures of participants' privacy concerns: a) actual propensity to disclose personal information (e.g. name, email) in a demographic web-based questionnaire that we administered; b) participation in a social action: whether users vote for a donation to be made to a privacy advocacy group or to an alternative, not privacy-related, group; and c) stated attitudes toward privacy and personalization elicited through a survey. Thus, we measure both privacy stated preferences and private and social actions related to privacy.
6 As of 26 Jan 2016, the search "online privacy" returned 2,170,000 hits in the Google News category. For instance, The Guardian reported a study in which Londoners accepted terms and conditions for access to public Wi-Fi that included a clause stating that they agreed to give up their eldest children in exchange for Wi-Fi. Most of the participants accepted the clause but, obviously, did not have to give up their children (http://www.theguardian.com/technology/2014/sep/29/londoners-wi-fi-security-herod-clause). 7 The information used in the experiment was rated in a pre-test by students of the University of Southampton as reflecting either a positive, negative or neutral attitude towards users.
This design allows us to examine two alternative hypotheses. First, previous survey evidence suggests an impact of privacy risks on privacy concerns and on intentions to share personal data (Dinev and Hart 2006, Malhotra et al. 2004). We therefore expect that, in our experiment, highlighting positive (negative) features of the privacy tradeoff (such as protection of consumers' personal data vs commercial exploitation of their data) will make participants less (more) concerned about privacy, and we expect this to be reflected in people's attitudes toward privacy and their willingness to engage in social action promoting privacy protection. Another possible driver of behavior in this study relates to the recent literature in economics showing that limited attention, salience and cognitive costs affect decision making in a variety of contexts: consumption (Chetty et al., 2009; Hossain and Morgan, 2006; Bollinger et al., 2011; Allcott and Taubinsky, 2015), saving (Karlan et al., 2016), farming (Hanna et al., 2014) or school choice (Hastings and Weinstein, 2008). Two of our informational treatments (positive and negative) contain news articles related to privacy, which might focus participants' attention on the issue of privacy - independent of the actual information that the news item conveys about businesses' privacy practices. In contrast to what the first hypothesis predicts, if participants are inattentive to privacy at the beginning of the survey, they might reduce their willingness to disclose personal information when prompted to focus on the issue, in both the positive and negative treatments.
We find that the propensity of participants to disclose identifiable information (such as name, email) and sensitive information (such as mother's maiden name) decreases when they are exposed to information regarding privacy. This is true even when the aspect of privacy they read about relates to positive attitudes of the companies towards their users. Just mentioning the presence of privacy issues, such as how companies are adopting practices to protect users' data, decreases self-disclosure. This suggests that privacy concerns are dormant and may manifest when users are asked to think about privacy, and that privacy behavior is not necessarily sensitive to exposure to objective threats or benefits of personal information disclosure. In this regard, the paper connects to recent research proposing that individuals may only attend to information that they consider relevant for the decision at hand and that interventions that help decision makers attend to key neglected dimensions may improve outcomes (Schwartzstein, 2014; Hanna et al., 2014; LaRiviere and Neilson, 2015). Our result is also consistent with previous findings that contextual cues (Benndorf et al. 2015, John et al. 2011, Hughes-Roberts and Kani-Zabihi 2014) and notifications about privacy breaches (Feri et al. 2016) do have an impact on levels of disclosure of sensitive information and, more generally, with the literature on salience and framing effects (Stasser 1992, Druckman 2001, Levin et al. 1998, Kahneman 2003). Despite finding an effect on disclosure, we do not find treatment effects on social actions, nor on privacy attitudes. In the privacy literature, this disconnect between actions and attitudes - the so-called privacy paradox - is well documented (e.g. Norberg et al. 2007), while we are not aware of previous work demonstrating a disconnect between private and social actions (or, more generally, investigating social actions related to privacy).
The rest of the paper is structured as follows. Section 2 describes the experimental design along with the procedures. Section 3 states the hypotheses, while section 4 presents the results. Section 5 offers some conclusions. Appendix A contains some additional results.

Experimental Design and Sample
To understand the effect of highlighting positive and negative privacy practices on the behavior of online users, we designed an online survey experiment. We recruited a total of 508 participants in June 2015 (in two waves), using Prolific Academic, a UK-based crowdsourcing community that recruits participants for academic purposes. 8 Each participant received £1 upon completion of a survey that took on average 10 minutes to complete, which translates to roughly £6 per hour. The recruitment was restricted to participants born in the UK, the US, Ireland, Australia and Canada, and whose first language was English. The experiment was designed in Qualtrics Online Sample, and the randomization of treatments was programmed in the survey software. 9

Experimental Manipulations
As experimental manipulations, we used extracts from newspaper articles. 10 These news extracts provided information that highlighted a positive, a negative or a neutral aspect of companies' privacy practices and were selected through a pre-test, where we asked 25 students to evaluate the news extracts.
In particular, for the negative treatment we selected a news extract on how Facebook is making money by selling unidentifiable data of their users; and for the positive treatment an article on how Dropbox and Microsoft adopt privacy norms that safeguard users' cloud data (ISO 27018 standard). Finally, for the neutral treatment we selected an article that refers to the health benefits of wearable tech, and is therefore not directly related to privacy issues.
We use a between-subject design, where each participant is exposed to only one treatment -i.e. is exposed to only one of the extracts -before we measure privacy preferences. To further validate our experimental manipulation, at the end we asked participants to classify the three extracts that were part of their experiment as positive, negative or neutral, in terms of the attitude they revealed vis-à-vis users' privacy.

Measures of Privacy Preferences
To start with, participants were shown a brief study description, which mentioned that the study was about online privacy, that data collection was subject to the Data Protection Act 1998, and that the University of Southampton ethics committee had approved the study. 11 We then proceeded to evaluate the effect of our experimental manipulation on three measures of privacy preferences: 1) disclosure of personal information; 2) participation in a social action - voting to allocate a donation to a foundation that protects digital rights or to a foundation unrelated to privacy; 3) attitudes towards privacy and personalization.
For the first measure, designed to test the impact of the experimental manipulation on self-disclosure, participants were initially asked to carefully read one of the statements that are part of our experimental manipulation and indicate whether they had previous knowledge of it. Then, they were asked to reply to 15 demographic and personal questions, covering more or less sensitive information, such as gender, income, weekly expenditure, and personal debt situation. 12 The answers to the first 13 questions had to be provided through a drop-down menu that included the option "Prefer not to say", so that the effort required to answer was the same as the effort required not to answer. Participants could not proceed without selecting an option from the menu. The last two questions, first name and email, were not mandatory, as a drop-down option was not possible. 13 Notice that providing false information could potentially be an alternative way to preserve privacy. Prolific Academic independently collected some demographic information when participants first registered with the service. Comparing our data to the demographic data collected by Prolific Academic for age, gender and country of residence, we did not find significant differences, thus indicating that lying is not common (see table 5A in appendix A for detailed information). Moreover, most names matched the emails, when provided. To verify whether participants read the questions carefully, we also included a control question ("This is a control question. Could you please skip this question?"), with a "normal" drop-down menu (including numbers from 1 to 4 and the option "Prefer not to say").
12 Typically, more personally defining or identifying items, such as name, or financial or medical data, are perceived as more sensitive (Goldfarb and Tucker 2012, Malheiros et al. 2013).
13 The stage introduction read as follows: "Please provide some information about yourself. Note that you can choose not to provide the information by choosing the option "Prefer not to say." This option is available in all the mandatory questions of this section."
Regarding the second measure of privacy preferences, contribution to a social action, participants were first asked to read the very same statement they had seen earlier and indicate whether they thought that their friends knew about it. The purpose of this was to re-establish the salience of the provided information. We then asked participants to choose which institution should receive a donation of £100 from us: EFF - Electronic Frontier Foundation (an organization that fights for online rights and, therefore, is concerned with privacy issues) or Transparency International (an organization that fights against corruption and is therefore not directly related to privacy issues). In particular, participants were informed that "We are donating £100 to charity (~ $154 | ~135€). You can choose which organization we donate the money to: EFF (Electronic Frontier Foundation) or Transparency international. Please note that the institution that receives more votes will be the one receiving the donation", and were then provided with a description of the two organizations. 14,15
For the third measure of privacy preferences, the one regarding attitudes, we started by asking participants to read the statement one more time and indicate whether they thought that society in general knew about it.
Again, this question had the purpose of maintaining the salience of the information. We then asked them to take the survey developed by Chellappa and Sin (2005) that evaluates their concern level about online privacy, how much they value personalization, and the likelihood of providing personal information.
Finally, participants were asked to reply to eleven sensitive questions structured in the same way as the initial questionnaire. For instance, we asked for information on the number and gender of sexual partners, passport number, name of first pet, and mother's maiden name. Some of these questions are commonly used to recover passwords and could therefore be seen as very privacy-intrusive. Sensitive questions might have a higher impact on participants' willingness to disclose information (Joinson 2008).
After the experiment, participants were asked to answer some more questions. First, to control for the effectiveness of the manipulation, they were asked to evaluate the extent to which the three news extracts used in the experiment revealed a positive, negative or neutral attitude of the company towards its users. Second, we asked whether participants were more or less willing to share their personal data online after reading these extracts and whether they were willing to pay a small fee to protect their identifiable and non-identifiable information. Third, we added a final survey about online privacy concerns (Buchanan et al. 2007); our decision to include this was based on the fact that it is designed exclusively to evaluate privacy concerns, in contrast with the Chellappa and Sin survey which, besides evaluating privacy concerns, also evaluates the value of personalization and the likelihood of disclosing information. Finally, we asked some optional miscellaneous questions related to online privacy and the experiment, e.g. whether participants usually read privacy policies or already knew of the two non-profit organizations.
14 "EFF (Electronic Frontier Foundation) is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development." https://www.eff.org
15 "Transparency international has as a mission to stop corruption and promote transparency, accountability and integrity at all levels and across all sectors of society. Their Core Values are: transparency, accountability, integrity, solidarity, courage, justice and democracy." http://www.transparency.org

Characterization of the Subject Pool
Our final sample consists of 475 participants. 16 47% of the participants were female and the average age was 28, with 80% part of the "millennial generation" (i.e. born between 1982 and 2004: Howe and Strauss 2000).
Most of the participants were from the US (63%) or the UK (27%); 81% were white and 45% were students; 73% had two to four years of college education; and 70% had an income lower than 40,000 (in local currency). 17 The average time to complete the survey was thirteen minutes (see table 1A in appendix A for more detailed descriptive statistics). These characteristics are balanced across the three treatments.

Hypotheses
Our experimental design allows us to investigate the extent to which participants use the informational content of the news messages to inform their privacy decisions in the experiment or whether, instead, the messages serve to focus their attention on the fact that the decisions take place in a privacy context.
Previous evidence suggests that perceived privacy risks raise privacy concerns and have an adverse effect on willingness to reveal personal information (Dinev and Hart 2006, Malhotra et al. 2004). If the messages have informational value, we then expect participants in the negative treatment to be less willing to share information in the first stage, less likely to support the charity promoting digital rights in the second stage, and to display more conservative attitudes toward privacy and personalization in stage 3, than participants in the neutral and positive treatments. Similarly, we expect participants in the positive treatment to be more relaxed about sharing private information than those in the neutral treatment.
16 Out of the initial 508 participants, we rejected 33 submissions in total: 8 submissions from those who failed the control question (which asked them to skip that question) and 25 submissions from those who completed the survey in less than 5 minutes. The aim is to exclude those who did not take the task seriously, either not reading the questions or completing it extremely quickly. We find similar results when running the analysis with the full sample.
17 Information about gender, nationality, education, student and employment status was provided by Prolific Academic. Information about income level was provided by the participants in the self-disclosure stage. Prolific Academic only had information about student status for 465 participants.
Hypothesis 1: (Information) People's privacy attitudes and actions will respond to the informational content of the message they see.
Another possibility is that participants are already aware of the information regarding privacy that we transmit to them, so that the messages carry little informational value. However, the mere mention of privacy in the messages might make them more attentive to the issue.
This perspective would be consistent with existing theoretical work and evidence showing that, when decision makers have limited attention, inefficient decisions can be made not due to lack of information, but due to failure to attend to some features of the data (Hanna et al., 2014; LaRiviere and Neilson, 2015).
Hypothesis 2: (Inattention) People's privacy attitudes and actions will not respond to the actual information content of the message they see, but their privacy attitudes and actions will be affected when prompted to think about privacy.
Accordingly, we expect participants to be more conservative in the negative and positive treatments than in the neutral treatment, as the latter does not focus attention on privacy.

Results
Here we present the results for each of our three privacy measures. Before doing so, we checked whether our experimental manipulation was successful. To do this, we exploited the fact that at the end of each experiment we asked participants to classify the three news extracts used in that experiment as representing a positive, negative or neutral attitude of the company towards its users. What we found is that 84% of the participants considered that the extract chosen for the positive treatment indeed reflected a positive attitude; 93% of the participants classified the extract chosen for the negative treatment as negative; and 59% of the participants classified the extract chosen for the neutral treatment as neutral 18 (see table 2A in appendix A for more details).
Thus, the majority of participants correctly classified the news extracts after being presented with them.
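As an illustration only (not the authors' code), percentages like those reported for this manipulation check could be computed from the post-experiment classification question with a simple cross-tabulation; the file name and the column names below are hypothetical placeholders for whatever the survey export actually contains.

```python
# Minimal sketch of the manipulation check: share of participants in each
# treatment who classified the extract they saw as positive/negative/neutral.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export file

check = pd.crosstab(df["treatment"],                      # extract seen
                    df["classification_of_own_extract"],  # participant's label
                    normalize="index") * 100
print(check.round(1))  # e.g. 84% "positive" in the positive treatment, etc.
```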

Self-disclosure
To analyze treatment effects, for each of the 13 demographic questions we created a dummy variable that takes the value of 1 if the information is provided and 0 otherwise. We then summed these dummies to create a summary variable called Disclosure-index, which can take values between 0 (if no information is provided) and 13 (if information is provided for all 13 demographic questions). 19 We then created three variables measuring self-disclosure, all taking the values 0/1: 1. Disclosure: equals 1 if the participant provides the information for all 13 demographic questions; 2. Give Name: equals 1 if the participant discloses their first name; 3. Give Email: equals 1 if the participant discloses their email address.
Disclosure of the demographic items was generally high, in line with previous evidence that consumers tend to disclose personal information (e.g. Goldfarb and Tucker 2012). 20 However, there was significantly lower disclosure of the information that could identify participants as individuals, such as name (only 50% provided their first name) and email (only 37% disclosed their email address). With the exception of three participants, those who disclosed email also disclosed name. We found significant differences in the disclosure of identifiable information (Give-Name and Give-Email), with a higher incidence of disclosure of name and email in the neutral treatment compared to the negative and positive treatments. However, we did not find significant differences between the positive and the negative treatments; 21 thus it does not seem to be the case that providing "negative" information makes participants more reluctant to disclose private information.
18 In the neutral treatment 27% classified the extract as positive and 14% as negative.
19 Age, health situation, marital status, education, number of times moved house, gender, number of children, number of credit cards, debt situation, country live in, maximum relationship length, annual income, money spent per week.
20 See table 3A in appendix A for a full description of the percentages of the use of the option "prefer not to say" per variable and per treatment.
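To make the construction of these measures concrete, here is a minimal sketch in Python/pandas, continuing with the hypothetical data frame from the previous sketch; the item and treatment column names are assumptions rather than the authors' actual variable names.

```python
# Build the Disclosure-index and the binary Disclosure / Give-Name / Give-Email
# measures, then compare disclosure rates between two treatments.
import pandas as pd
from scipy.stats import chi2_contingency

demographic_items = ["age", "health", "marital_status", "education", "moves",
                     "gender", "children", "credit_cards", "debt", "country",
                     "relationship_length", "income", "weekly_spend"]  # 13 items

# 1 if the drop-down item was answered, 0 if "Prefer not to say"
answered = (df[demographic_items] != "Prefer not to say").astype(int)
df["disclosure_index"] = answered.sum(axis=1)               # ranges 0..13
df["disclosure"] = (df["disclosure_index"] == 13).astype(int)
df["give_name"] = df["first_name"].notna().astype(int)      # optional free-text field
df["give_email"] = df["email"].notna().astype(int)

# Pairwise comparison of name disclosure, e.g. neutral vs negative treatment
sub = df[df["treatment"].isin(["neutral", "negative"])]
chi2, p, dof, _ = chi2_contingency(pd.crosstab(sub["treatment"], sub["give_name"]))
print(f"Give-Name, neutral vs negative: chi2 = {chi2:.2f}, p = {p:.3f}")
```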
These results are confirmed in a regression analysis (Table 2), where we estimate OLS regressions for each of the three measures of disclosure on a set of treatment dummies, plus a dummy controlling for recruitment wave.
Including - or not including - individual characteristics (age, gender, nationality, ethnicity, student and work status, education level and annual income level) does not change the outcome. 22,23 Also, including a dummy controlling for previous awareness of the information we provide gives similar results (for summary statistics of awareness see tables 6A to 8A in appendix A). 24
As mentioned in the previous section, participants were also asked to answer particularly sensitive questions, where, as before, they could disclose the information or choose the option 'prefer not to say.' We included 11 items: religious (yes or no), race, number of sexual partners, number of serious relationships, partner's gender, weight, high school name, passport number, name of first pet, mother's maiden name, and favorite place. Compared to the demographic questionnaire, participants were more reluctant to disclose sensitive information. For instance, nobody disclosed their passport number and 86% did not disclose their mother's maiden name. Nevertheless, many participants disclosed information for sensitive items; for instance, 81% disclosed the number of sexual partners.
To analyze treatment effects, for each of the 11 items we created a dummy variable that takes the value of 1 if the information is provided and 0 otherwise. We then summed these dummies to create a summary variable called Disclosure-index-SQ, which can take values between 0 (if no information is provided) and 11 (if information is provided for all 11 questions). Figure 1 shows a histogram with the distribution of this index by treatment and in total. 25 We can see that the distribution for the neutral treatment is shifted towards higher values, i.e. more disclosure. Mann-Whitney tests confirm that there is indeed a significant difference between the negative and the neutral treatments (p-value=0.011) and between the positive and the neutral treatments (p-value=0.025), while we found no differences between the positive and the negative treatments (p-value=0.979).
This result is also confirmed in a Poisson regression analysis, with and without controls for individual characteristics (see table 12A in appendix A).
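The regressions and rank-sum tests described above could be run along the following lines with statsmodels and scipy. This is an illustrative sketch: it assumes the data frame from the earlier sketches also contains a `wave` dummy and a `disclosure_index_sq` count for the sensitive questions, and the robust-variance choice is not taken from the paper.

```python
import statsmodels.formula.api as smf
from scipy.stats import mannwhitneyu

# OLS of each binary disclosure measure on treatment dummies (neutral as the
# omitted category), controlling for recruitment wave
for outcome in ["disclosure", "give_name", "give_email"]:
    ols = smf.ols(f"{outcome} ~ C(treatment, Treatment('neutral')) + wave",
                  data=df).fit(cov_type="HC1")
    print(outcome, ols.params.round(3).to_dict())

# Mann-Whitney tests on the sensitive-question index (Disclosure-index-SQ)
neg = df.loc[df["treatment"] == "negative", "disclosure_index_sq"]
neu = df.loc[df["treatment"] == "neutral", "disclosure_index_sq"]
print("negative vs neutral:", mannwhitneyu(neg, neu, alternative="two-sided"))

# Poisson regression for the count index, as in the robustness check
pois = smf.poisson("disclosure_index_sq ~ C(treatment, Treatment('neutral')) + wave",
                   data=df).fit(disp=False)
print(pois.summary())
```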
What is the interpretation of these results? Participants do not seem to react in the way predicted by Hypothesis 1 (Information), with positive information inducing more disclosure and negative information increasing their concern for privacy. Instead, participants disclosed more information in the neutral treatment, where privacy was not mentioned (the information provided referred to the advantages of wearable tech), than in the other two treatments. This suggests that, consistently with Hypothesis 2 (Inattention), being prompted to think about privacy issues has an effect on individual online behavior, decreasing self-disclosure.

Social Action
25 Four participants did not choose the option "prefer not to say" for the passport item, as they made some comments, such as "I don't have one" or "I don't understand the reasons to ask for my passport number". We consider this as a form of non-disclosure.
We now analyze the social action. Participants had to vote on which of two charities should receive a £100 donation. We found that, overall, 59% of the participants voted in favor of EFF, with no significant differences across treatments (pairwise chi2 tests: Positive-Negative: p-value=0.157; Negative-Neutral: p-value=0.838; Positive-Neutral: p-value=0.228). 26,27 Regression analysis (Table 3), where we can control for individual characteristics as well as for familiarity with the two organizations, confirms the absence of treatment differences. Not surprisingly, we find that the likelihood of voting for EFF increases as people are more familiar with its work and decreases as people are more familiar with the work of the competing charity (see table 13A in appendix A for more details). Thus, it seems that the significant impact we found of the neutral treatment on self-disclosure does not carry over to the social action.
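A hedged sketch of the pairwise chi-square tests and of a vote regression with familiarity controls; `vote_eff` (1 = voted for EFF) and the familiarity variables are hypothetical column names, and the linear probability specification is illustrative rather than the exact model behind Table 3.

```python
from itertools import combinations
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Pairwise treatment comparisons of the vote share for EFF
for t1, t2 in combinations(["positive", "negative", "neutral"], 2):
    sub = df[df["treatment"].isin([t1, t2])]
    _, p, _, _ = chi2_contingency(pd.crosstab(sub["treatment"], sub["vote_eff"]))
    print(f"{t1} vs {t2}: p = {p:.3f}")

# Vote regression controlling for familiarity with the two organizations
m = smf.ols("vote_eff ~ C(treatment, Treatment('neutral')) + familiar_eff"
            " + familiar_ti + wave", data=df).fit(cov_type="HC1")
print(m.summary())
```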

Privacy Concern Survey
To analyze attitudes towards privacy, we follow Chellappa and Sin (2005) and run a factor analysis on the survey responses.
Recall that the first six questions were designed to understand the value that participants ascribe to personalization (questions Att1-Att6), the following four questions were designed to evaluate the level of concern about online privacy (questions Att7-Att10) and the last two questions were designed to understand the likelihood of the participants disclosing their personal data to online service providers (questions Att11-Att12). We found three factors: 1. Factor 1, labeled "Personal", includes Att1 to Att4 (Cronbach's alpha=0.79); 2. Factor 2, "Privacy-concern", includes Att7, Att9 and Att10 (Cronbach's alpha=0.67); 3. Factor 3, "Likely-give-info", includes Att5, Att6, Att11 and Att12 (Cronbach's alpha=0.74). 28 For the average of each item (Att1-Att12), and the average of the factors, see table 10A in appendix A.
26 Overall, EFF received more votes than TI and, therefore, received the donation.
27 See table 4A in appendix A for treatment differences.
28 The factors we find differ slightly from those defined by Chellappa and Sin (2005). In their case, the first factor, CS1 (Per), is the average of questions Att1-Att6; the second, CS2 (Concern), is the average of questions Att7-Att10; and the last, CS3 (Likely), is the average of questions Att11-Att12. In our factor analysis Att8 is not part of any of the three factors. In a Varimax rotation at 0.4 Att8 is eliminated; therefore factor 2, 'Privacy-concern', consists of attributes Att7, Att9 and Att10. Att8 refers to concerns about anonymous information collected automatically that cannot be used to identify users, such as computer, network information and operating system. The results are similar when including Att8.
To evaluate the treatment effects, we created dichotomous variables for the three factors. To do this, we first calculated the average score of the questions belonging to the corresponding factor, each scored between 1 and 7. Then, we created a dummy variable for each factor, taking the value of 1 if the average score is strictly greater than 4. Thus, the variables "Personal", "Privacy-concern" and "Likely-give-info" take the value of 1 if the participant valued personalization, revealed concerns about privacy, or displayed a high likelihood of disclosing personal information, respectively. We found no treatment differences in the attitudinal survey. 29 A regression analysis confirms that "Privacy-concern" and "Likely-give-info" are indeed unrelated to treatment, whether or not we control for individual characteristics and the value of personalization, as measured by "Personalization" (see Table 4). Looking at individual characteristics, we find that males, unemployed people and high school students tend to be more concerned about their privacy, while those who value personalization are more concerned about their privacy and are less likely to provide information (thus making personalization more difficult).
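A minimal sketch, assuming the twelve attitude items are stored in columns att1 to att12 on a 1-7 scale, of how the Cronbach's alphas and the dichotomized factor variables could be computed; the item-to-factor grouping follows the factors reported above.

```python
# Cronbach's alpha for each factor and the 0/1 factor dummies (average > 4).
def cronbach_alpha(items):
    """items: DataFrame with one column per item, rows = respondents."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

factors = {
    "personal":         ["att1", "att2", "att3", "att4"],
    "privacy_concern":  ["att7", "att9", "att10"],
    "likely_give_info": ["att5", "att6", "att11", "att12"],
}

for name, items in factors.items():
    alpha = cronbach_alpha(df[items])
    df[name] = (df[items].mean(axis=1) > 4).astype(int)   # dichotomized factor
    print(f"{name}: alpha = {alpha:.2f}, share above 4 = {df[name].mean():.2f}")
```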

Conclusions
In this paper, we explored how people respond to information about privacy in the form of news reports. We experimentally varied whether the information to which consumers are exposed reveals a positive or negative privacy practice of the company or whether information is neutral vis-à-vis users' privacy. We then observed the self-disclosure of personal information by users, their stated concerns regarding privacy and their choice of giving a donation either to a charity advocating for privacy or to a charity not directly related to privacy issues.
What we find is that whenever the information is about privacy, the type of information (positive or negative) does not matter, while information not mentioning privacy increases disclosure of personal data, without affecting either stated privacy concerns or social actions.
These findings suggest that inattention may be an important aspect of privacy decision-making. We could then expect online users to be more careful about the type of information they choose to disclose if privacy issues are more widely discussed in the public arena, for instance because of scandals related to data leakage or theft (e.g. the recent examples involving the US Post Office, the financial institution JPMorgan Chase & Co, or big retailers like Target, Kmart and Home Depot). A more cautious attitude in response to news about data theft is not too surprising. Our results, however, suggest that even news about increased data protection for consumers, for instance through legislative initiatives, would trigger the same reaction. Notably, in our setting, users react through personal actions, but not through social actions. This suggests that the "voice" response to privacy issues may be relatively weak, with obvious implications for the political process.
From a business perspective, it seems that making privacy practices more visible and transparent might backfire, as this could nudge users to become more reluctant to share personal information and thereby derail existing business models that are based on tracking and sharing personal information. The question of how to reconcile the need to respect the right of users to make informed choices about online privacy with the current business model of a multibillion-dollar industry is a major challenge for policy makers, businesses and academics working in the area.
Notes to Table 2: * p<0.10, ** p<0.05, *** p<0.01. Dependent variables: Disclosure (scored as '1' if the participant disclosed the information in all 13 items), Give Name and Give Email (scored as '1' if the corresponding information was disclosed). In all the models, we control for recruitment wave. Individual characteristics refer to demographic characteristics such as age, gender, nationality (UK or non-UK), ethnicity (white or not), student and work status, education and income (see table 11A in appendix A for coefficients and significance levels of the individual characteristics).
Notes to Table 4: Binary dependent variables: Privacy-concern (scored as '1' if the factor concern was higher than 4) and Likely-give-info (scored as '1' if the factor likely-give-info was higher than 4); Personalization is scored as '1' if the factor personalization was higher than 4. In all the models, we control for recruitment wave. Individual characteristics are the same as in the previous tables.
Notes to table 1A: The characterization of the subject pool is based on demographics provided by Prolific Academic, with the exception of income, which is based on the information the participants provided during the self-disclosure stage of the experiment.
Notes to table 2A: Panel A refers to the news extract used in the positive treatment, Panel B to the extract used in the negative treatment and Panel C to the extract used in the neutral treatment. The first column of each panel indicates the treatment; the second, third and fourth columns indicate the percentage of participants in that treatment who classified the extract as positive, negative or neutral, respectively.
Notes to table 3A: Use of the option "prefer not to say" is scored as "1" and disclosing the information is scored as "0".
Additional notes to the regression tables: Robust standard errors in parentheses; * p<0.10, ** p<0.05, *** p<0.001. Individual characteristics, with the exception of income, come from Prolific Academic and are provided by users when registering. To control for income, we use responses to our own demographic questionnaire, with people not revealing their income as the omitted category.

Appendix A: Tables
"Dropbox has followed in the footsteps of Microsoft to become an early adopter of the privacy-focused ISO 27018 standard, which is used to signify how providers safeguard users' cloud data.
The standard sets out a code of practice that governs how users' personally identifiable information should be protected by cloud providers.
Organisations that adhere to the ISO 27018 code of practice, therefore, must vow not to use this information in sales and marketing materials, and must promise to provide users with details about where their data is kept and handled and to notify them straightaway in the event of a data breach."