‘There are some things that I would never ask Alexa’ – privacy work, contextual integrity, and smart speaker assistants

ABSTRACT When new technologies like smart speaker assistants (SSAs) enter private spaces, new threats to privacy emerge. Drawing on the concepts of privacy work [Nippert-Eng, C. E. (2010). Islands of privacy. University of Chicago Press.] and contextual integrity [Nissenbaum, H. F. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford Law Books.], this study uses qualitative interviews to explore two questions about SSAs: (1) Which kinds of privacy work do users do? and (2) What rationales underlie users’ privacy perceptions? We identify a variety of new types of privacy work, such as limiting the dissemination of one’s voice data to a single company, or interrupting conversations during accidental SSA activation. We also identify new privacy rationales, including anticipated consequences of information leaks, the importance of users’ privacy skills and awareness, and the role of choice in whether and how to use SSAs. Based on our analysis of privacy rationales, we propose an expansion of the model of contextual integrity [Nissenbaum, H. F. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford Law Books.] to improve our ability to understand how users perceive privacy with SSAs and other voluntarily adopted home information technologies. Furthermore, we find that even SSA users who say they have no privacy concerns actually work to protect their privacy. These results complicate previous theory, which holds that privacy concerns lead to protective behaviour, by suggesting that the relationship between these two concepts may be reversed.

Promoted as smart helpers that make users' lives easier, smart speaker assistants (SSAs) are increasingly widespread, present in 33% of US (Edison Research & Triton Digital, 2021) and 50% of UK households in early 2021 (Ofcom, 2021). SSAs are voice-controlled and 'smart' in that they respond to natural-language user requests (Hoy, 2018). To obtain a response, the user says a wakeword; the always-on device then transfers the request to the manufacturer's servers, where the command is processed and a response is triggered.
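To make this flow concrete, the sketch below is a minimal, purely illustrative simulation of the wakeword-gated pipeline just described. All function names (capture_audio_frame, detect_wakeword, send_to_manufacturer_servers) are hypothetical placeholders, not any vendor's actual API; the one property it encodes is that audio leaves the device only after on-device wakeword detection fires.

```python
import random
import time

# Hypothetical stand-ins; none of these names correspond to a real vendor API.

def capture_audio_frame() -> str:
    # Stub for continuous, on-device listening.
    return random.choice(["background noise", "wakeword", "music playing"])

def detect_wakeword(frame: str) -> bool:
    # On-device keyword spotting: nothing is transmitted until this fires.
    return frame == "wakeword"

def record_utterance() -> str:
    # Stub for capturing the spoken command that follows the wakeword.
    return "what is the weather today?"

def send_to_manufacturer_servers(request: str) -> str:
    # Only at this point does audio leave the home for cloud processing.
    return f"cloud response to: '{request}'"

def listen_loop(max_frames: int = 50) -> None:
    """Always-on loop: audio is sent to the cloud only after the wakeword."""
    for _ in range(max_frames):
        frame = capture_audio_frame()
        if detect_wakeword(frame):
            print(send_to_manufacturer_servers(record_utterance()))
        time.sleep(0.01)

if __name__ == "__main__":
    listen_loop()
```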
Some have argued that SSAs create new privacy issues for users of digital technologies (e.g., Dubois et al., 2020; Hui & Leong, 2017). They point out several issues: SSAs listen constantly, they are often placed in private spaces in homes, and commands are stored on corporate servers, outside owners' control. In 2018, an Alexa user was informed that a private conversation with her husband had been sent to a contact without her knowledge (Wolfson, 2018), and incidents of accidental orders through Amazon Echo devices, placed by children or triggered by TV advertisements, have been reported (Ahmed, 2018). These issues may create special privacy dilemmas for SSA owners.
While potential privacy threats from smart speaker assistants have attracted much scholarly attention, fewer studies explore how users perceive these threats. Furthermore, these studies rarely integrate their research with existing privacy theories, resulting in a less nuanced understanding of how SSA privacy is perceived and managed in the home. Using a theoretically driven analysis of privacy-protective actions based on Nippert-Eng's (2010) concept of 'privacy work', we investigate SSA early adopters' privacy-protective behaviour. We also draw on Nissenbaum's (2010) contextual integrity framework to analyse privacy rationales. Based on these findings, we propose an expanded model of contextual integrity to capture how users perceive SSA privacy risks. We suggest that this model could be applied to any voluntarily adopted home information technology. Specifically, we explore two research questions: RQ1: Which kinds of privacy work do users engage in to protect their privacy with SSAs? RQ2: What rationales underlie users' privacy perceptions?
This paper is divided into five parts. First, we review previous research on SSA privacy perceptions and behaviour and present our conceptual framework. Second, we describe our methodology and data. Third, we report our results. Fourth, we present our expanded model of contextual integrity. We close by discussing our results and their implications.

Literature review
Previous SSA research has focused on three aspects of privacy: privacy concerns, privacy protective actions, and privacy rationales. We also include relevant privacy findings from Cowan et al. (2017) on mobile voice assistants and Zeng et al. (2017) on smart homes. First, both users and non-users express several privacy concerns. Non-users worry about being hacked, the motives of speaker companies, and the opaqueness of data collection, processing, use, and monetisation (Lau et al., 2018; Pridmore et al., 2019; Pridmore & Mols, 2020). Users have different concerns, including recording private conversations, getting hacked, or suffering consequences of data transfer, such as physical security risks or governmental and corporate data abuse (Abdi et al., 2019; Cowan et al., 2017; Malkin et al., 2019; Manikonda et al., 2017; Meng et al., 2021; Pridmore et al., 2019).
Second, privacy-protective actions include behavioural and technological measures. Behaviourally, users turn off smartphones' voice assistant functions and SSA features (Abdi et al., 2019; Cowan et al., 2017), and turn off, unplug, carefully place, limit usage of, or return SSAs to manufacturers (Abdi et al., 2019; Malkin et al., 2019; Manikonda et al., 2017; Meng et al., 2021; Pridmore & Mols, 2020; Zeng et al., 2017). Some users own different products and use them for specific tasks to minimise the information provided to any single device (Abdi et al., 2019). Users also learn about privacy policies and terms of service, and inform guests of SSAs in their home (Meng et al., 2021). Smart home and SSA users also choose strong passwords (Meng et al., 2021; Zeng et al., 2017).
Technological measures include reliance on SSA features: muting the microphone, modifying device settings to audibly signal when devices are recording, limiting drop-in access to select contacts, and reviewing and deleting recordings, transcripts, and other behavioural logs (Abdi et al., 2019; Malkin et al., 2019; Manikonda et al., 2017; Pridmore & Mols, 2020; Zeng et al., 2017). Some users separate Wi-Fi networks for different smart home devices (Zeng et al., 2017). Lutz and Newlands (2021) identify three types of privacy protective actions among UK SSA users: technical (e.g., turning off, unplugging, muting SSAs), data (e.g., reviewing or deleting recordings), and social (e.g., speaking quietly), but find that most users do not take any of them.
Finally, users' rationales regarding privacy point to privacy beliefs, a willingness to trade privacy for convenience, privacy resignation, and lack of privacy awareness. Users who are unconcerned about their privacy believe that they are not worthwhile targets for hacks and surveillance, that they have nothing to hide, that SSA data is only a small addition to existing data about them, that comprehensive data collection and storage about them is not feasible, that they can trust SSA companies, and that they have sufficiently secured their systems (Lau et al., 2018; Pridmore & Mols, 2020; Zeng et al., 2017). Some users are willing to trade their privacy for convenience (Lau et al., 2018; Pridmore et al., 2019; Zeng et al., 2017); others have resigned themselves to surveillance technologies (Lau et al., 2018; Pridmore et al., 2019). However, researchers have argued that these rationales are often weak because most users have limited awareness of companies' storage policies (Malkin et al., 2019) and of potential security threats (Zeng et al., 2017), and generally limited mental models of how SSAs work (Abdi et al., 2019; Meng et al., 2021).
In sum, previous studies have made significant advances in our understanding of privacy issues with SSAs. However, few integrate their research with privacy theory, resulting in a less nuanced understanding of privacy perceptions and behaviours. Those that do draw on privacy theory use fixed-choice survey questions (Abdi et al., 2021; Apthorpe et al., 2018; Kang & Oh, 2021; Lutz & Newlands, 2021). We use a different methodology: qualitative interviews. A strength of interviews is that they are more flexible and closer to users' lived experience. Using this methodology, we find that users engage with privacy issues in more complex ways than previously identified.

Our conceptual framework
'Privacy' can mean the ability to control access to and dissemination of private things, places, or information.But it can also describe 'the condition of being: alone/without others' demands, interruptions, intrusions,' as well as the freedom to live and make decisions without restriction (Nippert-Eng, 2010, p. 7).
Privacy work is a process of 'selective concealment and disclosure' (Nippert-Eng, 2010, p. 2). It is 'the daily activity of trying to deny or grant varying amounts of access to our private matters to specific people in specific ways' (p. 2). To achieve a balance between concealment and disclosure, individuals create 'pockets of accessibility' and inaccessibility (p. 6). Privacy work differs according to the kind of privacy sought; for instance, control over information access and dissemination, or being free from interruption.
Accordingly, Nippert-Eng (2010) distinguishes between 'secrecy' and 'demand management' forms of privacy work. To control access to private information, individuals engage in 'secrecy' (Nippert-Eng, 2010, p. 24). This involves two clusters of work. First, individuals determine 'if something should be a secret, but also whether or not, and how, to selectively disclose, [or] conceal (…) a secret' (p. 35). Second, individuals draw from various 'skills, techniques, knowledge, and mindset necessary to manage secrets' (p. 36). Individuals' competencies in these vary. To minimise interruptions while pursuing other activities, individuals engage in 'demand management' (Nippert-Eng, 2010, p. 180). Demand management refers to practices that limit the possibility that an individual's attention will be diverted from their current task; for example, they may postpone responding to a contact request instead of acting on it immediately.
In our study, we draw on this distinction between secrecy techniques and demand management to structure our analysis of SSA users' privacy protective actions.
To understand individuals' rationales about privacy, we draw on the conceptual framework of contextual integrity (Nissenbaum, 2010). The framework proposes 'factors determining when people will perceive new information technologies and systems as threats to privacy' (Nissenbaum, 2010, p. 2). Specifically, it posits that a flow of personal information will be perceived as a privacy violation when context-relative informational norms (which we will simply call 'informational norms', 'norms' or 'the normative framework') are not respected. When considering if a norm is violated, four elements need to be considered: 'contexts, actors, attributes, and transmission principles' (Nissenbaum, 2010, p. 140). Social contexts are 'the backdrop for informational norms' (p. 141). They define which informational norms apply. Actors are information senders, recipients, and subjects. Information attributes are the types of information involved in the information flow, such as health information or student evaluations. Finally, transmission principles are constraints on the information flow between actors, such as conditions of confidentiality, access rights, or reciprocity. The four factors combine to create a normative framework. For instance, patient information being shared between healthcare providers in a healthcare context under confidentiality would usually not be considered a violation of informational norms. However, sharing this information with a technology company could be perceived as a normative violation.
Nissenbaum proposes an augmented framework, which includes 'context-based values, ends, and purposes' (p. 181). These are the specific values, ends and purposes served by adopting a new information technology, such as improving physical wellbeing in healthcare contexts (Nissenbaum, 2010, Chapter 8). Given Nissenbaum's interchangeable use of context-relative values, ends and purposes, henceforth we refer to them simply as 'context-relative purposes' or 'purposes'. Purposes are balanced against norms when considering the adoption of new technological devices. This balance determines whether a possible violation of normative contextual integrity will be perceived as an actual privacy violation. On a higher level, we understand context-relative purposes as an element of context-relative user agency, which we will call user agency. Figure 1 displays our understanding of Nissenbaum's theory. The first row contains the normative framework. The second row contains the elements of user agency, which are the context-relative purposes in Nissenbaum's augmented model.
Although the contextual integrity framework was developed primarily for prescriptive application, to evaluate technologies imposed on groups or societies, we use it for its descriptive value, to untangle privacy expectations and rationales (Nissenbaum, 2010, pp. 189-191). Others have also used it to explore privacy perceptions with smart speakers, a technology that is voluntarily adopted (Abdi et al., 2021; Apthorpe et al., 2018; Lutz & Newlands, 2021). Overall, these studies find that participants are less willing to share information that is generally considered more sensitive, like health data, location data, and video and audio recordings. Additionally, institutional recipients, such as SSA companies, third-party companies, or governments, tend to be less acceptable than personal relations (e.g., friends, housemates, family) (Abdi et al., 2021; Apthorpe et al., 2018; Lutz & Newlands, 2021). These studies have made considerable contributions to scholarly understanding of contextual integrity in SSA use. However, they have generally limited their scope by using fixed-choice survey questions to explore different scenarios for the four elements of Nissenbaum's contextual integrity model.
We draw on contextual integrity to analyse user rationales about privacy with SSAs but show that there are additional factors contributing to when users perceive SSAs and similar technologies as privacy threats.

Methods
We use qualitative methods. To explore privacy work and rationales, we performed a thematic analysis of semi-structured, in-depth interviews with early SSA adopters. The interview data were collected as part of a larger study that explored SSA uses (Authors, 2020) and privacy perceptions with SSAs. Participants were recruited through a mix of opportunistic and snowball sampling (Bryman, 2012) using calls for participants in Facebook groups, subreddits and online forums of SSA users, as well as on one author's personal Twitter account and through personal acquaintances. Interviews took place in summer 2018. The study received ethical approval (reference number: SSH OII C1A 18 030). The nine male and three female participants lived in the UK (8) or the US (4) and had varying ages and self-declared technological proficiencies (7 high, 1 intermediate to high, 2 intermediate, 2 low) (see Appendix). Six participants were users of Alexa-enabled devices, five used Google Home devices, and one used both. Participants used between one and eight SSAs, and five used them in connection with home automation. Interviews were recorded, transcribed and coded using NVivo.
Later interviews yielded little new knowledge, suggesting that theoretical saturation was reached (Brinkmann & Kvale, 2015). However, interview participants volunteered for the study and we acknowledge the risk of selection bias. In particular, the sample contained a majority of highly technologically proficient users (7 out of 12).
A theory-driven thematic analysis was performed to address RQ1. Nippert-Eng's (2010) concept of 'privacy work' informed the analysis, serving as the 'analytical objective' (Guest et al., 2012). A hybrid inductive-deductive approach was applied for the thematic analysis concerning RQ2 (Fereday & Muir-Cochrane, 2006), informed by Nissenbaum's (2010) contextual integrity framework. Themes were analysed on a semantic level (Braun & Clarke, 2006, p. 84) and determined based on a combination of repetitions, unfamiliar terms or unfamiliar uses of terms, metaphors and analogies, similarities and differences between interview responses, and missing data (Ryan & Bernard, 2003). To ensure research trustworthiness, we illustrate our findings with extensive original interview material.

Privacy work
Users described 11 forms of privacy work. Several of these have been previously identified, including the use of other devices in lieu of an SSA to complete certain tasks (Abdi et al., 2019), the careful placement of SSAs in the home (Pridmore & Mols, 2020; Zeng et al., 2017), the use of the review feature (Malkin et al., 2019), and the use of the mute button (Manikonda et al., 2017). We do not discuss these strategies further and focus on the seven new forms of privacy work. We categorise these into (a) five secrecy techniques, further distinguished by whether they prevent information flows toward corporate recipients or toward personal contacts, and (b) two forms of demand management. Interestingly, even users who said they were not concerned about their privacy described doing privacy work. We discuss this issue below, after reporting the privacy work techniques of our participants.
(a) Five new secrecy techniques emerged from participants' accounts. Three attempt to limit information flows to institutional recipients, whereas two prevent information from flowing to other people in the home, particularly guests.
First, users avoided giving the devices sensitive information. While Meng et al. (2021) found that users avoid giving unnecessary information, our users specifically mentioned avoiding giving SSAs data that they considered too sensitive. Users differed in whether they considered financial data too sensitive; health data was unanimously considered inappropriate to share with SSAs.
Now, pertaining to data I don't necessarily want to share, like my finances, I use other systems to manage them (e.g., bank apps, mint, credit karma etc.). (Lucas)
One user thought financial data were safe, but would not give health information:
I guess I would trust her with that [financial] information. I'm trying to think what else - I think health information, probably not. I can't think of a context where - I mean fitness information yes, but actual health information not. (Emily)
Second, one user wanted to limit corporate recipients of his voice data after he realised that Google had already stored years' worth of his voice data from his voice-to-text use on Android phones. He therefore decided to stick to the same company, Google, for his SSA use. While he perceived the discovery of the stored voice data by Google as a privacy violation, he dealt with this by preventing further dissemination of his voice data to other SSA companies.
To be honest, I've already done it so much that at this point, I'm just kind of like 'eh, they have so much data on me already' … which is again part of the reason I went with Google. They already have so much data on me anyway. (Andrew)
A third measure to protect information from corporate recipients was to become familiar with privacy features on SSAs. Indeed, Alex explained that he had looked up how to delete his Alexa voice recordings:
I haven't researched it as much as I should. But … I can delete the data that Alexa's gathered on me. … I know where and how to go to do that. So I'm fine with having not fully done the research on whether or not someone can backdoor into listening on my device, but I know that of the data that Alexa gathers, I can go and delete that, so for now that gives me comfort. (Alex)
Two other secrecy techniques were aimed at preventing information flows to other people in the home. First, users planned to use SSAs from different manufacturers in different parts of the home. Specifically, to limit access to personal information when guests could be in other rooms, Daniel ordered Google Home devices for his office because they did not communicate with the Alexa devices in other rooms. His concern was not about surveillance risks by SSAs. Instead, he focused on friends and visitors, who should not receive information about his diaries.
I want the Google [Home] to use on a personal level as well, with diaries and things, where I haven't got the confidence with Amazon [Echo] … And I don't want my diaries shouted all around the house when I got visitors. With the Google [Home] … I only put it in … my office and probably another room, so it's a little more restrictive. (Daniel)
Second, one user mentioned using voice profiles to limit visitor access to personal information. Previous research found that this in-built privacy feature is rarely used and typically unknown (Malkin et al., 2019; Meng et al., 2021). Lucas may be unusual:
However, I can see why some people might not like the idea of a guest coming into their home and then asking Google Home questions about owners' data. However, Google Home has voice recognition, so no one but me should be able to access any of my personal stuff. (Lucas)
Users also engaged in (b) demand management to prevent SSAs from interrupting their activities or intruding too much on their private lives. They reported two practices that helped them manage interruptions.
First, one user interrupted her conversations until the device deactivated. The device still interrupts the conversation, but this technique reduces the length of the interruption.
If that light comes on and I'm not talking to her, I actually just stop talking until that light goes off rather than have her jump in and interrupt. And then she'll do the little, like, 'dudum' [deactivation noise] … and I'll carry on my conversation. … My eyes shift around to her, and like 'she's listening now, let's not talk', then carry on. (Emily)
SSAs also allow other users to 'drop in' on them through their SSAs. For Eric, a drop-in attempt came at a moment when he did not want to be interrupted. His response was to ignore the request for his attention.
I was having breakfast and my [Echo] Show suddenly … woke up on drop in. It was a Facebook friend who had just bought a Show and she was setting it up. But I totally ignored her talking to me. (Eric)

Privacy work without privacy concerns
One of the most interesting findings in our research is that several respondents do privacy work even though they claim they have no privacy concerns. Karen explained her lack of concern about her sole SSA in the kitchen (placement strategy):
I mean for the most part I feel, because I use it very limitedly, that we tend not to have very detailed confidential conversations at the kitchen table …
As we've seen in previous quotes, Lucas indicated that he only provided SSAs with non-sensitive information:
Now, pertaining to data I don't necessarily want to share, like my finances, I use other systems to manage them (e.g., bank apps, mint, credit karma etc.).
He also used voice profiles to pre-emptively counter inter-user privacy concerns:
However, I can see why some people might not like the idea of a guest coming into their home and then asking Google Home questions about owners' data. However, Google Home has voice recognition, so no one but me should be able to access any of my personal stuff.
And Alex and Eric regularly mute the microphone to prevent accidental interruptions:
I turn off the mic in situations where I obviously don't want the device awakening based on me talking about that wakeword. (Alex)
All of these users describe doing privacy work, but all also explained that they have no privacy concerns. One explanation is that they feel protected precisely because they have done work to protect themselves, and so feel less vulnerable. We will return to this issue in the Discussion.

Privacy rationales
Our respondents gave rationales for their lack of privacy concern. Users with some concerns explained why they were less concerned about other aspects of privacy. These rationales shed light on the manifold factors that determine users' perceived privacy threat from SSA use. Users' rationales fall into eight categories. The first four follow Nissenbaum's (2010) normative framework: (a) information attributes, (b) information actors, (c) transmission principles, and (d) the social context (Nissenbaum, 2010). Furthermore, (e) convenience represents a context-relative purpose of SSA use, which users weigh against perceived privacy threats. We find additional factors that influence whether SSAs are seen to threaten privacy. These are (f) expected information consequences, (g) users' privacy skills and awareness, and (h) the choice of SSA use.
First, regarding (a) information attributes, some were clearly aware that information has levels of sensitivity; they only used SSAs for non-sensitive information.
I don't think I've given them more than some fairly basic contact information. (Richard)
I don't really have concerns about it hearing what's going on in my kitchen. (Michael)
Second, regarding (b) information actors, some users trusted the manufacturers as information recipients. They reasoned that manufacturers' business interests and experience encouraged careful handling of sensitive data. This finding aligns with previous research (Lau et al., 2018; Zeng et al., 2017). News reports about manufacturers' protection of customer personal data and the lack of security breach reports also reassured some users of manufacturers' trustworthiness. Other users assumed that no third-party actors, like hackers, would be able to obtain their information. For some, this was related to themselves as information subjects: they assumed they had no information valuable enough to motivate surveillance or hacking.
No one is going to target you for a hack, unless you have a very specific enemy that is capable of doing that. You know, a company or larger corporations, but an individual is not going to be targeted unless you're an inventor or something. (Andrew)
Third, regarding (c) transmission principles, the rules of the data transfer reassured users. The visual recording signal was clear, and they saw the transfer of recordings to company servers as necessary to provide the service.
I hear this from people all the time. 'Oh my god, she is listening to you.' She's going to tell all my secrets or something. And my response to that is, well, first of all, she only communicates with Amazon once she hears the wakeword. … There is a light ring on the top that flashes and lets you know that … she's wanting to hear what commands to send, and she actually has to send those to Amazon for them to work. (Peter)
I know that people have complained about the fact that it records your voice and stores it in your account. I'm not at all bothered about that, because I understand that it needs those samples of voices in order for the machine learning part of the voice recognition to get better. (Adam)
Fourth, some users assumed that the (d) social context protects their information. One user expected that public scrutiny of SSA security ensures that users' data would be safe, while UK users mentioned the legal context that controlled data handling.
I know that I'm not the only person who would be keeping an eye on it. There are plenty of security experts around the world that will have hacked the thing to bits and had it on all sorts of traffic scanners and all sorts of things. (Adam)
I think, very definitely - I would expect that they would inform me in line with the GDPR regulation about any uses that they wanted to make with the data. (Richard)
Fifth, as in Lau et al. (2018), Pridmore et al. (2019), and Zeng et al. (2017), some unconcerned users explained that they accepted the (e) trade-off between privacy and convenience. In contextual integrity terms, this suggests that users weigh the threat to informational norms against the context-relative purpose of convenience.
Sixth, lack of concern was related to (f) expected information consequences if others were able to access their recordings. Users considered the practical and emotional distress of a data breach to be minor. Although third-party access to potentially sensitive information would be a clear violation of informational norms, the minimal expected real-life impact led them not to perceive SSAs as a privacy threat.
Obviously the first thought people have is recording you, an audio of having sex in the bedroom or something like that. Like, if someone else got that, what's that gonna do with me? Like, I'd be like 'ok, hope you like it???' you know, like, in practical reasons, if I was famous maybe that would be different, but I'm just a dude … Like, what's that gonna do to me in a practical sense? … I'm not anybody that has sensitive stuff I would talk about around that. … So, there is nothing that I would be worried about being compromised. (Andrew)
I've got nothing to hide. You know, if someone wants to turn the camera on and watch me naked walking through my house, let them do it! (Daniel)
So, I'm not too worried about privacy. Ok, the Show in the bedroom, when I go to bed … if people wanna see me sleeping or going around the room and then, if that's their cheap thrill, then good riddance! (Eric)
Seventh, (g) users' privacy awareness and skills play a role in their threat perception. Indeed, some users mentioned their potential lack of awareness of privacy risks as a reason for their lack of concern.
The concerns I have just been through the conversations with my husband and his friend who is an ex-IT worker. And she is very, very security-conscious when it comes to the internet. And if there hadn't been conversations with her, I think I'd have been naively oblivious to potential security risks. (Karen)
I must admit it's that naïve acceptance. (Karen)
I've heard people say 'It's like Big Brother listening in on you' but I can't understand how it does that. To me, it doesn't worry me at all. (Susan)
However, in line with previous findings in the smart home (Zeng et al., 2017), more technologically proficient users felt skilled enough to understand and navigate the risks.
And I know there's a lot of people that continuously say 'oh it sits and listens all the time' and all of that. But the thing is that I have enough technical knowledge to know how to be able to check these things. And I know that that's clearly not the case. So it's not something that bothers me in the slightest. (Adam)
Eighth, several pointed to their agency: (h) using an SSA is a choice; no one is forcing them to use a device if they think it may violate their privacy or if, for instance, the goal of increased convenience does not outweigh the perceived risk.
The whole thing about intrusion is: if there's any fear of intrusion, don't go down that path anyway. (Daniel)
So - I comment to them as 'I don't have anything to hide. And, you know, if you're that paranoid and you have something to hide, by all means, don't get one.' (Peter)

Contextual integrity: an expanded model for voluntary adoption of privacy-invasive technology in homes
The conceptual framework of contextual integrity was developed to understand the 'factors determining when people will perceive new information technologies and systems as threats to privacy' (Nissenbaum, 2010, p. 2) when new, potentially privacy-invasive technologies are imposed on a society or social groups. Our results suggest that Nissenbaum's normative framework and context-relative purposes also capture the way people think about privacy threats with SSAs in their households. But our results suggest more. People consider additional issues, which leads us to propose an expanded model of contextual integrity.
This model adds three elements. In terms of the normative framework, our results show that in addition to the four original elements, people also consider a fifth factor: potential consequences. For instance, would images of the user in his bedroom cause embarrassment if a hacker obtained them? Could the user lose money if the device accidentally recorded them while working from home? While not every user might have the same response to these questions, the privacy rationales we identified in Andrew's, Daniel's and Eric's responses show that potential consequences play a role in their evaluation of the privacy threat in their SSA use.
Second, Nissenbaum discusses context-relative purposes, which we subsumed under the umbrella term of 'user agency'. We find that two additional user agency issues are empirically important with SSAs: choice, and privacy skills and awareness.
Choice is a complicated issue. As our respondents point out, putting an SSA into the home and using it is a matter of personal choice. But it is not an all-or-nothing decision. Users also choose the extent of their use. SSAs need not be ubiquitous. Users may choose to use an SSA solely in their kitchen, where they may not have private discussions, or for a limited range of requests, such as for music and recipes. On the other hand, they may have little choice if an SSA is imposed on them by other household members. Hence, the ability to choose the form and scope of SSA use will vary, and our findings show that this factor plays a role in users' evaluation of privacy threats.
Second, when we look at individuals' privacy perceptions, privacy awareness and skills become decisive. Specifically, individuals' skills and awareness are crucial to their ability to understand the scope of privacy threats. Many potential privacy issues are not obvious, such as possible corporate use of data. To perceive a threat, users must be aware that the threat is possible. This requires knowledge of how personal data are handled, what information can be inferred from the data, what safeguards are in place, and what settings exist to modify the operation of the SSA to enhance privacy. Awareness and skills cut both ways: high privacy skills can reassure users by leading them to believe that their risk of a privacy breach is low. If users understand how recordings are transmitted, protected, used, or shared, then they may be able to adopt privacy protective strategies that meet their desired level of protection. Conversely, users with low privacy awareness may fail to recognise risks, or they may lack sufficient skill to calibrate their protective strategies to meet their privacy expectations. Indeed, previous research shows that many users are unaware of inferences that can be drawn from voice data (Kröger et al., 2022) or of the privacy features built into SSAs (Malkin et al., 2019).
In summary, privacy perceptions of SSAs should be understood in terms of two key context-relative categories: the normative framework and user agency. Five factors (information attributes, actors, transmission principles, contexts, and consequences) play a role in determining whether an information flow leads to a prima facie violation of contextual integrity. Three user agency factors (context-relative purposes, choice, and privacy skills and awareness) further contribute to the privacy threat evaluation. Figure 2 shows how the new elements fit into our expanded model. While we developed this conceptual extension based on SSA use, we believe that it could be valuable for understanding privacy threat perceptions for any voluntarily adopted home information technology. Smart home technology remains an optionally used, emerging technology that expands the types of information transmitted from within the home, while also diversifying the context-relative purposes and requiring new privacy skills and awareness to balance its benefits against users' privacy expectations.
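For readers who find a formal rendering helpful, the sketch below expresses the expanded model as a simple data structure. It is our own illustrative simplification, not a formalisation proposed in the original framework; in particular, the evaluation logic showing how the factors might combine is a deliberately crude placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class NormativeFramework:
    """Nissenbaum's four factors plus the fifth proposed in this paper."""
    context: str                  # e.g., 'home'
    actors: str                   # sender, recipient, subject
    attributes: str               # e.g., 'health data'
    transmission_principles: str  # e.g., 'confidentiality'
    expected_consequences: str    # new: anticipated real-life impact

@dataclass
class UserAgency:
    """Context-relative purposes plus the two new agency factors."""
    purposes: list = field(default_factory=list)  # e.g., ['convenience']
    freely_chosen: bool = True                    # new: adoption/scope is a choice
    skills_awareness: str = "intermediate"        # new: 'low'/'intermediate'/'high'

def perceived_as_threat(prima_facie_violation: bool, agency: UserAgency) -> bool:
    """Placeholder logic: a prima facie norm violation may still not be
    perceived as a privacy threat once user agency factors are weighed in."""
    if not prima_facie_violation:
        return False
    # Users who chose the device, value its purposes, and feel skilled enough
    # tended (in our data) to discount the threat.
    discounted = agency.freely_chosen and (
        bool(agency.purposes) or agency.skills_awareness == "high"
    )
    return not discounted

# Example: a norm-violating flow, discounted by agency factors.
if __name__ == "__main__":
    agency = UserAgency(purposes=["convenience"], freely_chosen=True)
    print(perceived_as_threat(prima_facie_violation=True, agency=agency))  # False
```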

Discussion and conclusion
Table 1 summarises our results. The large number of new findings, marked in bold, indicates how much is added by qualitative exploration of user privacy work and rationales. Below we discuss how our new findings relate to previously identified issues.
We identified five new secrecy techniques and two new demand management techniques.Three secrecy techniques were aimed at restricting information flows to corporate recipients, two were focused on protecting information from flowing to personal contacts in the home.
Privacy rationales were related to the normative framework plus context-relative purposes (Nissenbaum, 2010), as well as information consequences, privacy skills and awareness, and choice. The high privacy skills rationale has been previously found in the smart home (Zeng et al., 2017). While previous studies have pointed out that users often lack awareness of security risks (Zeng et al., 2017) and data storage policies (Malkin et al., 2019), our analysis shows that some users recognise that their lack of concern is due to their weak technical skills. Based on these findings, we have proposed an expanded model of contextual integrity for new, voluntarily adopted ITs in the home.
Several points emerge for discussion. First, the usual assumption is that privacy attitudes influence protective behaviour (Smith et al., 2011). We find the reverse is also true. SSA users like Karen, Lucas, Alex, and Eric argue that their privacy work has been so successful that they have no reason to be concerned. They felt that they had protected themselves sufficiently that the risk of a privacy violation was small. This reverses the relationship found in prior literature. From these users' point of view, the logic is that they restricted the information flows to the device. Because of this, they developed privacy rationales, typically that little information was being shared with the device or that only uninteresting information was shared. This meant they did not have to be concerned about privacy issues.
Hence, attitudes can influence actions, but actions can also influence attitudes. Detecting the direction of influence with typical cross-sectional social science data may be difficult. To understand the causal direction, future research requires longitudinal data, analysing the development of privacy concerns and behaviour from the moment of acquisition.
Second, while Nippert-Eng (2010) viewed demands for attention as arising exclusively from interruptions by humans, with SSAs the devices themselves may interrupt everyday activities, such as sleeping or conversing with friends, when they react to sounds that they interpret, correctly or incorrectly, as their wakeword. Although smartphone notifications about non-personal communication may be seen as a similar non-human interruption (Mehrotra et al., 2016), unintended interruptions by SSAs are different in that smartphone notifications can be pre-emptively, automatically filtered, e.g., by disabling certain notifications while keeping the primary purpose, being reachable, intact. To disable accidental SSA activations, there is only the generic solution of muting the microphone, which also deactivates it for intentional commands, thus defeating the purpose of an SSA that responds to a user's verbal requests.
Finally, one might imagine that privacy work is related to age, gender, or other demographic attributes. Although our qualitative data are not a random sample, it is still interesting that no pattern emerges. Looking at privacy reasoning, however, one aspect stands out: female participants with low technological proficiency and low use complexity tended to assume that they lacked privacy skills and awareness, and that this could explain their lack of concern. In contrast, male participants with high or intermediate-to-high technological proficiency and highly complex SSA uses tended to feel competent enough in controlling their privacy with SSAs. This finding resonates with previous research on gendered differences in self-assessed technical skills (Hargittai & Shafer, 2006) and suggests that these differences may extend to self-assessed privacy literacy. Future research could test users' privacy literacy (Trepte et al., 2015) and awareness of SSA privacy risks and related security threats to examine how these interact with privacy work and rationales, as well as with demographic characteristics.

Figure 2. Expanded model of contextual integrity for new, voluntarily adopted technologies in the home. Note: New findings are highlighted in bold.