Abstract
I aim to illustrate how the recommender systems of digital platforms create a particularly problematic kind of vulnerability in their users. Specifically, drawing on theories of scaffolded cognition and scaffolded affectivity, I argue that a digital platform’s recommender system is a cognitive and affective artifact that fulfills different functions for the platform’s users and its designers. While it acts as a content provider and facilitator of cognitive, affective, and decision-making processes for users, it also provides platform designers with a continuous and detailed stream of information regarding users’ cognitive and affective processes. This dynamic, I argue, engenders a kind of vulnerability in platform users, structuring a power imbalance between designers and users. This occurs because the recommender system can not only gather data on users’ cognitive and affective processes but also affect them in an unprecedentedly economical and capillary manner. By examining one instance of ethically problematic practice at Facebook, I argue that digital platforms are not merely tools for manipulating or exploiting people: especially through their underlying recommender systems, they can single out and tamper with specific cognitive and affective processes, functioning as tools designed for mind invasion. I conclude by reflecting on how understanding such AI systems as tools for mind invasion highlights some merits and shortcomings of the AI Act with regard to the protection of vulnerable people.
1 Introduction
Digital platforms such as social networking sites, e-commerce platforms, and streaming and dating apps have brought people a wide array of opportunities to fulfill their goals and act as they desire. However, they have also brought about many significant ethical challenges, having a great impact on the global economy (Srnicek 2017), creating new kinds of power imbalances (Zuboff 2020), and shifting shared moral values across different societal contexts (Sharon 2021). Given the increasingly relevant role of digital platforms in society, it is not surprising that legal frameworks and principles regulating the practices of digital platforms and their technologies are rising in number. The European Union is a leading example in this context, having introduced the Digital Markets Act (DMA) (Regulation (EU) 2022/1925) and the Digital Services Act (DSA) (Regulation (EU) 2022/2065), and being soon to introduce the Artificial Intelligence Act (AI Act) (forthcoming).Footnote 1 These regulatory frameworks share the concern of aptly moderating the social and market power of digital platforms, including addressing the respect of citizens’ rights.
The goal of this paper is to inform the ethical and legal discussion of digital vulnerability, a concept that is quickly rising to prominence with regard to the regulation of digital and AI-based technologies. Specifically, I intend to explore the shaping of power relations between the users of digital platforms and their service providers, as mediated and determined by the design of the technology itself and the kinds of actions such technology affords. I will apply the theories of cognitive and affective scaffolding to social media platforms, which serve as paradigmatic examples of cognitive technologies engineered to collect user data and modulate users’ affective processes (specifically, in order to acquire as much information on users as possible). The recommender systems underlying social media platforms thereby play a twofold role: as social structures for users and as information gatherers for the platforms’ designers. On the one hand, the algorithms composing the recommender systems extend the users’ capacity for interacting with content and people, enabling new kinds of action. On the other hand, the recommender system also extends the designers’ capacity to investigate users’ beliefs, intentions, and desires, thus giving them access to the users’ mental states. Thanks to this twofold dynamic, service providers, through recommender systems, have systematic access to users’ minds – a situation that, in the age of ever-developing information and cognitive technologies, needs to be properly tackled by regulatory frameworks.
In Sect. 2, I introduce the concept of digital vulnerability, tracing its philosophical history and highlighting its significance in contemporary digital consumer law. In Sect. 3, I introduce theories of cognitive and affective scaffolding, as well as their current applications with regard to digital technologies. In Sect. 4, I introduce the notion of mind invasion, understood as a kind of vulnerability exploitation that is engendered by the social and material structures the agent interacts with and lives in. In Sect. 5, I provide an example of digital mind invasion through a controversial practice carried out by Facebook. In Sect. 6, I propose the concept of digitally scaffolded vulnerability to further investigate specific ways in which digital platforms like Facebook might exploit and instrumentalize people. In Sect. 7, I explore how the recently passed European AI Act may fall short in addressing the structural power imbalance characteristic of the relationship between users and service providers of social media (and potentially other) digital platforms.
2 Digital Vulnerability
The concept of vulnerability has an intricate history within philosophy, being particularly prominent within bioethics and feminist philosophy. A preliminary definition of vulnerability is the state of being sensitive to harm, injustice, and impediment to achieving one’s values and goals – in potentially autonomy-undermining ways. Within bioethics, the idea of vulnerability works as an important concept for research ethics guidelines. As Rogers (2013) points out, the idea that certain people are more likely to incur specific harms and wrongs in (clinical) research settings pervades research ethics guidelines such as the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979), as well as legislation regulating scientific research (Rendtorff and Kemp 2019). More generally, vulnerability has been considered a key concept in welfarist (Goodin 1985), care (Kittay 1999), and feminist ethics (Mackenzie et al. 2013). Importantly, vulnerability has also been considered a prerequisite of autonomous action, as being vulnerable can be essential for the development of one’s own autonomy and dignity (Anderson 2013; Wiesemann 2017).
Despite its apparent relevance, especially in more applied disciplines such as bioethics (Luna 2019), vulnerability is characterized by an inherent tension in its definition and application. On the one hand, the concept of vulnerability seems to be relevant when applied to specific groups or individuals that, due to a position of subalternity or disenfranchisement when it comes to protecting themselves or their interests, are more vulnerable to physical, psychological, or other forms of harm. On the other hand, there is a sense in which all human beings are vulnerable, as, in virtue of our nature as finite, embodied, and temporally situated beings, we are sensitive to a variety of harms – a conception exemplified in the work of Gilson (2013). In other words, it is often unclear whether vulnerability is ethically salient only when applied to specific, minority groups (i.e., in a particularistic sense) or whether all human beings are vulnerable (i.e., in a universalistic sense).
Vulnerability plays an important role in consumer law, where it is typically understood in a particularistic sense (Hill and Sharma 2020): some marketplace participants may be more susceptible to harm from marketers’ practices, or unable to take full advantage of the marketplace’s opportunities (Basu et al. 2023; Pechmann et al. 2011). More generally, work by legal scholar Fineman (2010, 2013) endorses a universalistic understanding of vulnerability as a way to criticize neoliberal conceptions of the subject (as a citizen and as a consumer) as perfectly rational and independent from their social context. Both of these views play a role in the definition and discussion of digital vulnerability.
The concept of digital vulnerability is being employed to account for the unforeseen ways digital technologies can harm, exploit, or instrumentalize their users in ways that are engendered or aggravated by the technology’s design and functioning. As a recent report by the Organisation for Economic Co-operation and Development (OECD 2023) observes, several nations in the last few years have produced legislation aimed at protecting vulnerable citizens in digital contexts (see esp. pp. 34, 36). The introduction of the GDPR (Regulation (EU) 2016/679) in the European context has also been interpreted as addressing the vulnerability of platform users in the face of unfair data collection practices (Malgieri 2023; Malgieri and Niklas 2020). A lack of control over one’s information can lead to information being used against one’s interests and without consent – i.e., a lack of privacy can lead to an increase in vulnerability (Calo 2018). While the GDPR also acknowledges that user vulnerability derives from a power imbalance between the data controller (the digital platform as a market actor) and the consumer, its primary focus is the protection of vulnerable groups, especially children (Malgieri 2023). While there is an understanding within the field that the vulnerability of digital consumers is layered, comprising aspects both independent from and engendered by the digital platform, the GDPR and other regulations appear to prioritize the protection of individuals who are already deemed vulnerable.Footnote 2
In contrast to such a particularistic conception of vulnerability, Helberger et al. (2022) adopt a universalistic conception of vulnerability, insofar as the totality of vulnerable people (i.e., people who are at risk of the harms brought about by digital platforms) consists of the totality of platform users. They define digital vulnerability as “a universal state of defenselessness and susceptibility to (the exploitation of) power imbalances that are the result of increasing automation of commerce, datafied consumer–seller relations, and the very architecture of digital marketplaces” (176). Similarly, DiPaola and Calo (2024) argue that the vulnerability of consumers is engendered by the service provider’s capacity to manipulate the platform as a technological and social environment – a kind of risk that, in principle, all online users are vulnerable to. Both these accounts, while acknowledging that digital vulnerability is engendered by a variety of social and technological factors, view it as a characteristic of platform users in general.
3 Theories of the Scaffolded Mind
I now turn to introduce two theories, grounded in philosophy of mind, that will inform my discussion of digital vulnerability for the remainder of the paper. These approaches belong to a family of theories in the philosophy of mind and the cognitive sciences that view cognition, affectivity, and the mind as always, to some extent, co-determined by the social and material environment of the agent (Newen et al. 2018). I will subsume under the label of theories of the scaffolded mind two separate but closely related theories, which are theories of scaffolded cognition and scaffolded affectivity.
Work by Menary (2007), Sterelny (2010, 2012), and Sutton (2010) provides examples of theories of scaffolded cognition, which view cognitive and decision-making processes as co-determined and shaped by “scaffolds”, i.e., aspects of agents’ social and material environment. As Sterelny (2010) points out, this kind of approach is grounded in strands of evolutionary biology that emphasize how the process of niche construction (i.e., environmental modifications made by organisms to facilitate their life and survival) contributed significantly to the evolutionary development of our species, including cultural evolution (Laland et al. 2000; Sterelny 2012). Scaffolded cognition theorists have also been interested in the study of cognitive artifacts, understood as human-made physical objects that aid, enable, or improve cognition (Heersmink 2015; Hutchins 2014), with the potential to modify existing cognitive capacities and enable new ones (Fasoli 2018).
In contrast, theories of affective scaffolding focus on the way our socio-material environment not only influences the development and realization of our emotions, feelings, and moods but can also be modified to make us feel in certain ways. Some authors have argued that, in the same way we modify our own environment to enhance or enable cognitive processes and problem-solving activities, environmental modifications can also be applied to alter or moderate our own affectivity (Candiotto and Dreon 2021; Coninx and Stephan 2021) – sometimes claiming that emotions can also be scaffolded into artifacts in our environments (Piredda 2020). The idea is that the development and experience of affective states are co-determined by our social and material environment. Importantly, this approach does not treat cognition and affectivity as two separate domains of the mind, but rather as aspects of our mental life that, while distinguishable, are in practice intertwined.
Given the strong attention to artifacts within theories of the scaffolded mind, it is not surprising that theories of scaffolded cognition and affectivity have often been applied to digital platforms. Much research by cognitive and affective scaffolding theorists has focused on the way digital platforms transform interpersonal relationships and information consumption behavior, in ways both negative (Alfano et al. 2020; Marin and Roeser 2020) and positive (Steinert et al. 2022) for platform users. However, despite the value of this research, I intend to show that a scaffolded mind approach is useful not just for understanding changes in information consumption behavior but also for analyzing more general ethical implications, specifically concerning our autonomy and vulnerability – a topic to which a strand of scaffolded mind research is dedicated, and to which I turn next.
4 Mind Invasion and the Blurring of Mental Boundaries
Theories of the scaffolded mind analyze the way parts of our environment alter our cognitive, affective, and decision-making processes. In this sense, one of the most researched issues when it comes to the ethical, moral, and social applications of theories of the scaffolded mind is human autonomy. On the one hand, there is a clear sense in which certain social and material structures can play a significant role in enhancing and facilitating autonomous decision-making processes (Anderson and Kamphorst 2015; Heath and Anderson 2010). On the other hand, there has also been research investigating how humans’ natural reliance on environmental modifications and tendency to arrange the environment can have negative implications for people’s capacity to live and function autonomously. A paradigmatic case of such research is the concept of mind invasion, introduced by Slaby (2016) to expand the typical understanding of agent–scaffold relationships in the philosophy of mind.
In order to understand the scope of the concept of mind invasion, it is important to underline the primary target of Slaby’s argument – what he calls the “user-resource” model of situated affectivity. On this model, an agent is, first and foremost, an active environmental engineer, fundamentally in control of the environment: they are a user, and the scaffold in the environment is a resource. In contrast, rather than seeing the scaffold as a mere resource the agent has control over, Slaby proposes that a socio-material structure that is built by agents and constructed to theoretically fulfill the agent’s needs and goals can, in fact, end up shaping the agent’s behavior and state of mind, even in potentially detrimental ways. In other words, the agent can be driven by their own environment contrary to their standing cognitive and affective states. Specifically, mind invasion occurs when, by adapting and attuning to a (social and material) environment, an agent’s affective life (which includes emotions, moods, and feelings) comes to be affected or shaped by that environment (and agents within it) in a way that goes against the agent’s preexisting goals and interests. Slaby proposes the example of the corporate workplace as an environment whose preexisting dynamics, including gossip and pressure from one’s boss to deliver results, might not only negatively affect the life of a new employee, but also push them into adapting and complying with that niche through habituation and pressure rather than (rational) persuasion. If, according to the user-resource model, the agent is in charge of the environment, in cases of mind invasion the environment is “acting upon” the user: the capacity of the agent to adapt and purposefully modify their environment naturally comes with a receptivity to that environment and other agents within it.
Most relevantly for our discussion, as Krueger and Osler (2019) point out, this kind of dynamic can characterize not only institutions but especially niches that are significantly co-constituted by technological artifacts, such as Internet-based technologies. A social media platform is designed to enable you to express yourself, manifest your thoughts and preferences, and share them with other people, while also viewing and interacting with what other people share. Furthermore, a person’s beliefs, feelings, and desires are objectified in the posts themselves, which are subject to quantifiable interactions and evaluations from others. A social media platform like Facebook or Instagram is designed for the behavioral patterns of sharing and evaluating information to become entrenched habits in its users, and it is thanks to that design and the otherwise impossible kinds of actions it affords that people attune themselves to the platform’s ecosystem. In this sense, mind invasion is enabled by the fact that the users of a social media platform are led to feel a certain way because they (unconsciously) attune themselves to the platform’s social and technological dynamics, and to the behavioral patterns that the platform lays out for them. As Valentini (2022) points out, in such cases, what appears to be – or rather, what starts as – a mere resource for a user ends up shaping the technology’s user into carrying out (affective) practices and habits that are, at least in the case of many digital platforms, encouraged by the technology’s very design.
To summarize, theories of cognitive and affective scaffolding highlight how the socio-material environment has a specific kind of implication for the agents who live in and interact within it. Niche construction involves both modifying the environment and adapting and attuning to the niche itself. Such attunement leads to a higher degree of vulnerability to various elements of our environment – a vulnerability whose ontogenesis and manifestations are grounded in the need to structure our life and socio-material environment through niche construction, and in the characteristics of the environment itself. This vulnerability can lead to mind invasion, i.e., cases in which this dispositional vulnerability is exploited against the agent’s interests. In what follows, I will analyze how mind invasion is achieved specifically through an artifact, i.e., Facebook’s recommender system.
5 Facebook’s Recommender System as an Affective Artifact and a Tool for Mind Invasion
Before fully understanding the impact of Facebook’s recommender system on user vulnerability, it is necessary to characterize the recommender system as an artifact. What distinguishes Facebook’s recommender system, as well as other AI-based systems, from other kinds of artifacts is that such systems possess a significant degree of adaptivity and initiative in their behavior (i.e., they do not require immediate coupling with a human agent in order to function) (Floridi and Sanders 2004; van de Poel 2020). A recommender system is specifically an AI-based technology whose purpose is the fulfillment of user preferences. As Burr et al. (2018) explain, recommender systems work through a three-step loop. First, they choose and recommend one of a set of possible actions that can bring forth interactions with users (be it a news item, a video, or a product purchasable on an e-commerce platform). They then acquire information, based on the user’s choice, regarding the user’s knowledge and preferences. Finally, the cycle repeats, with the recommender system providing more apt possible actions based on the information previously acquired.
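The three-step loop just described can be sketched in a few lines of Python. The class, its method names, and the additive scoring rule are all illustrative assumptions made for exposition, not the implementation of any actual platform:

```python
import random
from collections import defaultdict

class RecommenderLoop:
    """Minimal sketch of the three-step loop described by Burr et al. (2018).
    All names and the scoring rule are illustrative assumptions."""

    def __init__(self, catalog):
        self.catalog = catalog                  # candidate items (posts, videos, products)
        self.engagement = defaultdict(float)    # inferred preference per topic

    def recommend(self):
        # Step 1: choose the action (item) expected to elicit interaction,
        # with a little random noise to break ties and explore.
        return max(self.catalog,
                   key=lambda item: self.engagement[item["topic"]] + random.random())

    def observe(self, item, engaged):
        # Step 2: update the user model from the observed response.
        if engaged:
            self.engagement[item["topic"]] += 1.0

    def step(self, user_reacts):
        # Step 3: the cycle repeats, better informed than before.
        item = self.recommend()
        self.observe(item, user_reacts(item))
        return item
```

Running such a loop against a simulated user who only engages with one topic quickly concentrates recommendations on that topic, which is the feedback dynamic the rest of this section examines.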
Importantly, recommender systems have an ambivalent function depending on the social role of the people who interact with them: in other words, recommender systems can be characterized as being in a multistable relationship with the specific agents that interact with them. The notion of multistability comes from the postphenomenological tradition in the philosophy of technology and denotes the fact that an agent might be in different kinds of relationships with a piece of technology. This kind of conceptualization has been consistently used to describe how a piece of technology can subtly shape power relationships between people and affect the autonomy and well-being of certain social groups. Instances of hostile architecture, such as benches with multiple armrests, may not be meaningful to certain groups of people but are prohibitive for people deprived of a fixed home, and can be discriminatory by design towards those people (Rosenberger 2014, 2017). A piece of technology can thus be used as a means to shape power relationships between groups of people or institutions, or to produce inequality. Fully accounting for the multistable relationships that users and service providers have with recommender systems is fundamental in order to understand their uniqueness as cognitive and affective artifacts, and how they create new kinds of vulnerabilities in people.
5.1 Recommender Systems as Cognitive and Affective Scaffolds: Users’ and Service Providers’ Perspective
Taking the perspective of Facebook’s users, the recommender system plays what might be called an infrastructural role. Specifically, users aim to engage with other people and with content on the platform. Algorithms composing the recommender system thereby have the function of providing content that users are likely to engage with. In other words, the recommender system is a complement to the social, cognitive, and affective practices enabled by the platform. Such practices include not only those that would be possible, albeit very differently realized, even without the platform’s existence (e.g., showing appreciation of others, consuming information, discussing with others) but also practices that would not be possible otherwise (e.g., trolling, doom-scrolling).
It is important, here, to consider the duality of the recommender system as an affective and cognitive scaffold from the users’ perspective. That cognitive processes are not “purely rational” but rather non-detached and characterized by at least some degree of affective involvement is also relevant in the context of Facebook use: engagement with content is not a purely deliberative process, but is also driven by the user’s interests and preferences – a term that presupposes at least a degree of affective involvement. We might choose to engage with certain events, people, or information sources not only depending on how we feel before engaging with them but also because that content might make us feel in certain ways or alter our mood – it can (dys)regulate our affective processes. The information processing that occurs while using social media platforms like Facebook thus also has an affective component (Marin and Roeser 2020).
The recommender system works as a preference fulfiller for the user, by providing them with content and information sources they are more likely to engage with. To do so, the recommender system decodes what kind of content a target user has engaged with in the past, as well as information about the users themselves, which might include both information the user provided willingly and traces the user might have left unreflexively.Footnote 3 To fulfill their function as content providers and cognitive and affective scaffolds, the algorithms composing the recommender system also need to process information regarding the user’s activities online. Arguably, rather than the recommender system being just an extension of the user’s cognitive and affective life, the users themselves become the object of the recommender system’s information processing. However, while the recommender system is the primary processor of user-related information, it is not the only receiver: in fact, the totality of the information processed by the recommender system is accessible by the platform’s service providers. The data collected by the algorithm serves not only to perfect the recommender system’s purpose of favoring user engagement, but also allows the designers to take possession of it and sell it to third parties. In other words, for the service provider, the recommender system realizes the collection of user data, which is the core of the business model underlying the platform.
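The dual role described in this paragraph — one and the same interaction event serving both the user-facing ranking and the provider-facing data collection — can be illustrated with a minimal sketch. Every name here is hypothetical, introduced only to make the two data flows explicit:

```python
# Illustrative sketch of the dual role described above: a single interaction
# event both updates the model that ranks the user's future feed and is
# appended to a provider-side log that can be exported or sold on.
# All names are hypothetical, not an actual platform API.

user_profile = {}        # scaffolds the user: drives future recommendations
interaction_log = []     # scaffolds the provider: a record of user activity

def record_interaction(user_id, item, reaction):
    # Consumer 1: the ranking model serving the user's feed.
    user_profile[item["topic"]] = user_profile.get(item["topic"], 0) + 1
    # Consumer 2: the provider's data store, accessible beyond the platform.
    interaction_log.append({"user": user_id,
                            "topic": item["topic"],
                            "reaction": reaction})

record_interaction("u1", {"topic": "politics"}, "angry")
record_interaction("u1", {"topic": "politics"}, "like")
```

The point of the sketch is structural: nothing in the user-facing function of `record_interaction` requires the log, yet the same call feeds it, which is why the user cannot observe the second flow.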
The recommender system is a cognitive and affective scaffold for the platform’s user, but also for the service providers. However, this extension has a quite different purpose for the two actors. For the platform’s users, the recommender system works as a “structurer” for information consumption behavior and interaction. For the service providers, the recommender system works as an instrument for the identification and extraction of information about the platform’s users – not just regarding their identity, but also regarding their desires, beliefs, preferences, and attitudes. In other words, the recommender system works as a lens for the service providers to look into users’ information processing behavior and social and affective life.
This technologically-mediated social dynamic can be best represented as a case of multistability. People might develop and maintain different kinds of relationships with a singular piece of technology depending on its purpose, the manner in which the technology is used, their level of skill in interacting with it, and their social role prior to the technology’s implementation. Users and service providers not only have a quite different experience of the recommender system, but the service provider can (to some degree) also interpret the recommender system’s behavior and identify it as an entity providing information to users. However, users do not experience the recommender system as such, but only the content the recommender system provides. Users cannot meaningfully interact with the recommender system because it does not appear as an entity of its own to them – and because of this, users do not have immediately available resources to understand the recommender system and its behavior, except for its general function. In contrast, service providers possess the skills, technological literacy, and material resources to actually interact with the recommender system, studying and (re-)programming it. This asymmetry is what determines the multistable relationship that users and service providers develop and maintain with the recommender system – and such asymmetry enables the service providers to observe users’ preference formation and information processing behavior, and to eventually influence such behavior by developing and adjusting the recommender system.
It is important, here, to understand how this multistable relationship highlights the subtlety of the power exerted by service providers. On a scaffolded mind approach, the algorithm complements the users’ preference formation and fulfillment processes while acquiring data on them as they develop in real time. Firstly, this is enabled by the fact that social media platforms such as Facebook enable a wide variety of practices, including radically different ways of interacting with people and new kinds of actions altogether – such as scrolling through information indefinitely, Internet connection permitting. Secondly, and because these practices are intertwined with the technology’s use, users’ information processing, belief- and desire-formation, and affective and decision-making processes – the processes that make up their mental life – are all structured by and developed throughout the interaction with the technology. Because of this intertwinement between the technology and the user’s cognitive and affective processes, the recommender system structures cognitive and affective processes as they unfold in real time. As a lens for the service providers, the recommender system can provide a very close, intimate view of such processes: because of its role as a scaffold and an information gatherer, the recommender system is a tool to observe and potentially influence, if not outright anticipate, these processes of information consumption and preference formation as they unfold.
5.2 Social Media’s Recommender System as a Tool for Mind Invasion: Promoting Anger for Engagement on Facebook
Understanding the recommender system as a multistable cognitive and affective scaffold, whereby the technology enables an imbalanced relationship between users and service providers, can be generalized to all digital platforms using a recommender system to infer and fulfill user preferences. Furthermore, while the imbalanced relationship can be potentially problematic, it is not immediately clear why it would be especially so in the case of this technology. However, there are two relevant things to note here. Firstly, the user of a digital platform does not experience the recommender system as a distinct entity while using the platform. The impossibility of identifying the recommender system as such should be emphasized: one might wear glasses and come not to pay attention to them as an item, incorporating them into one’s everyday practices, while still having the resources and the possibility to interact with them as an entity of their own. Similarly, while I might be watched through a camera, I am still able to recognize the camera as such, and (inter)act (with it) according to my own needs and desires. In contrast, a platform user cannot choose to interact with the recommender system “directly”, as the platform’s interface simply does not afford that kind of option – one can, at best, interact with the content the system provides.
Secondly, what makes this asymmetrical, technologically-mediated relationship more concerning are the different kinds of services that the different agents in the relationship have access to. While the platform’s users have access to content and people to interact with (thanks to the recommender system’s activity), the service providers (through the same means) have access to the affective and decision-making processes of users as they unfold in real time, without users experiencing being watched or necessarily realizing why potential changes in these processes occurred. In other words, in cases of mind invasion, the epistemological imbalance between users and service providers entails a specific kind of power imbalance, too. To illustrate why such an imbalance is problematic, I will analyze a controversial case surrounding Meta’s social media platform, Facebook, that emerged in 2021, through the conceptualization of Facebook as a multistable cognitive and affective scaffold.
I intend to study a controversy that emerged in October 2021, amid a series of leaked documents released by former Meta employee Frances Haugen and analyzed by the Wall Street Journal (WSJ) (Hagey and Horwitz 2021), which specifically highlight how the recommender system itself plays a key role in the exploitation of users. The company, which is the parent of platforms such as WhatsApp, Facebook, and Instagram, is far from new to controversial acts, most notably including the harvesting of user data by the political consulting firm Cambridge Analytica without users’ consent (Cadwalladr and Graham-Harrison 2018). The case I will examine exemplifies the role of the recommender system as a tool for mind invasion from the service providers into the users’ cognitive, affective, and decision-making processes.
According to the WSJ’s investigation, Meta analysts noticed in 2019 that the content that was most visible on Facebook, i.e., that was promoted the most by the recommender system, was primarily the content that caused the highest number of angry reactions. Specifically, Facebook introduced a set of five “reactions” (“love,” “haha,” “wow,” “sad” and “angry”) in 2016, alongside the original “like” button. It was later observed not only that the recommender system was promoting content that caused “reactions” more than content that caused “likes”, but also – as theorized by company employees as early as 2017, and confirmed in 2019 – that posts with angry reactions were the most promoted by the recommender system. The reason for this dynamic is not counterintuitive. From the service provider’s point of view, the goal is to gather as much user engagement as possible: as noted in 2018 by then BuzzFeed CEO Jonah Peretti, more controversial and polarizing content that brings about (not necessarily civil) disagreement among people is more likely to generate engagement. Apparently, while this concern was already voiced in 2017 and later confirmed, the dynamic was not addressed by the company beyond internal debate – that is, until whistleblower Frances Haugen leaked internal documents to the WSJ that substantiated the existence of the problem and the company’s executive inaction.
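The ranking dynamic described above can be sketched schematically. The following toy example is entirely illustrative: the post data, the weights, and the 5:1 reaction-to-like ratio are assumptions for the sake of exposition (the WSJ reported that reaction emoji were at one point weighted several times a “like”), not Meta’s actual parameters. It shows only the structural point: once reactions count more than likes in an engagement score, content that provokes reactions – anger included – is systematically ranked above calmer content with far more likes.

```python
# Illustrative sketch (hypothetical data and weights, not Meta's actual
# system): engagement-weighted feed ranking where emoji "reactions" count
# more than "likes".

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    reactions: int  # love/haha/wow/sad/angry combined

# Hypothetical weights: a reaction counts five times a like.
LIKE_WEIGHT = 1
REACTION_WEIGHT = 5

def engagement_score(post: Post) -> int:
    """Toy ranking signal: weighted sum of engagement events."""
    return LIKE_WEIGHT * post.likes + REACTION_WEIGHT * post.reactions

feed = [
    Post("calm news summary", likes=900, reactions=50),    # score: 1150
    Post("polarizing hot take", likes=200, reactions=400), # score: 2200
]

# Despite far fewer likes, the reaction-provoking post ranks first.
ranked = sorted(feed, key=engagement_score, reverse=True)
print(ranked[0].title)  # → polarizing hot take
```

The point of the sketch is that no one need intend to promote anger: a single scalar objective (weighted engagement) is enough to make anger-provoking content the rational output of the ranking function.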
In a sense, this instance of Meta exploiting users through the platform for the purposes of engagement maximization is nothing new. Many of the company’s controversial corporate policies (or, in some cases, lack thereof) have been brought to light over the years. Specifically, platforms owned by the company were used for the purposes of manipulation (Susser et al. 2019) or instrumentalization (Jongepier and Wieland 2022) of their users, especially through explicit design choices (Parmer 2022; Schwengerer 2022). Many of these claims of manipulation explore how the service providers, the recommender systems, the interface design, or all three combined work to undermine the autonomy and/or the well-being of the platform’s users. What I want to highlight here is something that generalizes to many of the controversies surrounding Meta and its platforms and, in this case, Facebook specifically.
From a scaffolded mind perspective, the recommender system is not just being utilized to instrumentalize people to maximize engagement, regardless of (or rather exactly by undermining) their wellbeing. What the recommender system is doing consists of deliberately making users feel a certain way regarding the information that they are processing, and framing and nudging interaction with other consumers of that content in light of the way users feel. As a cognitive and affective scaffold, the recommender system might appear as a “mere” provider of content and preference fulfiller, but this role is intrinsically value-laden, as it is driven by the maximization of engagement for data collection and profit for the service providers. Facebook’s recommender system is a scaffold supportive of the cognitive and affective processes of Facebook users, enabling the fulfilment of their preferences regarding socialization and information consumption. However, it is also creating the conditions for people to be made to feel angry and potentially fight with others, likely against their initial intentions and their own best interests. In other words, Facebook’s recommender system works as a tool for mind invasion, an instrument for specifically tampering with users’ affective, cognitive and decision-making processes, while bypassing their attention and their capacity to critically engage with what makes them feel like that. Footnote 4
Furthermore, this kind of mind invasion is intentional. Krueger and Osler (2019) argue that Internet-based platforms are a versatile tool for affective scaffolding, given the relative ease with which we can stably access them (i.e., provided there is an Internet connection) and their versatility when it comes to the actions and (social) practices they afford. These technologies are, in their words, very apt to engineer our affective life. This versatility can lead to systematic affective dysregulation: one may come to rely upon a platform like Twitter to always have access to news regarding current events and feel informed, such that lack of access to the platform might undermine this feeling, even to the point of experiencing insecurity and loss of control. Similarly, using Instagram to systematically share one’s own experiences with others can make one feel more susceptible to others’ opinions. In the Facebook case I have explored, the affectivity of users is indeed being engineered – not by the users themselves, but by the recommender system. Regardless of whether the service providers’ inaction was intentional or the result of negligence, the recommender system prioritized promoting engagement by altering the way people would feel without their prior consent and, very likely, against their own intentions when utilizing the platform. Through mind invasion, users come to be engineered regarding the way they feel about who or what they are engaging with online, and about themselves throughout and as a result of such interaction.
6 Digitally Scaffolded Vulnerability: Recommender Systems as “Boundary-Blurrers” for Mind Invasion
So far, I believe it is indisputable that Facebook’s practice unveiled by the WSJ is a case of mind invasion, one with a negative and ethically questionable impact on users’ wellbeing and capacity to independently formulate and pursue their desires. What does such a case mean for our understanding of vulnerability, as something that calls for moral attention? Vulnerability is a property of human agents, grounded both in their existential finitude and in the structures and power relations that characterize their social environment. If external scaffolds can alter, enhance, and enable new cognitive and affective processes, then one might argue that this complementarity comes with risks. The development of new technologies capable of creating a new array of practices, such as social media platforms, can create vulnerabilities that are not simply due to overreliance on a scaffold for the completion of a process. The scaffolding of a cognitive and affective process onto an artifact does not just entail that, were the artifact taken away, the agent’s capacity to function would be hindered – rather, the kinds of potential harm to an agent’s integrity and autonomy can go beyond a typical risk-benefit trade-off.
It is in this context that, I believe, one ought to reconceptualize the way we talk about autonomy and vulnerability so as to specifically characterize our relationship with cognitive and affective artifacts – and to do so through the concept of scaffolded vulnerability, intended as a kind of vulnerability intrinsic to the interaction with scaffolds enabling new forms of cognition and emotion. Cognitive and affective artifacts that can complement, extend, and transform those processes and, potentially, some of our capacities can, under those very same circumstances, expand not just the number but the kinds of harms that agents can be subjected to, given the entrenched and habitual use of those artifacts. In itself, the idea that human vulnerability is shaped by our relations with our social and material environment is, again, not particularly innovative. Scaffolded vulnerability, specifically, is intrinsic to the expansion of our cognitive and affective processes into (socio-)material scaffolds, including technologies. Not only are some vulnerabilities specifically engendered by our relationship with technology: such technologically engendered vulnerabilities can also be taken advantage of by more technologically savvy people, if not by the designers and providers of those technologies themselves. The multistable relation between Facebook’s users, the recommender system, and the service providers exemplifies a technologically-engendered vulnerability, where the artifact’s designers are in a position of power over that technology’s users in terms of the capacity to observe and potentially tamper with not just their behavior, but their cognitive and affective processing. The vulnerability of Facebook users is not just circumstantial to the technology’s use, but specifically reinforced by the technology’s design – in our case, the recommender system’s design as a preference fulfiller.
What is novel, here, is not simply that a given scaffold can engender some kinds of risks and threats that are specific to that scaffold’s nature: after all, the use of notebooks to remember our daily schedule makes us vulnerable to losing that information, if the notebook is destroyed. What is particularly worrying in this context is that the platform (inclusive of the recommender system) acts as a boundary-blurrer between my own mind and the mind of others, i.e., of the platform’s service provider. According to a more traditional, user-resource model of the human mind, the relations we build with the scaffolds we interact with are, at least prima facie, something we initiate and maintain. Intuitively, and in accordance with the “user-resource model”, the structures that we build and modify to enhance, regulate or enable our cognitive and affective processes are something that belongs to us: they participate in the actuation of our cognition, our decision-making, our affectivity. While our control over the effects of these scaffolds may vary in degree, we are the ones integrating them within our actions and routines. Importantly, the dynamic of mind invasion as such is not different from other “offline” examples of mind invasion. What is distinctive about this instance of mind invasion is its specific realization, the fact that it is possible to identify the “enactant” of the mind invasion as an artifact – an artifact embedded in a designed environment meant for people to integrate within their everyday practices and routines. What engenders the vulnerability of Facebook users to being invaded is a material scaffold, an identifiable object designed for the very purpose of decoding their mental states. The idea of mind invasion, in this context, can be taken literally, given the extent and detail to which not just a person’s information or behavior but their cognitive and affective functioning is observable and alterable in an otherwise unachievable manner.
As Galli (2022) points out, one of the dangers of recommender systems is not just the precise identification of existing vulnerabilities in people, but also of potential and future vulnerabilities they might develop that can be exploited – in line with the mechanisms of mind invasion.
To clarify what I am trying to convey, it is helpful to consider the account of the transparent self developed by Lanzing (2016). This idea was developed to understand how self-tracking and data collection technologies may both promote and undermine autonomy, especially through the way they can potentially infringe upon users’ privacy. Through the possibility of datafying and quantifying information about our bodily and psychological states, as well as our social relationships, people can acquire higher degrees of understanding and control of themselves. However, such datafication can expose that information to an unspecified audience, which can undermine the boundaries of privacy that people need to live autonomously – an audience that can include the providers of that technologically-engendered service.
Despite the kinship between Lanzing’s concept and what I am calling digitally scaffolded vulnerability, there is an important difference between them. What her concept of the transparent self brings forth is how information and characteristics regarding our selves (including our mental states) can become accessible and easily interpretable. This can be done by ourselves and by other people – including those we might not want to have such access, and who could utilize that information to exercise control or power we might not want to be subjected to. The transparency of our self, here, involves information about us. In contrast, what my analysis of Facebook’s online anger promotion brings forth is not simply that our emotions and feelings are being observed and acted upon by the recommender system. Rather, it is also the processes that produce and follow those states that are being observed and tampered with. It is a form of mind invasion that is unprecedentedly more granular and continuous than would be possible through any non-digital means. Not only is our “digital self” transparent. Rather, some personal and intimate aspects of ourselves, such as our beliefs, intentions, desires and emotions, are now measurable in ways that, without that technology, would be unachievable due to the computational limits of the human mind. In this sense, the recommender system is a tool specifically designed for pervasive and systematic mind invasion, and for establishing a power imbalance between platform users and service providers by collecting information about as many cognitive and affective processes as possible.
The platform itself, in virtue of its user-friendliness and especially of the multistable role of the recommender system explored above, is a tool for the service provider to not just observe affective and decision-making processes happening in real time, but also to tamper with them as they happen, bypassing the user’s rational capacities and conscious awareness while doing so. In other words, the platform’s interface design is meant to render people vulnerable, and the recommender system is the instrument by which such vulnerability is exploited. This kind of influence goes beyond the possibility of manipulating users: what is being manipulated are the cognitive and affective processes of the user, bypassing their conscious awareness – and this is a kind of exposure to harm, of vulnerability, that is unprecedented in terms of the kind of harm and the kind of power that can be exercised over people.Footnote 5
7 Digitally Scaffolded Vulnerability and the AI Act
I now turn to explore how the discussion of the power imbalance between users and service providers enabled by AI-engendered mind invasion can inform the understanding of citizens’ rights in the digital society. I will briefly discuss the recently approved European “AI Act”, a regulation that seeks to lay down harmonized rules for developing, placing on the EU market, and using AI systems of varying characteristics and risks. This regulation explicitly mentions vulnerability as a cause for concern and for specific actions required of AI developers, and it is worth studying how vulnerability is therein conceived, and how my argument may influence the reading and effective scope of the regulation in this regard.
The AI Act explicitly prohibits the development, launch, or use of AI systems that exploit the vulnerabilities of specific groups of people. Article 5(b) explicitly cites “age, disability, or a specific social or economic situation”, leaning on a reading of vulnerability as a characteristic of specific groups rather than as a universal category. Furthermore, as Malgieri (2023) points out, Article 7(1 F) of the Act draws an explicit connection between vulnerability and the power imbalance between the (vulnerable) consumer and the service provider. The article specifically includes among the criteria to determine “High Risk AI Systems” whether the system engenders an imbalance of power, or whether it places people in a vulnerable position – in particular, “due to status, authority, knowledge, economic or social circumstances, or age” (ibid.). The article seems to align with a pluralistic conception of vulnerability, whereby vulnerability is a property of specific groups of people, although such a conception is not explicitly stated. The kind of power imbalance engendered by the user-recommender system-service provider dynamic I have presented in Sect. 3 (i.e., the mind invasion of users through AI systems) could, theoretically, fall under the classification of “High-Risk AI System” and call for special attention from policymakers and the European Court.
However, the specific practice enacted by AI-engendered mind invasion may be interpreted somewhat differently under the AI Act. A more apt way to characterize the aforementioned power dynamic, according to the definitions provided in Article 3, would be that of emotion recognition, i.e., “identifying or inferring emotions or intentions of natural persons on the basis of their biometric data” (Article 3(34)). However, the relevant sense of emotion recognition defined by the article is limited to biometric data, which Article 3(33) defines as data “relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”, and possibly meaningful gestures, tone of voice, or eye movement. Emotion recognition in the AI Act, in other words, is limited to technologies processing biometric data from natural persons. However, as is clear from the case I explored in Sect. 4.2, inferring the emotions or intentions of users is possible even through other kinds of data, such as verbal and paralinguistic expressions.
Crucially, while in the case I have overviewed emotions are (supposedly) intentionally expressed by users (i.e., the users consented to let information about their own mental states be known), there have been cases where the emotional states of people were inferred (Levin 2017) or actively manipulated (Kramer et al. 2014) without their consent or awareness. It would not be surprising if inferring the emotional and mental states of users were possible not just on other social media platforms (such as X, Instagram, or TikTok), but even on other kinds of AI-supported digital platforms. It might be possible to infer some of the emotional states of a consumer by, say, viewing their recent history of consumption on an e-commerce platform. If the power imbalance I have described is supposed to call for special attention and safeguards, it is not entirely clear whether the specific enactment of the power imbalance – the systematic, capillary, and AI-engendered mind invasion practice as such – can be considered aptly covered or addressed in the AI Act. This is especially problematic because the business model of digital platforms like Facebook (and likewise of other platforms, such as e-commerce platforms or even dating apps) relies on the fulfilment of users’ preferences (in terms of content to interact with, in the Facebook case), preferences that are inferred by the recommender system.
What does this leave us with when it comes to consumer protection? On the one hand, one may apply Malgieri’s (2023) distinction between processing-based and outcome-based vulnerability in order to evaluate how to handle AI-engendered mind invasion. Processing-based vulnerabilities are, according to Malgieri, engendered during the collection and processing of users’ data, while outcome-based vulnerabilities occur as a result of that data processing. The case of mind invasion I have explored occurs during the data processing; hence, according to Malgieri’s distinction, it would be in that specific regard that users are vulnerable. However, it is arguable that the distinction may not suffice to capture the significance of AI-engendered mind invasion. Specifically, the reiterated interaction between users and the AI system would render them more and more predisposed to mind invasion. In cases of mind invasion, this is typical: the reason why the novice at the corporate job becomes more susceptible to mind invasion from the other workers, their boss, and the practices occurring therein is exactly their attunement to the workplace. There can be variations among individuals in terms of adaptivity to a given environment and, correspondingly, of vulnerability to mind invasion. However, the environmental conditions that engender mind invasion are engineered to apply to an entire category of users – i.e., to all people who join the workplace in question. The same can be said for the case of AI-engendered mind invasion I explored in Sects. 4 and 5: through the platform’s design, users may become more and more attuned to the recommender system’s behavior and become categorically vulnerable to mind invasion through the AI system.
My proposal for understanding vulnerability in certain digital platforms as digitally scaffolded leans towards an understanding of digital vulnerability close to that of Helberger et al. (2022) and DiPaola and Calo (2024), intended as a condition that (potentially) affects all platform users by design. In this sense, while the AI Act constitutes a step forward in the protection of vulnerable citizens, what needs to be acknowledged is that the platforms and their underlying AI-based architecture are causing a power imbalance – and specifically a kind of mind invasion – that should invite adequate reflection on the part of policymakers. There is arguably an existing understanding that these technologies are not morally neutral. For instance, the Digital Services Act (DSA), article 67 (Regulation (EU) 2022/2065), explicitly addresses dark patterns and prohibits their implementation within a platform’s interface – i.e., the DSA implies that some technological configurations are inherently manipulative and, hence, to be prohibited. However, it is apparent, in my view, that if citizen and consumer vulnerability must be addressed from a policymaker’s perspective, there is additional work to be done with regards to AI-engendered vulnerability. It is necessary to aptly conceptualize the distinct power relationship between platform users and service providers, which is based on the latter’s capacity to access the former’s minds through AI systems. The argument I have presented here constitutes an effort in that direction.
8 Conclusion
In this paper, I have argued for the concept of digitally scaffolded vulnerability, intended as a kind of technology-specific vulnerability regarding the way we interact with digital technologies that can break through the boundaries of our minds. This conceptualization is grounded in new discussions surrounding vulnerability in the age of digital platforms, and in theories of how material scaffolds and technologies can complement, transform, and enable cognitive and affective processes – and not always to the benefit of their users. Digital platforms are technologies designed to be smoothly integrated within people’s practices and routines – designed to become stably integrated scaffolds for their cognitive, affective, and decision-making processes. While many negative ethical implications of these technologies have been identified and have driven significant action from policymakers – including bringing attention to how the design of these technologies can undermine autonomy and exploit people – an analysis based on theories of scaffolded cognition leads to a more threatening reading of these technologies’ impact on people. The kind of vulnerability that is engendered by digital platforms, given both their user-friendly interface and the hidden activity of the recommender systems underlying them, is noteworthy because it regards the most intimate and private aspects of the human mind. Given that this is a kind of power relation that has no clear precedent, I believe it is necessary to explicate it so that society can aptly address it.
Data Availability
Not applicable.
Code Availability
Not applicable.
Notes
At the time of publication of this article, the AI Act has been passed but has not yet come into force. For this reason, the text of the Act, which is in its approved and final form, is available only as a “Proposal for a Regulation” rather than as a Regulation. Accordingly, it is cited as “Forthcoming” in-text, and in the References section as “Proposal for a Regulation (EU) of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.”
I will discuss the European Digital Services Act and the AI Act regarding the exploitation of consumers’ vulnerability in Sect. 6.
For instance, someone might look at a video or a photo on a social media platform slightly longer than average, and the recommender system can pick up on that kind of unreflexive action as potentially meaningful for its purpose, and provide content similar to that which caused the elongated gaze.
Arguably, the practice of mind invasion I examined here would have been, at least in principle, morally problematic even if the recommender system had promoted “likes” or “haha” rather than “angry” reactions. The reason is that the service providers utilize the recommender system to instrumentalize users as a source of data by making them feel a certain way. The fact that, in the case exposed by the WSJ, anger was the reaction promoted by the recommender system to maximize engagement is quite egregious in showing what the interests of the service providers, and the function of the recommender system, are. I thank an anonymous reviewer for bringing this point forward.
While there may be ways for someone to exercise control over and inflict harm upon another’s cognitive and affective processes specifically (brainwashing might be a fitting example), the existence of a technology that can enable such a practice relatively economically is noteworthy, to say the least.
References
Alfano M, Fard AE, Carter JA, Clutton P, Klein C (2020) Technologically Scaffolded Atypical Cognition: The Case of YouTube’s Recommender System. Synthese, 1–2, 1–24. https://doi.org/10.1007/s11229-020-02724-x
Anderson J (2013) Autonomy and Vulnerability Entwined. In C. Mackenzie, W. Rogers, & S. Dodds (Eds.), Vulnerability: New Essays in Ethics and Feminist Philosophy (pp. 134–136). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199316649.003.0006
Anderson J, Kamphorst BA (2015) Should uplifting music and smartphone apps count as willpower doping? The extended will and the ethics of enhanced motivation. Am J Bioeth Neurosci 6(1):35–37. https://doi.org/10.1080/21507740.2014.995321
Basu R, Kumar A, Kumar S (2023) Twenty-five years of consumer Vulnerability Research: critical insights and future directions. J Consum Aff 57(1):673–695. https://doi.org/10.1111/joca.12518
Burr C, Cristianini N, Ladyman J (2018) An analysis of the Interaction between Intelligent Software agents and human users. Mind Mach 28(4):735–774. https://doi.org/10.1007/s11023-018-9479-0
Cadwalladr C, Graham-Harrison E (2018), March 17. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Calo R (2018) Privacy, Vulnerability, and Affordance. In E. Selinger, J. Polonetsky, & O. Tene (Eds.), The Cambridge Handbook of Consumer Privacy (pp. 198–206). Cambridge University Press. https://doi.org/10.1017/9781316831960.011
Candiotto L, Dreon R (2021) Affective scaffoldings as habits: a Pragmatist Approach. Front Psychol 12:945. https://doi.org/10.3389/fpsyg.2021.629046
Coninx S, Stephan A (2021) A taxonomy of environmentally scaffolded Affectivity. Dan Yearb Philos 54(1):38–64. https://doi.org/10.1163/24689300-bja10019
DiPaola D, Calo R (2024) Socio-Digital vulnerability. SSRN Electron J 1–12. https://doi.org/10.2139/ssrn.4686874
Fasoli M (2018) Substitutive, complementary and constitutive cognitive artifacts: developing an Interaction-centered Approach. Rev Philos Psychol 9(3):671–687. https://doi.org/10.1007/s13164-017-0363-2
Fineman M (2010) The vulnerable subject and the responsive state. Emory Law J 60:251–275. https://scholarlycommons.law.emory.edu/elj/vol60/iss2/1
Fineman MA (2013) Equality, Autonomy, and the vulnerable subject in Law and Politics. In: Fineman M, Grear A (eds) Vulnerability: reflections on a new ethical foundation for law and politics. Routledge, New York, pp 13–27
Floridi L, Sanders JW (2004) On the morality of Artificial agents. Mind Mach 14(3):349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Galli F (2022) Algorithmic Marketing and EU Law on unfair Commercial practices. Springer, Cham. https://doi.org/10.1007/978-3-031-13603-0
Gilson E (2013) The Ethics of vulnerability: a Feminist Analysis of Social Life and Practice. Routledge, New York
Goodin RE (1985) Vulnerabilities and responsibilities: an ethical defense of the Welfare State. Am Polit Sci Rev 79(3):775–787. https://doi.org/10.2307/1956843
Hagey K, Horwitz J (2021), September 15. Facebook Tried to Make its Platform a Healthier Place. It Got Angrier Instead. The Wall Street Journal. Retrieved from: https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215
Heath J, Anderson J (2010) Procrastination and the Extended Will. In: Andreou C, White MD (eds) The thief of Time. Oxford University Press, pp 233–253. https://doi.org/10.1093/acprof:oso/9780195376685.003.0014
Heersmink R (2015) Dimensions of integration in embedded and extended Cognitive systems. Phenomenology Cogn Sci 14(3):577–598. https://doi.org/10.1007/s11097-014-9355-1
Helberger N, Sax M, Strycharz J, Micklitz H-W (2022) Choice architectures in the Digital Economy: towards a New understanding of Digital Vulnerability. J Consum Policy 45(2):175–200. https://doi.org/10.1007/s10603-021-09500-5
Hill RP, Sharma E (2020) Consumer vulnerability. J Consumer Psychol 30(3):551–570. https://doi.org/10.1002/jcpy.1161
Hutchins E (2014) The Cultural Ecosystem of Human Cognition. Philosophical Psychol 27(1):1–16. https://doi.org/10.1080/09515089.2013.830548
Jongepier F, Wieland J (2022) Microtargeting people as a Mere means. In: Jongepier F, Klenk M (eds) The Philosophy of Online Manipulation. Routledge, New York, pp 156–179. https://doi.org/10.4324/9781003205425-10
Kittay EF (1999) Love’s labor: essays on women, Equality and Dependency. Routledge, New York
Kramer ADI, Guillory JE, Hancock JT (2014) Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci USA 111(24):8788–8790. https://doi.org/10.1073/pnas.1320040111
Krueger J, Osler L (2019) Engineering Affect: emotion regulation, the internet, and the Techno-Social Niche. Philosophical Top 47(2):205–231. https://www.muse.jhu.edu/article/774363
Laland KN, Odling-Smee J, Feldman MW (2000) Niche Construction, Biological evolution, and Cultural Change. Behav Brain Sci 23(1):131–146. https://doi.org/10.1017/S0140525X00002417
Lanzing M (2016) The transparent self. Ethics Inf Technol 18(1):9–16. https://doi.org/10.1007/s10676-016-9396-y
Levin S (2017), May 1 Facebook Told Advertisers It Can Identify Teens Feeling ‘Insecure’ and ‘Worthless’. The Guardian. Retrieved from: https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens
Luna F (2019) Identifying and evaluating layers of vulnerability – a way forward. Dev World Bioeth 19:86–95. https://doi.org/10.1111/dewb.12206
Mackenzie C, Rogers W, Dodds S (2013) Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford University Press
Malgieri G (2023) Vulnerability and Data Protection Law. Oxford University Press, Oxford. https://doi.org/10.1093/oso/9780192870339.001.0001
Malgieri G, Niklas J (2020) Vulnerable data subjects. Comput Law Secur Rev 37:105415. https://doi.org/10.1016/j.clsr.2020.105415
Marin L, Roeser S (2020) Emotions and digital well-being: the rationalistic bias of social media design in online deliberations. In: Burr C, Floridi L (eds) Ethics of Digital Well-Being: A Multidisciplinary Approach. Philosophical Studies Series, vol 140. Springer, Cham, pp 139–150. https://doi.org/10.1007/978-3-030-50585-1_7
Menary R (2007) Cognitive Integration: Mind and Cognition Unbounded. Palgrave Macmillan
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979) The Belmont report: ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html
Newen A, De Bruin L, Gallagher S (2018) The Oxford Handbook of 4E Cognition. Oxford University Press
OECD (2023) Consumer vulnerability in the digital age. OECD Digital Economy Papers, No. 355. OECD Publishing, Paris. https://doi.org/10.1787/4d013cc5-en
Parmer WJ (2022) Manipulative design through gamification. In: Jongepier F, Klenk M (eds) The Philosophy of Online Manipulation. Routledge, New York, pp 216–234. https://doi.org/10.4324/9781003205425-13
Pechmann C, Moore ES, Andreasen AR, Connell PM, Freeman D, Gardner MP, Heisley D, Lefebvre RC, Pirouz DM, Soster RL (2011) Navigating the central tensions in research on at-risk consumers: challenges and opportunities. J Public Policy Mark 30(1):23–30. https://doi.org/10.1509/jppm.30.1.23
Piredda G (2020) What is an affective artifact? A further development in situated affectivity. Phenomenol Cogn Sci 19(3):549–567. https://doi.org/10.1007/s11097-019-09628-3
Proposal for a Regulation (EU) of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Retrieved from: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance) (October 27, 2022). ELI: http://data.europa.eu/eli/reg/2022/2065/oj
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) (April 27, 2016). ELI: http://data.europa.eu/eli/reg/2016/679/oj
Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance) (September 14, 2022). ELI: http://data.europa.eu/eli/reg/2022/1925/oj
Rendtorff JD, Kemp P (2019) Four ethical principles in European bioethics and biolaw: autonomy, dignity, integrity and vulnerability. In: Valdés E, Lecaros JA (eds) Biolaw and Policy in the Twenty-First Century: Building Answers for New Questions. Springer International Publishing, pp 33–40. https://doi.org/10.1007/978-3-030-05903-3_3
Rogers W (2013) Vulnerability and bioethics. In: Mackenzie C, Rogers W, Dodds S (eds) Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford University Press, pp 60–87
Rosenberger R (2014) Multistability and the agency of mundane artifacts: from speed bumps to subway benches. Hum Stud 37(3):369–392. https://doi.org/10.1007/s10746-014-9317-1
Rosenberger R (2017) Callous Objects: Designs Against the Homeless. University of Minnesota Press, Minneapolis
Schwengerer L (2022) Promoting vices: designing the web for manipulation. In: Jongepier F, Klenk M (eds) The Philosophy of Online Manipulation. Routledge, New York, pp 292–310. https://doi.org/10.4324/9781003205425-18
Sharon T (2021) From hostile worlds to multiple spheres: towards a normative pragmatics of justice for the googlization of health. Med Health Care Philos 24(3):315–327. https://doi.org/10.1007/s11019-021-10006-7
Slaby J (2016) Mind invasion: situated affectivity and the corporate life hack. Front Psychol 7:266. https://doi.org/10.3389/fpsyg.2016.00266
Srnicek N (2017) Platform Capitalism. Polity, Cambridge
Steinert S, Marin L, Roeser S (2022) Feeling and thinking on social media: emotions, affective scaffolding, and critical thinking. Inquiry 1–28. https://doi.org/10.1080/0020174X.2022.2126148
Sterelny K (2010) Minds: extended or scaffolded? Phenomenol Cogn Sci 9(4):465–481. https://doi.org/10.1007/s11097-010-9174-y
Sterelny K (2012) The Evolved Apprentice: How Evolution Made Humans Unique. MIT Press, Cambridge, MA
Susser D, Roessler B, Nissenbaum H (2019) Technology, autonomy, and manipulation. Internet Policy Rev 8(2):22. https://doi.org/10.14763/2019.2.1410
Sutton J (2010) Exograms and interdisciplinarity: history, the extended mind, and the civilizing process. In: Menary R (ed) The Extended Mind. MIT Press, pp 189–225. https://doi.org/10.7551/mitpress/9780262014038.003.0009
Valentini D (2022) Expanding the perspectives of affective scaffoldings: user-resource interactions and mind-shaping in digital environments. Thaumàzein 10(1):188–216. https://doi.org/10.13136/thau.v10i1.148
van de Poel I (2020) Embedding values in artificial intelligence (AI) systems. Minds Mach 30(3):385–409. https://doi.org/10.1007/s11023-020-09537-4
Wiesemann C (2017) On the interrelationship of vulnerability and trust. In: Straehle C (ed) Vulnerability, Autonomy, and Applied Ethics. Routledge, New York, pp 157–170
Zuboff S (2020) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, New York
Funding
Open Access funding enabled and organized by Projekt DEAL. This research has been supported by the Cluster for Future NeuroSys (grant number: 03ZU1106EA) of the German Federal Ministry for Education and Research (Bundesministerium für Bildung und Forschung (BMBF)).
Ethics declarations
Ethical Approval
Not applicable.
Consent to Participate
Not applicable.
Consent to Publish
Not applicable.
Conflicts of interest/Competing Interests
The authors declare no competing or conflicting interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Figà-Talamanca, G. Digitally Scaffolded Vulnerability: Facebook’s Recommender System as an Affective Scaffold and a Tool for Mind Invasion. Topoi (2024). https://doi.org/10.1007/s11245-024-10051-w