Research on social media algorithms has been thriving in recent years. Within this context, the notion of algorithmic imaginaries (Bucher 2017) has become established as an important heuristic to counter the widespread lack of knowledge about these affordances. The notion explores the mosaic of feelings and perceptions that users develop in their everyday navigation practices, in order to better understand how algorithmic systems work. We shall be especially concerned with recommendation systems—i.e., algorithmically based systems that highlight recommended content for users, such as Facebook’s Feed (formerly known as News Feed), YouTube’s Related Videos, or TikTok’s For You Page. The notion of algorithmic imaginaries has been very helpful in promoting a reflexive approach to the study of social media affordances. Nonetheless, the relationship between algorithmic imaginaries and users’ subjective engagement, given the highly personalised logics of content circulation on these platforms, demands further expansion.

To this end, we hereby introduce the notion of the algorithmic other. We argue that while algorithms enact a form of subjectification upon users, constructing them as digital and data subjects, human users subjectivise the algorithms in return, negotiating their own agency in relation to the perceived qualities and functioning of these affordances. In so doing, users engage in a process of ‘othering’ of the algorithm(s), in an effort to differentiate their agency from that of ‘the algorithm’. This, we contend, represents a by-product of the intense personalisation of users’ own everyday online experience, and constitutes the other side of the algorithmic imaginaries that they produce. Based on small-scale qualitative research consisting of 6 focus groups with social media users, held in Milan (Italy) between April and July 2021 and involving a total of 33 participants, we illustrate the main dimensions of this process as discussed by our interviewees. We show how participants stretch the relationship between the human and the machine in their subjective narrations of algorithmic everyday experiences and, in so doing, put their (supposed) agency centre stage vis-à-vis the machinic dimension of social media (and algorithmic) affordances.

Algorithmic imaginaries: a contextualisation

Research in critical media studies over the last decade has highlighted the process of ‘platformization’ of cultural production and consumption, which is argued to be a reconfiguration of the actors and power structures of the cultural economy (Poell et al. 2021). Social media algorithms, including the recommendation systems identified above, are a key component of this process. As key affordances of social media infrastructures, they sustain a business model organised around datafication and content personalisation (Pybus and Coté 2021; Kant 2020; see also Skeggs and Yuill 2019) that is aimed at the prediction of user behaviour and the monetisation of targeted advertising – what has been labelled ‘surveillance capitalism’ (Zuboff 2019). It has been highlighted that researchers and the general public know very little about how these algorithmic systems actually work and how their outputs are generated. This has been described through the ‘black box’ metaphor, which has come to quintessentially represent the ‘unknowability’ of algorithms (Pasquale 2015). More recently, however, a few scholars have underlined that, precisely because of the structural integration of algorithmic systems in social, cultural, and economic processes, the unknowability of algorithms as technical objects must not represent a limit to the ability of researchers to probe algorithmic interventions (Seaver 2017; Christin 2020; Bonini and Gandini 2020). The invitation to go beyond the ‘black box’ has been brought forward in particular by Bucher (2017, 2018), who has argued that, while it is certainly important to draw attention to algorithms as unknowable technical objects, this represents ‘somewhat of a distraction’ from focusing on the social, cultural, and economic processes within which they operate.

Pivotal to this call by Bucher is the notion of algorithmic imaginaries (2017). The notion captures how users develop important knowledge about how social media algorithms work, based upon their own everyday experiences of use. This, Bucher argues, represents a rich set of information that researchers must employ in their attempt to better understand the workings of algorithmic systems. Informed by Jasanoff and Kim’s (2015) concept of ‘socio-technical imaginaries’ and by actor-network theory (Latour 2005), the notion of algorithmic imaginaries focuses on the feelings, moods, and sensations that users express towards algorithms, promoting a reflexive approach towards them. This notion has been pivotal for the development of the research field known as ‘critical algorithm studies’, where other research—such as, among others, work on ‘algorithmic folk theories’ (Siles et al. 2020; Ytre-Arne and Moe 2021) and ‘algorithmic gossip’ (Bishop 2019)—shares a common focus on users’ perceptions of, and agency towards, algorithms (cf. Lomborg and Kapsch 2020).

However, while recognising the relevance of the notion of algorithmic imaginaries, we maintain that the intensely subjective experience of use that characterises these platforms, deriving in particular from algorithmic recommendation systems, must be given fuller consideration when analysing said algorithmic imaginaries. Through personalisation—i.e., “the computational tracking and anticipation of users’ preferences, movements and identity categorisations to algorithmically intervene in users’ daily experiences” (Kant 2020, p. 10)—algorithms are afforded the power to decide what is personal to users (ibid., emphasis in original); that is, they require users to position themselves between privacy concerns and ‘resignation’ (Draper and Turow 2019) to how these systems work (Ytre-Arne and Moe 2021). The ‘recursive inclusion’ of users in classes and groups, which recommendation algorithms perform in order to personalise users’ online experiences, may be seen, as Lury and Day (2019) discuss, as a peculiar mode of individuation—in a Simondonian sense—that inevitably involves generalisation. This, they argue, means that an individual in a datafied environment is never one person, but many, and that “the familiar recognition that personalization seems to provide – knowing you better than you yourself do (…) constrains who and how we can be” (Lury and Day 2019, p. 18) on these platforms (cf. also Prey 2018, who raises a similar argument in relation to music streaming services).

Building upon these insights, we argue that algorithmic personalisation creates a tension between, on the one hand, the subjective relationship that each user develops with recommendation algorithms in their everyday use and, on the other, the realisation that their subjectivity is somewhat constrained, neither fully expressed nor adequately negotiated vis-à-vis algorithmic operations and their aims. These considerations prompt some specific research questions: how do users’ subjective experiences of algorithmic recommendation systems (and the imaginaries that derive from them) clash with individual expectations regarding how content should circulate online? How do said imaginaries affect users’ subjective positioning and reflexive engagement with social media use, content circulation, recommendation, and access? And finally: to what extent is the role of users’ subjectivity given recognition in the creation of certain imaginaries vis-à-vis the influence of algorithmic personalisation? This article aims to address these questions by introducing the notion of the algorithmic other.

The algorithmic other

As argued by Wark (2019, p. 65), “the Internet has become one of subjectivity’s major contemporary sites”. The digital space, Wark continues, “puts subjectivity into circulation in and as data—and in the circulation of data, subjectivity is subject, in turn, to technical processes that invite alternate conceptions of what the subject is and how it becomes” (ibid.). Algorithmic recommendation systems on social media platforms hold a central role in this process, being the ‘engines of order’ (Rieder 2020) of content circulation that create highly personalised experiences of use (Kant 2020). Throughout this process, user subjectivities are both represented and constituted by ‘algorithmic identities’ (Cheney-Lippold 2011) that are recursively produced in the data-saturated economy of surveillance capitalism (Armano et al. 2022). Algorithms, in other words, produce ‘digital subjects’ that are optimised to be the targets of personalised content recommendation and advertising (Goriunova 2019). This, as Armano et al. (2022, p. 8) suggest, building on Deleuze and Guattari (1987), raises a question of agency concerning the human–machine relationship: platforms, they argue, enable the subjectivation of users as “identified subjects and identifying objects” whose individuality “is understood in a processual sense, since it is partly acquired and partly constructed by the process of algorithmic ‘individuation’” (a similar argument is raised by Wark 2019).

Yet, if we accept the argument that users, as digital subjects, occupy a mediating position between the human and the data they produce, acting as technical ‘subjects of circulation’ (Wark 2019), then the notion of algorithmic imaginaries shows us that they also reflexively engage in this process, identifying algorithms as their subjective counterpart. Put differently, as a result of the intensely personalised experience they are immersed in, users make algorithms—such as social media recommendation systems—the object of a counter-subjectivation process. As shown by Bucher (2017) as well as, among others, by Swart (2021) and Schellewald (2022), users produce subjectively engaged understandings of how algorithms work, and these depictions are productive renditions of their everyday experiences. As Swart (2021) shows in particular, rather than being merely passive actors, users demonstrate a strong reflexive engagement with algorithmic recommendation systems—albeit often through imperfect accounts that are difficult for them to verbalise. Interestingly, Swart (2021) shows that this engagement is often activated when these systems do not work properly or fail to provide users with what they actually want.

We propose to understand this as a process of ‘othering’ of algorithmic systems enacted by users of social media platforms. We argue that users, in their construction as digital and data subjects, subjectivise algorithms in return, negotiating their own agency vis-à-vis the perceived qualities and functioning of these affordances. The notion of othering, developed by Spivak (1985) in the context of post-colonial theory, is here employed and adapted to the human–machine relationship in which users and algorithms are engaged. The process of subject creation and categorisation that algorithms enact upon users is, we show, reciprocated by users via a process of social categorisation and identification largely based on stereotyping, which sees ‘the algorithm’ (in the singular) as a somewhat mythical creature. Different dimensions pertain to this process of othering of algorithms by users; these are presented in the next sections.

Methodological note

The empirical materials presented in this article originate from a small-scale research project aimed at expanding the existing understanding of the role of social media and, specifically, of algorithmic recommendation systems in the formation of public opinion in Italy. The materials consist of 6 focus groups, involving a total of 33 participants, held between April and July 2021 in Milan (Italy) and focused specifically on the discussion of algorithmic knowledge and awareness in relation to news recommendation and content circulation on social media. Participants were recruited through a public call distributed via the communication channels (website, newsletter, social media) of a major public institution (a museum) in the city of Milan, which is regularly involved in science dissemination activities. The call gathered a total of 43 respondents, of whom 33 confirmed their participation in the research. Overall, 5 focus groups were conducted in person and 1 was held online; individual sessions lasted between 1 h 30 min and 2 h 10 min. Participants were evenly distributed in terms of gender, with 17 identifying as female and 16 as male. In terms of age, at the time of the study 14 participants belonged to the 18–25 age group, 11 were between 25 and 40 years of age, and 8 were between 41 and 58. Our sample was more highly educated than the average: 1 participant had a primary education diploma, 12 had a secondary education diploma, 18 had a bachelor’s or master’s degree, and 2 had a PhD. Meetings were both audio and video recorded, and the contents subsequently transcribed, anonymised, and inductively coded for qualitative thematic analysis (Cyr 2016). Alongside the collective discussions, participants were administered a pre- and post-meeting questionnaire aimed at monitoring changes in their algorithmic awareness, and were asked to independently keep a personal diary during the week following the meeting. Additionally, during the focus groups, one question asked individual participants to sketch a representation of an algorithm, and another to write a list of at most three qualities that an algorithm should possess to deliver good quality news. All the sheets used by the participants were collected and included in the analysis.

Focus groups represent an ideal choice for this study, as they offer a path to break the binary relationship between users and their algorithmic other. In focus groups, participants share, compare, and discuss their individual experiences with other users: indeed, the collective dimension that develops from discussion and interaction constitutes their unique added value (Cyr 2016). By closely examining the exchanges between participants, focus groups allow users’ ideas and responses to develop and expand, ultimately revealing subtleties and intricacies otherwise hidden (ibid.). Building upon established practices in algorithmic studies (Siles et al. 2020; Diakopoulos and Koliska 2017), the analysis of the empirical material has focused on the social interactions between users, to then draw out what these interactions imply in terms of the collective discursive formation of social imaginaries. The interactional level of analysis is well suited to exploratory studies (Cyr 2016) like the present one. In line with the nature of exploratory research, the empirical findings in this article generate novel interpretations and conceptualisations that are designed to inform future research. It is our hope that other works will build upon and broaden these findings, perhaps through the use of additional empirical material such as in-depth interviews or surveys, in order to strengthen their cogency and generalisability.

Exploring the main dimensions of algorithmic othering

Algorithmic imaginaries and online news circulation

A fundamental component of algorithmic imaginaries is speculation about how algorithms should be and behave (Bucher 2017). In the focus groups, we invited respondents to discuss their relationship with algorithmic recommendation systems in relation to news circulation and public opinion formation; respondents were thus asked to write down and discuss the qualities they think algorithms should possess to support the circulation of trustworthy information. The majority of them spontaneously referred to attributes related to journalistic standards and the design of information systems (Bachmann et al. 2022). For them, ‘good’ algorithms should be able to: select reliable news; penalise fake news, click-bait, and trashy content; and guarantee users’ exposure to a plurality of sources, without funnelling them into ‘rabbit holes’. Yet their own lived experiences challenge the realisation of such an ideal imaginary. When invited to reflexively discuss their relationship with algorithmic systems in relation to news consumption, our participants almost immediately turned to their own subjective experience. See for instance the exchange below:

PARTICIPANT 4 [FG1]: In my opinion, the algorithm functions better on more frivolous, trashy content. I work as a kindergarten teacher; if I look for pedagogical content to improve my work, I can’t find anything. If I watch “Uomini e donne” [an Italian, soap-style TV show] just once, I get pestered. […]

PARTICIPANT 1 [FG1]: Indeed, it never recommends the topics I always search for!

This exchange reveals that the critical engagement of user subjectivities vis-à-vis their everyday digital practices—what we call here ‘othering’—is a key aspect in the production of their algorithmic imaginaries. The unknowable nature of algorithms invites reflexive exploration; yet the process of ‘othering’ becomes evident when users encounter ‘algorithmic glitches’ (Swart 2021) – that is, moments in which algorithms behave differently from what users expect. As the excerpt below shows, users tend to engage in ‘othering’ especially when they describe the ‘friction’ between the ideal qualities they think an algorithm should have and its commercial logic, which is believed to be detrimental to the circulation of quality information:

PARTICIPANT 7 [FG1]: In the end, the algorithm is an enigmatic entity. We demand that it possess an ethics.

[…]

PARTICIPANT 3 [FG1]: Yes, but if the algorithm were ethical, it would strive to teach us as many things as possible; but would it be used as much?

PARTICIPANT 7 [FG1]: No, it would be counterproductive.

‘Glitches’ are not a new element in critical research on algorithms; Bucher (2018, p. 103) calls them ‘algorithmic mismatches’. As Haider and Sundin (2021, p. 133) underline, these may be seen as ‘frictions’ of relevance, which “lay bare the workings of the automated decisions employed and thus enable a discussion about them in the first place”. Indeed, glitches seem to expose the collision between individual relevance and societal interests. This is reflected in our participants’ accounts, which suggest a fundamental tension between the original, counter-cultural dream of the Internet as a potential Habermasian public sphere (Dahlgren 2005) and the current reality of surveillance capitalism (Zuboff 2019). Some of our participants indeed share the idea that algorithmic personalisation is functional to the economic profits of the tech companies that own the platforms:

PARTICIPANT 2 [FG3]: I wrote that the algorithm should be impartial, but then I annotated “Is it really possible?” […] The dilemma is that on one hand [the algorithm] should be designed to foster critical sensibility. On the other, why should I read content which does not interest me? Then there’s also the side of the producers, who use algorithms to have more traffic on their page.

PARTICIPANT 1 [FG3]: I wrote it can’t exist, or maybe it already exists, because it gives qualitative answers based on quantitative questions. But ultimately, these are always partial and polarised.

MODERATOR: So, you mean these are appropriate goals, but at the same time…

PARTICIPANT 2 [FG3]: Yes, exactly; at the same time I tell myself they would be totally anti-economic. I imagine an algorithm’s goal is to achieve economic results, so it follows a totally different logic from offering good quality information.

Enmeshed in these accounts is the constant process of ‘othering’ of the algorithm that users enact as a result of their subjective positioning in relation to their algorithmic imaginaries. We explore this aspect in greater detail in the next section.

A typology of algorithmic subjectivities

To account for our participants’ subjective positioning in relation to their algorithmic imaginaries, and thus show how they enact the process of ‘othering’ of the algorithm(s), we developed a typology that takes inspiration from the Acceptance Threshold (AT) model, a cognitive concept employed in behavioural studies (see for example Scharf et al. 2020). While it is not intended as a clear-cut categorisation of human–machine interaction stricto sensu, the typology can provide a useful heuristic to describe our participants’ positioning in relation to the algorithm(s) through their own discourses and narrations. In particular, it shows the different nuances that substantiate users’ positionings vis-à-vis recommendation algorithms and that underpin the processes of othering at the centre of this article (Table 1).

Table 1 Typology of subjective positionings

Based on this approach, we can locate our participants’ subjective positioning along two main interpretative axes. The first may be described as an anthropomorphic understanding of the algorithm, whereby users tend to endow it with human-like features to make sense of its non-human nature. The human metaphor is frequently employed to reduce the complexity of interaction with these actors (Airenti 2018); in such cases, user behaviour often revolves around the deception of the machine (Natale 2021). Here, users tend to position themselves subjectively more in harmony with the algorithm, and thus exert less agency. See for instance the participant below:

PARTICIPANT 4 [FG2]: [Describing their drawing of an algorithm] “[...] I drew a hand lens and a mirror. The hand lens since I begin with a very small topic which he magnifies and modifies so as to make me see all the details of what I’ve chosen. And the mirror because the algorithm is my double self. It knows me very well as I am the one who feeds it, with myself, my desires, what I want to comprehend and to know [...]. It tries to give me what I want.”

In this case, the metaphor employed to depict how an algorithm works is based on the subjectivation of the algorithm as an entity that is somehow able to listen and react to one’s stimuli. This describes not only how individuals generate specific imaginaries when asked to describe algorithms, but also their effort to make familiar sense of what they are describing—which is expressed by way of counter-subjectivation. Another example from the same participant is even more eloquent in this sense:

PARTICIPANT 4 [FG3]: “The algorithm thinks it knows us pretty well. It doesn’t. If I enjoy reading the personal page of Matteo Salvini [...]. When I’m finished, the algorithm – which knows me very well – shows me similar content like a snowball, even though actually… I helped.”

A second interpretative axis is what may be defined as a mechanistic understanding of the algorithm, whereby one’s perception of agency is stronger. In such cases, users tend to see human–algorithm interactions as more antagonistic and, as a consequence, carry out tactics to cope with the perceived impact of algorithmic systems upon their online behaviour. Anthropomorphic and mechanistic accounts must not be seen as rigidly separate, but as lying on a continuum across which participants position themselves in relation to their own subjective engagement. On the mechanistic side, users actively ‘play’ with algorithms, highlighting how they see algorithms as subjects that can be controlled:

PARTICIPANT 8 [FG1]: “Why don’t you avoid recommendations on purpose? […] I usually type what I want to watch, what I’m interested in. 90% of the time, when I look at the screen of my phone… I write what I’m going to see.”

PARTICIPANT 3 [FG1]: “On Facebook they [recommendations] appear while you’re scrolling, right?”

PARTICIPANT 8 [FG1]: “Of course, but if I want to see what my cousin is doing…” [starts laughing]

Anthropomorphic and mechanistic accounts intertwine with two other interpretative dimensions: apocalyptic and integrated. We take these from Umberto Eco’s well-known division between the intellectuals with an elitist critical stance towards mass culture – the apocalyptic – and those naively in favour of it – the integrated (Eco 1964). Here, apocalyptic approaches based on an anthropomorphic account usually stress the necessity to educate users, particularly younger ones, about algorithmic systems. In doing so, they emphasise the role of digital capital (Ragnedda 2018) in human–machine interactions and consequently point to awareness and critical thinking as key aspects. See for instance the quote below:

PARTICIPANT 1 [FG5]: “Perhaps in high schools there should be more time spent on training people in Internet consumption and – generally speaking – on experiencing algorithms more consciously.”

Apocalyptic accounts with a more mechanistic perception instead focus on what Beer (2017) describes as the social power of algorithms. Interestingly, these are characterised by an effort to overcome the power dynamic that algorithms enact upon users, for example by producing different personal profiles. This resembles the tactics illustrated by de Certeau in The Practice of Everyday Life (1984), which individuals employ to reclaim themselves and their (online) spaces of freedom. One participant, for instance, revealed that they engaged in lurking – the act of scrolling without posting anything on social media – as a way to escape algorithmic surveillance:

PARTICIPANT 6 [FG3]: “[…] I’ve decided not to publish anything about my opinions anymore. I don’t post… I prefer posting less, I prefer liking content as little as I can… I prefer seeing without touching. I avoid commenting as much as I can, because I don’t want the algorithm to track what I like; I don’t like that it could overexpose me through what I said and what it wants to promote.”

MODERATOR: “But you are still logged in, right?”

PARTICIPANT 6 [FG3]: “Of course, you can’t leave completely since by now public discourse is on social media… I mean, there’s no distinction between real and virtual. Everything is there.”

Among the integrated approaches, those involving a mechanistic stance often call for an ethical use of automated technologies. Echoing ethical algorithm studies (cf. Mittelstadt et al. 2016), the focus here shifts from the technology itself to the behaviour of the companies, institutions, and associations that develop or employ algorithms for their own ends. Ethical design and purposes are thus among the highest priorities behind users’ digital choices, such as news retrieval. See how this participant describes her/his positioning in this regard:

PARTICIPANT 1 [FG4]: [while describing which features make algorithms better technologies] “I had some trouble finding qualities, but I chose those who designed it [the algorithm]… To me, what’s relevant is who stands behind the algorithms, whether it is an association, a political party, or a company. […] So, the ethics of whoever designed it and the purpose for which it’s employed.”

Finally, those who position themselves as integrated and present an anthropomorphic positioning in relation to algorithms may be seen as rather close to those described by Eco as ‘optimists without theorization’ (1964, p. 64). These are individuals who emphasise, in a rather uncritical manner, the opportunities and comforts that arise from their interactions with algorithms:

PARTICIPANT 1 [FG5]: “I’ve never tried it [reverse engineering the algorithm, Ed.], but it’d be a pity to lose all the advantages guaranteed by the algorithm. […] It’s ok for me to be recognised by the system since it could propose interesting stuff afterwards. That could limit distractions!”

It must be noted that, while describing their subjective engagement with algorithms, participants perceive the influence of algorithmic personalisation to different degrees. This is mostly because the processes of algorithmic personalisation derive from categorising users and thus most often meet their real interests—or the interests of an entire category of users. Hence, this process of subjectivation is also visible in collective imaginaries, which arise when users believe their experience of the algorithm(s) of a certain platform to be the same for everyone. We explore this further in the next section.

Collective imaginaries and algorithmic othering

As noted, in studying how users become aware of their own role in shaping their online experience, notions such as ‘algorithmic imaginaries’ (Bucher 2017) or the ‘algorithmic self’ (Bishop and Kant 2023) enable us to show how users recognise the role of algorithms in co-producing their experiences of social media use. However, while much attention has been placed on how algorithms are perceived by users to work in relation to their personal use, less emphasis has gone into tracing how algorithms are perceived by the single user to work in general—that is, in a similar way for all users. This is part and parcel of the process of othering discussed here: together with the creation of individual algorithmic subjectivities (Wark 2019; Goriunova 2019) and the related counter-subjectivation of algorithms, our participants also seem to create ‘collective imaginaries’—that is, the conviction that the algorithmic systems of a certain platform actually work in a standardised way for all users, applying not solely to them. These are part of a counter-subjectivation process whereby imaginaries emerge as a) somewhat congruent, in that most users share a similar picture of how algorithms work; b) somewhat particularistic, i.e., mostly deriving from the user’s own (largely unaware) personalisation while being narrated as general; and c) homogeneous, i.e., surprisingly congruent regardless of the use users make of the platform.

(A) Somewhat congruent collective imaginaries

Facebook and Instagram are the social media platforms that our participants discussed the most. These are, indeed, the most widely used platforms across the different age groups of our study. This can explain, to some extent, why many seem to have somewhat congruent imaginaries of how the algorithms of these platforms work for all users, regardless of their own perception of personalisation. For instance, many interviewees perceive that, while Facebook has long been the dominant platform, Instagram is nowadays ever more popular, especially among younger users, so that Facebook is now associated with a more mature audience; this is in fact true, as shown by data from the Pew Research Center (2021). This perception unfolds as many participants recount how the purpose of Facebook has changed over time. See how this participant describes Facebook and its evolution:

PARTICIPANT 3 [FG4]: Facebook, however… I’ve used it for 14 years now, and I have noticed how its use has changed. I personally used to post a lot more photos, and for me it was a time for sharing. Now it has become a moment for all the newspaper pages, and so I get information through that.

In discussing Facebook, most participants say they do not use it as much as they did in the past, and that its purpose is now mostly linked to news reading; they refer to the home page as a ‘first impact’ with news and a place to consult national newspapers. Instead, Instagram is used to ‘get in touch with people and see what is going on’ (Participant 4 [FG5]). In relation to information, it is mostly described as a means to post ‘personal things’ or a place to follow ‘pages that are more playful than informative’ (Participant 3 [FG1]).

Another element of congruence across users’ imaginaries of Facebook’s and Instagram’s algorithmic systems is that, when these are ‘othered’ as reporters of information, their perceived unreliability and inefficacy can become the benchmark against which a user’s subjectivity unfolds. Indeed, users who take a normative stance towards these social networks’ recommendation systems have called Facebook a ‘gutter’, have described its algorithm as ‘monotonous’, and have entirely related the use of the platform to the elderly and their habits. One interviewee reported:

PARTICIPANT 1 [FG1]: I don’t use Facebook very much because it only shows you the same things over and over again. Once I opened the stories of (unintelligible), and from then on, always only those stories. Basically, it’s a page called ‘Lying in front of blue ticks’, and from there only that. Actually, I consider it a failure of the algorithm. If you know I don’t use it that much, at least let’s vary it a little bit, no?

Yet normative accounts of the Instagram algorithm can also very much be contaminated by the infrastructural features of the platform; since it is a highly visual platform, one participant remarks that Instagram ‘has no intention of attracting more insight’ (Participant 3 [FG2]). In addition, many interviewees normatively distance themselves from the Instagram algorithm because it is perceived to prioritise gossip over personal interests. As one participant puts it:

PARTICIPANT 7 [FG1]: It’s true, on Instagram there’s the ‘For you’ section, which also has nothing to do with me: there are a lot of videos, like a lot of gossip and youtubers that don’t interest me in the slightest, so that really… the algorithm isn’t doing its job well.

Even when users prefer Instagram as a primary source of information, their motivation often comes with a justification, or an extra specification in relation to the sources used. As one participant explains:

PARTICIPANT 5 [FG1]: Me too (I get my information on Instagram), but my sources are less reliable. They are not, for example, newspapers; they are more like pages created by people.

(B) Somewhat particularistic collective imaginaries

On many occasions, collective imaginaries are shared across respondents, probably pointing to a larger and very general category of users making the same use of the same platforms. In other accounts, however, personalisation may be mistaken for the standard, so that imaginaries are only thought to be shared but are, in truth, particularistic and derive from one user’s personal use. For instance, while there has been much controversy regarding the role that social media recommendation systems may have played in funnelling conspiracy theories or fake news, their influence has been downplayed in recent literature (Fletcher and Nielsen 2018). Yet users who regularly read this type of information on the platform still seem to think of it as an algorithmic pattern involving many users, whereas it is most likely related to their personal use. Speaking about Facebook, one participant declares:

PARTICIPANT 7 [FG1]: I see that on Facebook and on social networks this thing [conspiracy or no-vax theories] comes up exponentially… And I notice that on the Internet… My neighbour works as a translator and, working from home, she is often on social networks and has been influenced a lot by these theories.

Another example of users under-acknowledging their own agency, and the processes of personalisation that stand behind what they judge to be a collective scenario, comes with the consumption of local news. Local news – which, we can assume, is a good example of how the algorithm is able to personalise content according to the user’s socio-demographic characteristics, geographic location, and interests – is often the topic of these particularistic, yet generalised, imaginaries. In opposition to the majoritarian view of Facebook as the primary provider of daily, national, and international news, an interviewee says:

PARTICIPANT 2 [FG1]: (Local) news reaches me much earlier than television and the news. The news of my area, since I’m in various Facebook groups… And so you know if something happens, maybe to an acquaintance in the same village… Only then do you get it in the newspapers! Often I too, when something serious happens in the area, post it, and automatically everyone in the area knows the news.

Similarly, one interviewee, in describing how Twitter works, admits:

PARTICIPANT 1 [FG3]: Whereas for Twitter... Which I end up using more than I think. It works like a charm on local news. Like ATM [Milan’s local transport agency] and the municipality of Milan use this channel a lot, both to communicate what's happening, so you've been waiting for the bus for four hours... You go and look on Twitter and maybe there's someone else who has written, and they've replied. They are active I must say.

But then again, Twitter—or any other platform—may not be everybody’s principal channel for local news. In fact, to some, Twitter is the place one consults “only for the memes” (Participant 3 [FG5]), or where hateful conversations can be found. One interviewee reports: “if I am interested in seeing them [hateful contents] I go on Twitter and search for hashtag topics” (Participant 1 [FG3]). Hence these imaginaries, despite being narrated as the general way the algorithms work, seem instead to be related to the specific use each user makes of the platforms.

(C) Homogeneous collective imaginaries

The last category of collective imaginaries relates to a specific pattern that emerged in the focus groups regarding malicious content circulating in private chats. The role of the algorithm here is not as straightforward – we could refer to this as ‘second-order’ algorithmic content, in that it is taken from social media platforms, where algorithms are at play, and then reposted in chats, usually in the form of a link. Many users admit to resharing information from platforms to chats, as in the following account:

PARTICIPANT 1 [FG4]: So I put a story on Instagram to automatically share on Facebook, and then I post it directly on WhatsApp […] The contacts I have on Instagram are different from the ones I have on Facebook. And they’re not the same as the ones I have on WhatsApp, so this way I come out with the same picture on all three…

Yet this second-order circulation of content, which appears to be a mechanical habit in this very last quote, is mostly seen as intentionally malicious. Whilst referring to WhatsApp, one user says:

PARTICIPANT 8 [FG1]: It usually happens in groups, that the single person sends you the fake [news] and there is always someone who opens it. But I don't... A news item that comes to you via chat is to be checked, so... There's not even a doubt anymore, beyond whether it's true or not, I'm going to look somewhere else. Because that's where we are at now.

The tendency of our interviewees to ‘other’ the algorithm is, hence, well reflected in these collective imaginaries; when they evoke these standardised scenarios, users are not concentrating on the role their own experience might play in shaping their imaginary, but are instead looking for the common features of algorithmic recommendation systems that can bring together the actions and subjective perceptions of multiple users. The feeling that there exists a common ground in algorithms that guides all users’ behaviours seems to arise from the fact that a) people may be unaware that their online activity is leading them to highly personalised content, and are thus left with a collective imaginary that is entirely personal and not applicable to other users; or b) their actions comply – or are deemed to comply – with the purposes of a given platform’s recommendation system, so that their level of personalisation is either genuinely more congruent with the generalised picture they come to create, or normatively made to appear so.

Conclusion

This article has discussed the relationship between users’ subjective engagements with social media algorithms and their algorithmic imaginaries. In so doing, we have introduced the notion of the ‘algorithmic other’, arguing that users engage in a process of counter-subjectivation of algorithms in response to the individuation process the latter enact upon them. The othering of social media algorithms, we have shown, represents a by-product of the intense personalisation of users’ own online experience, and suggests that algorithmic imaginaries should be considered in an intertwined relationship with subjective engagements, the latter being equally important reflexive heuristics in the effort to circumvent the ‘black box’ (cf. Bonini and Gandini 2020).

This inevitably extends to the broader discussion concerning algorithmic systems on social media platforms as technologies of power in society at large. It may be argued that, while established theories of othering assume a subordination of the ‘othered’ subject and its relegation to an inferior position (cf. Jensen 2011), social media users express a process of counter-subjectivation and othering that ultimately reconfirms their own subordination to the power of algorithmic affordances, and their resignation to being ‘individuated’ by them. In their othering of ‘the algorithm’, social media users seem to experience a form of ‘automated powerlessness’ (Are 2022): they recognise the contrast between expected standards of news provision and platforms’ commercial goals, but they tend to accept the latter as intrinsic to the system. The digital nihilism (Lovink 2019) emerging from this combination of attitudes towards the algorithmic other is reinforced by the very same logic of the platform, leading to an experience that is akin to the notion of ‘surveillance realism’ (Dencik and Cable 2017). The pervasiveness of the algorithm(s) in recommending content fuels a sense of inevitability and resignation (Draper and Turow 2019) to the way algorithms work, no matter how much users recognise that – in the words of a participant – ‘the dangerous aspect is when newspapers try to imitate companies to make content more entertaining’ (Participant 3 [FG2]). The result is that users’ subjectivities become both productive for, and subjected to, the digital platform (Foucault 1996, p. 26). In the context of platform work, Walker et al. (2021) have observed how gig workers’ lack of resistance towards platforms can be explained using the Foucauldian concept of biopower: the platform undermines collective action and forms of active resistance to algorithmic power by isolating gig workers and diffusing a pervasive sensation of control that makes algorithmic power appear inevitable. The same mechanism of biopower—even while acknowledging the differences between gig workers and social media users—is observable in our case. Users’ subjectivities relate to algorithms in ways that highlight the personal dimension of this relationship, somewhat hiding the collective dynamics at play—which, in any case, are present, as illustrated.

However, these dynamics of resignation and ‘surveillance realism’ are located by our participants within a discussion where the ‘usefulness’ of algorithms and their role is actually the subtext of many of their narrations. Their resignation still comes in a trade-off with the utility derived from their platform use and, one might argue, with the enjoyment that users derive from platform activity. There seems to be an analogy, in other words, between the process of counter-subjectivation described here and the Lacanian process of othering that Bandinelli and Bandinelli (2021) outline in relation to dating apps, as both seem to be deeply entwined with the notion of enjoyment. As on dating apps, users’ experience of recommendation algorithms on social media is one “whereby the notions of desire and enjoyment can be mobilised to build a bridge between the dimension of individual experience and the discursive and libidinal functioning of social and political apparatuses” (Bandinelli and Bandinelli 2021, p. 187). This is an important insight that we expect future critical research on social media algorithms, digital societies, and information circulation to take up and directly address.