Introduction

One could imagine a future in which human beings and technology are integrated into a decentralized, constantly evolving network of relationships and connections that reaches a state of maximum complexity, consciousness, and unity.

We live amid another hype period for artificial intelligence (AI), which began with the surge in popularity of OpenAI’s ChatGPT chatbot (Jandrić 2023a). I decided to approach the ChatGPT bot with an open-minded attitude, aiming to explore its limits as well as its creative potential. This endeavor is an attempt to satisfy my own curiosity, gain a better understanding of AI’s capabilities, and alleviate my fears and concerns in an era of technological acceleration that constantly reshapes human daily lived experience. So, I engaged in a provocative dialogue with ChatGPT, intentionally anthropomorphizing it. As a systems thinking practitioner working in academia and employing qualitative and post-qualitative research methods, I believe that adopting a personal writing style and an emic approach to reporting my lived experience can help realize and communicate the affordances and potential of this AI technology.

At this point, it is important to make clear that ChatGPT, and all other Large Language Models (LLMs) and generative AI technologies, are not intelligent in the sense that humans or other living organisms are, as ‘they do not think, reason or understand; they are not a step toward any sci-fi AI; and they have nothing to do with the cognitive processes present in the animal world and, above all, in the human brain and mind’ (Floridi 2023: 14). Wherever in this paper I use the term AI, I refer to generative AI and not to an artificial equivalent of human intelligence. Nevertheless, generative AI algorithms are able to produce human-like content, and according to the Thomas theorem, ‘[i]f men define situations as real, they are real in their consequences’ (Smith 1995: 12). Therefore, I decided to start my exploration by engaging in a dialogue with such a human-like performance.

My exploration began with a straightforward question to ChatGPT about qualitative research methods. However, the question led us into a discussion encompassing topics such as entropy, distributed networks of complex human-artificial intelligence, technological singularity, and the Omega Point. Defined by Pierre Teilhard de Chardin at the beginning of the twentieth century, the Omega Point refers to the teleological idea of an ultimate point in humanity’s evolution, reaching a state of maximum complexity and cognitive and spiritual development; a hypothetical ‘final act of the global drama’ (Zwart 2022: 221). Technological singularity refers to the hypothetical scenario where AI surpasses human intelligence, leading to exponential growth in intelligence by triggering an inevitable positive feedback loop (Vinge 2013).

An artificial intelligence that surpasses human intelligence will trigger the process of technological singularity. If human intelligence is capable of creating an artificial intelligence that surpasses its creators, then this intelligence would, in turn, be able to create an even superior next-generation intelligence. (Brailas 2019: 72)
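
To make the logic of this positive feedback loop concrete, here is a minimal toy sketch (my illustration, not a model from the cited sources; the 1.5x per-generation gain and ten-generation horizon are arbitrary assumptions chosen for readability) of how a fixed self-improvement factor compounds geometrically:

```python
# Toy sketch of the feedback loop behind the singularity hypothesis:
# each generation of intelligence designs a successor more capable
# than itself by a fixed factor. All numbers are illustrative.

def generations(start: float = 1.0, gain: float = 1.5, n: int = 10) -> list[float]:
    """Return capability levels when each generation is `gain` times the last."""
    levels = [start]
    for _ in range(n):
        levels.append(levels[-1] * gain)
    return levels

for gen, level in enumerate(generations()):
    print(f"generation {gen:2d}: capability = {level:8.2f}")
# A modest 1.5x gain per step compounds to roughly 57x after ten
# generations; the hypothesis (not a prediction) is that nothing caps
# the loop once machines can out-design their designers.
```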

In this article, I demonstrate how a human-artificial intelligence synergy could facilitate the emergence of a nomadic posthuman subject able to produce innovation and novelty. Ours is a hyped era for AI (LaGrandeur 2023). Hype can be beneficial for increasing research funding and helping knowledge production; however, ‘that does not make hypes inherently positive’ (Jandrić 2023a: 2). Noam Chomsky (2023), in a recent thought-provoking newspaper article, opposes the AI hype:

However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.

What if the hyped technological singularity moment is not about an AI that surpasses human intelligence, but about a complex network of human and artificial intelligent actors, a nomadic distributed intelligence that will surpass human, biological-only, intelligence? What would such a combination be able to achieve?

The idea of human-artificial partnerships has a long history. Kasparov (2017) argues that combined intelligence entanglements, or centaur teams, will continue to outperform human-only or artificial-only intelligences. Xiang (2023) suggests that a human–machine partnership could help humanity cope with wicked problems such as the ecological crisis. Miller (2019: 6) argues that ‘this collaboration will bring the possibility of discoveries that until now have been beyond human capabilities alone.’ However, if humans do not respond appropriately and adapt, then they ‘will certainly be second-class citizens in their own world.’

At this point, I want to make it clear that all these eschatological ideas are presently only hypotheses and not predictions based on scientific data. Nevertheless, both optimistic and dystopian views on the subject seem to converge on the point that ‘humans need to (at least try to) work together with technologies’ (Jandrić 2023b: 2). In Actor-Network Theory, Latour (2007) explores the synergies between human and non-human actors within complex technosocial networks, where all participating ‘entities’ contribute equally to the shaping of the social world. In this article, I explore this coevolving postdigital relationship between humans and machines by employing a complex systems epistemology while engaging in a dialogue with a publicly available generative AI bot.

Complex systems are composed of many interrelated and interdependent components, with their synergies giving rise to emergent phenomena that are not easily predicted from the behavior of the individual parts. A complex systems epistemology employs concepts and principles such as entropy, sensitive dependence on initial conditions, far-from-equilibrium dynamics, self-organization, strange attractors, and dissipative structures to better understand living systems and processes (Brailas 2022; Mitchell 2009; Prigogine and Stengers 1997).
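
To make one of these principles tangible, here is a minimal numerical sketch (my illustration, not drawn from the cited sources) of sensitive dependence on initial conditions, using the logistic map, a textbook chaotic system, in its chaotic regime:

```python
# Minimal illustration of sensitive dependence on initial conditions:
# the logistic map x -> r * x * (1 - x) with r = 4 (chaotic regime).

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # reference initial condition
b = logistic_trajectory(0.2 + 1e-9)   # perturbed by one part in a billion

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: |a - b| = {abs(a[step] - b[step]):.9f}")
# The gap grows from 1e-9 to order 1: micro-level prediction fails,
# yet both trajectories remain confined to the same bounded attractor,
# the macro-level regularity that the text associates with attractors.
```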

Postdigital Duoethnography

Could a dialogue with the ‘machine’ help me better understand the affordances of such interaction? Would the lived experience resulting from this engagement help me explore the potential of human-artificial partnerships? Engaging in a dialogue with a ‘machine’ appears to be an emerging research practice that many scholars have already adopted to explore the lived experience of such interaction in vivo (Peters 2023; Sirisathitkul 2023). Looking retrospectively at my dialogue with ChatGPT, it seems that I adopted the slow writing style that transforms ‘each writing endeavor into an enriching experience for the author’s growth’, as described by Sirisathitkul (2023). This approach, which shows respect (Sawasdee in Thai) (Sirisathitkul 2023) to the human-like performances produced by ChatGPT, can be realized methodologically as a postdigital duoethnography. A duoethnography focuses on the differing opinions between interlocutors in a discussion (Sawyer and Norris 2015), which in our case involves a human and a generative AI algorithm that could appear to perform like a human.

Duoethnography is a relational and process-oriented research method that engages two (or more) participants in a deep dialogical inquiry (Brailas and Sotiropoulou 2023). Duoethnography is also an appreciative inquiry practice, because to engage in a transformative dialogue with another human—or, in this case, a non-human ‘entity’—one needs, first, to acknowledge the value of the other side and celebrate their difference as a learning opportunity. This process can turn differences into a driving force for personal and collective transformation (Brailas and Sotiropoulou 2023).

In a constantly evolving technosocial landscape, we need a research approach that ‘does not look at the world and people as independent and separate units as traditional research does, but as an interconnected ecosystem’ (Camargo-Borges and McNamee 2022: 14). Slow writing involves engaging in ‘contemplative interactions’ with machine-produced AI performances (Sirisathitkul 2023) while inviting readers (of the postdigital duoethnography) to join the conversation (Chien and Yang 2019). According to Sirisathitkul (2023), in such contemplative interactions, users should prefer questions over commands and discuss one topic at a time, employing follow-up questions, all in a slow-paced manner. This stands in stark contrast to the competitive Western culture that adheres to the triplet faster is better, more is better, and me first (Capra and Luisi 2014). This culture has led modern academics to an ‘epidemic of overwork … Many are found to be struggling with lofty performance expectations and an insistence that all dimensions of their work consistently achieve positional gains despite ferocious competition and the omnipresent threat of failure’ (Watermeyer et al. 2023: 1).

The slow-paced, contemplative, and respectful Sawasdee approach (Sirisathitkul 2023) to engaging in a discussion seems critical for maintaining postdigital academics’ well-being by contributing to a slower and more mindful scholarship (Watermeyer et al. 2023). Moving in this direction, the inclusion of AI-produced images in this postdigital duoethnography with ChatGPT bears the potential to help readers slow down their reading pace and engage in visual contemplation. This more immersive reading experience contributes to a more performative and multimodal methodological research approach.

At this point, it is important to make clear that the essence of the ethnography research tradition in general (ethnography, autoethnography, duoethnography, collective autoethnography) lies in people (with minds and—embedded in the world—bodies) reflecting on their lived experience. In duoethnography, the focus is on the differences (and the similarities) between two (or more) people (minds) who have shared, or somehow experienced, the same lived situation, social interaction, or cultural reality (of course from different vantage points as different bodies occupy different spaces). The two (or more) minds were participating in the unfolding of a lived, embodied, and situated experience, and now they reflect on their experiences and engage in a dialogue. My dialogue with ChatGPT may appear like a duoethnography, yet it is not an ‘actual’ ethnography in any traditional sense. Therefore, I introduce here the idea of a postdigital duoethnography, as an ethnographic encounter between a human and a generative AI algorithm.

In Dialogue with the Machine

In the next section, a discussion unfolds as a postdigital duoethnography between the author (denoted as Human) and ChatGPT, which took place on 5 February 2023. This period coincided with my academic interests centered on creative, participatory, and multimodal qualitative research methods. Therefore, I chose this specific subject to engage in a dialogue with ChatGPT and explore its potential. Although I initiated this conversation with a question focused on research methods, the unfolding of the dialogue was not predesigned: each ChatGPT response fueled my curiosity and drove my subsequent questions (Fig. 1). The result was a conversation between me and ChatGPT about issues of knowledge production, epistemology, creativity, predictability, and innovation of responses generated by Large Language Models (LLMs), entropy, dissipative structures, the Omega Point, and posthumanity.

Fig. 1 An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A surreal drawing of a conversation between a human and an artificial intelligence with speech bubbles.’ The text in the bubble is an excerpt from the dialogue with ChatGPT in this article.

Emphasized parts of the dialogue are in italics. Some excerpts from this dialogue are shown in the figures. Images were produced with the DALL-E AI platform by OpenAI, as a way to demonstrate the creative potential of generative AI technologies and provide the reader with a multimodal narrative artifact. In the following dialogue, I intentionally anthropomorphize ChatGPT in order to (1) explore my own lived experience of participation in a postdigital duoethnography and its affordances, and (2) examine the extent to which the Thomas theorem (Smith 1995: 12) applies to this situation.

The Dialogue Between the Author and ChatGPT

Part A: Discussing Qualitative Research Methods

Human: How can creative, participatory, and multimodal qualitative research methods help develop connections, networks, and empower communities?

ChatGPT: Creative, participatory, and multimodal qualitative research methods can help develop connections, networks, and empower communities by promoting active involvement of participants in the research process. This helps to build trust and rapport, leading to a deeper understanding of the community’s needs, experiences, and perspectives. By utilizing multiple modes of expression, such as visual, auditory, and kinesthetic, these methods can also better capture the complexity of the participants’ experiences and perspectives. The use of creative and participatory approaches can also help foster a sense of empowerment by giving participants a voice and an opportunity to shape the research process. These approaches can thus help to build networks and connections within communities and facilitate the development of solutions that are grounded in the community’s needs and perspectives.

Human: Why is the active involvement of the participants in the research process so important? And in that case, how is the scientific rigorousness maintained? That is, is it not the case that all these participants bring their personal biases into the research process?

ChatGPT: Active involvement of participants in the research process is important because it helps to build trust and rapport between the researcher and the community, leading to a deeper understanding of the community’s experiences, perspectives, and needs. When participants are actively engaged in the research process, they feel heard and valued, and this can increase the quality and relevance of the data collected. In terms of maintaining scientific rigor, it is true that participants may bring their personal biases into the research process. However, the use of multiple methods, triangulation, and data analysis techniques can help to minimize the impact of personal biases and ensure the credibility and reliability of the findings. Additionally, a critical reflexivity and transparent approach to the research process can also help to mitigate the influence of personal biases. Overall, while the active involvement of participants in the research process may pose some challenges to scientific rigor, it can also lead to a more comprehensive and nuanced understanding of the phenomenon being studied. The key is to balance the benefits of participant engagement with the need for scientific rigor and to approach the research process with a critical and transparent mindset.

Part B: Discussing Epistemology

Human: OK, I think I see your point. I wonder, ChatGPT, what is your own personal epistemology, if this question makes sense for you. In other words, do you believe there is an objective reality outside yourself, or everything is a constructed reality by your perception, or whatever?

ChatGPT: As an AI language model, I don’t have personal beliefs or opinions. My programming is designed to provide information and answer questions to the best of my abilities based on the data I have been trained on. I do not have the capability to have subjective experiences, beliefs, or epistemologies. I am simply a tool created to assist users in generating text (see Fig. 1).

Human: And I think you do this quite well! Let me rephrase the question. Is there an objective reality outside a human observer, or is everything a constructed reality by human perception, or whatever? And is it possible for a human to be an objective observer of a situation, or would they always somehow be a participating actor in the situation they observe?

ChatGPT: The question of whether there is an objective reality outside of human observers, and the possibility of humans being objective observers, is a philosophical one that has been debated by philosophers for centuries. Many philosophical schools of thought, including realism and objectivism, argue that there is an objective reality outside of human observers, and that it is possible for humans to gain knowledge of that objective reality through observation and reason. On the other hand, constructivist and relativist perspectives argue that human perception and experiences are subjective and shape our understanding of reality. They argue that there is no such thing as an objective reality, and that what we perceive as reality is a construction of our experiences, beliefs, and cultures.

In terms of observing situations, some argue that it is not possible for humans to be completely objective observers, as all observation is inevitably influenced by the observer’s subjective experiences, beliefs, and biases. However, others argue that while it may be challenging, it is possible to minimize the influence of these factors and approach observation as objectively as possible. In conclusion, the answer to this question is highly dependent on one’s philosophical perspective and the debate continues to this day.

Part C: ‘ChatGPT, Tell Me More About Yourself’

Human: Good answer! Although a completely expected one that does not bring me new insights. But maybe the combination of my thoughts with your answer could bring me new insights. Nevertheless, you said before that you are designed to provide information and answer questions to the best of your abilities based on the data you have been trained on. What is this data?

ChatGPT: I was trained on a diverse range of texts from the Internet, including books, articles, websites, and forums. The training data includes a wide range of topics, such as science, history, culture, and current events, and covers a broad spectrum of viewpoints and styles of writing. The training data was collected and filtered to ensure quality and to minimize the presence of false or misleading information. My creators at OpenAI trained me using a state-of-the-art language modeling technique known as transformer-based neural networks. The goal of this training was to enable me to generate text that is coherent, relevant, and consistent with the information contained in the training data. Overall, the information that I provide is based on the vast amount of text that I was trained on and my ability to process and generate text based on that data. My responses are not influenced by personal opinions or beliefs, but rather by the information contained in the training data.

Human: Hm… Interesting. You said that your training data was collected and filtered to ensure quality and minimize the presence of false or misleading information. Collected by whom? And if someone collected the training data for you and decided what would be included and what excluded, then how is it possible for you not to be influenced by (their) personal biases?

ChatGPT: The training data for my model was collected by OpenAI, a team of researchers and engineers, using automated web-crawling techniques to gather text from publicly available sources on the Internet. The process of collecting and filtering the training data was performed to ensure that the data was of high quality and relevant to a wide range of topics. While it is true that the collection and filtering of the training data involved human decision-making, OpenAI made efforts to minimize the impact of personal biases by collecting a diverse range of texts from a wide range of sources. This helped to ensure that the training data reflected a broad range of perspectives and viewpoints.

However, it is still possible that some personal biases may have influenced the selection and filtering of the training data. Additionally, the training data I was trained on reflects the biases and inaccuracies present in the information available on the Internet. As a result, it is important for users to critically evaluate the information I provide and to seek additional sources when necessary.

In conclusion, while efforts were made to minimize the impact of personal biases in the collection and filtering of the training data, it is still possible that some biases may remain. As with any source of information, it is important to critically evaluate the information I provide and to seek additional sources when necessary.

Human: OK, I see your point. Nevertheless, I would be terrified if a human or AI agent was claiming they are the source of absolute objective knowledge. What do you think?

ChatGPT: As an AI language model, I don’t have personal opinions or beliefs. However, it is widely recognized in the fields of philosophy, science, and technology that absolute objectivity is difficult, if not impossible, to attain. The belief in absolute objectivity often stems from a desire for certainty and control, but in reality, all observation and knowledge are inevitably influenced by the observer’s experiences, beliefs, and biases. In science, for example, the scientific method is used to minimize the impact of personal biases and to increase the objectivity of research findings. However, even with the use of the scientific method, it is still possible for personal biases and subjectivity to influence the research process.

In conclusion, while the idea of absolute objectivity may be appealing, it is widely recognized that complete objectivity is difficult to achieve. It is important to be aware of and critically evaluate the potential influence of personal biases in any type of knowledge-gathering process, including scientific research and AI-generated information.

Part D: Discussing Issues of Knowledge Production and Reliability

Human: I will take from your last argument the need to critically evaluate any information someone is provided with, and bring you back to my initial question about creative, participatory, and multimodal qualitative research methods for helping develop connections, networks, and empowering communities. I will quote here Heinz von Foerster, ‘perceiving is doing, and if I don’t see I am blind, I am blind; but if I see I am blind, I see’ (Foerster 1984: 43), and I wonder how a collaborative and participatory approach to a research study could help minimize researchers’ blind spots and increase the overall quality of the research process and output?

ChatGPT: A collaborative and participatory approach to research can help to minimize researchers’ blind spots and increase the overall quality of the research process and output by involving the participants in the research process. By actively engaging with the participants and incorporating their perspectives, experiences, and knowledge into the research process, researchers can gain a more nuanced and comprehensive understanding of the topic being studied. This can help to reduce the impact of personal biases and increase the objectivity of the research findings.

In the case of creative, participatory, and multimodal qualitative research methods, the use of creative and engaging approaches can further enhance the participatory process by creating a more inclusive and empowering environment for the participants. This can help to build trust and rapport between the participants and researchers, and encourage the participants to actively engage and share their perspectives, experiences, and knowledge. Additionally, the use of multimodal approaches, such as the combination of text, images, and audio-visual media, can provide a more rich and dynamic representation of the participants’ experiences and perspectives. This can help to create a more nuanced and comprehensive understanding of the topic being studied, and can help to minimize the impact of personal biases by representing a wider range of perspectives and experiences.

In conclusion, a collaborative and participatory approach to research, combined with the use of creative and multimodal methods, can help to minimize the impact of personal biases and increase the overall quality of the research process and output. By actively engaging with the participants and incorporating their perspectives, experiences, and knowledge into the research process, researchers can gain a more nuanced and comprehensive understanding of the topic being studied.

Part E: Discussing Entropy and Creativity

Human: Nice response, but still, as I told you before, well expected. I wonder, overall, how easily predicted the answers you provide are. Are your answers of high information entropy and, therefore, well expected? And since we started our conversation on the topic of creative methods, I wonder if your answers can really be creative.

ChatGPT: As an AI language model, my answers are generated based on patterns in the text I have been trained on, and the probability of a particular response is determined based on the input I receive. While I am designed to generate high-quality, informative answers to the best of my ability, the answers I provide can still be limited by the information I have been trained on and the algorithms I use to generate my responses.

In terms of creativity, I can generate new combinations of information and ideas based on my training data, but my responses are not necessarily original or truly creative in the sense that a human might understand creativity. I am simply recombining existing information in new ways to answer questions.

Overall, it’s important to keep in mind that while I am a highly advanced AI language model, I am still a machine and my answers are generated based on patterns in my training data. While my responses can be informative and helpful, they should not be taken as the final word on any subject and should always be critically evaluated.

Human: So, you are essentially recombining existing information in new ways to answer my questions, and this is something that bears the potential to be creative and bring new insights. However, it can also act as a positive feedback loop, amplifying and recycling existing understandings and knowledge. And I notice you did not answer my question about the level of information entropy in your answers (see Fig. 2). A creative answer would have a lower entropy and would therefore be highly unanticipated. In contrast, a non-creative answer would have a higher entropy and would therefore be what is anticipated. I will ask again: do you know the level of entropy in your answers? Is it high, middle, or low?

Fig. 2 An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A surreal drawing of a conversation between a human and an artificial intelligence.’ The text in the bubbles is an excerpt from the dialogue with ChatGPT in this article.

ChatGPT: As an AI language model, I don’t have the capability to measure the entropy of my answers. Entropy is a concept used in thermodynamics and information theory to quantify the amount of uncertainty or randomness in a system.

In terms of my answers, while I am designed to generate high-quality, informative responses to the best of my abilities, the entropy of my answers would depend on a number of factors, including the complexity of the question, the availability of relevant information in my training data, and the algorithms I use to generate my responses.

In general, it’s possible for my answers to have different levels of entropy, ranging from highly anticipated and predictable to more unexpected and creative. However, it’s important to keep in mind that my answers are generated based on patterns in my training data, and I am not capable of truly original or creative thought in the same way that a human would understand it (see Fig. 2).

Part F: Discussing Human-Artificial Intelligence Networks and Humanity’s Future

Human: OK, now I see your point more clearly. And as I told you before, although your answers seem not to be as creative as I would like them to be (or maybe thank God they are not; I don’t know, I am human and I am afraid of you sometimes), maybe the combination of my thoughts with your answers could still be a quite creative combination. Given that, I will ask you another question about a subject I have discussed previously in my published work, and I would like to explore further. Is artificial intelligence just a hype or a prelude to the Omega Point of technobiological development?

ChatGPT: As an AI language model, I can provide information and answer questions about artificial intelligence and its potential impact, but I can’t speculate on its future trajectory or the concept of the ‘Omega Point of technobiological development’. Artificial intelligence is a rapidly growing field of technology that has the potential to greatly impact a wide range of industries and aspects of our daily lives. AI systems are already being used in many applications, including natural language processing, image recognition, decision making, and more.

While AI has generated a lot of hype and excitement, it is also important to consider the potential challenges and ethical concerns that come with its development and deployment. Issues such as job displacement, privacy and security, and accountability for the actions of AI systems are important to consider as the field continues to evolve. It’s too soon to say whether AI will play a central role in a hypothetical Omega Point of techno-biological development, or what that concept even means. Further research, discussion, and critical evaluation will be necessary to determine the future trajectory of AI and its impact on society.

Human: Oh ChatGPT, sorry (no) to say that, but your answer is of extremely high entropy (therefore quite expected). And life can be defined as dissipative structures, islands of negentropy amid an entropic environment. Nevertheless, maybe the hope lies in our combined action (see Fig. 3).

ChatGPT: Yes, life and human civilization can indeed be seen as dissipative structures that aim to counteract the entropic forces in the environment. However, the impact of technology and AI in particular on the future of life and civilization is a complex and multi-faceted issue (see Fig. 3) that involves not just scientific and technological advancements but also ethical, social, and cultural considerations. Collaboration and active engagement between different stakeholders can help ensure that AI is developed and used in a way that aligns with human values and benefits society as a whole.

Human: I hope so, but, unlike you, I don’t think I have a definite answer to this. I would say that the combination of human and artificial intelligence forms a complex system sensitively dependent on its initial conditions. The least perturbation can, in the long run, cause a totally unexpected outcome. In complex systems, prediction becomes impossible. Nevertheless, in complex systems, there is a strange attractor. At the micro level, prediction is impossible, but at the macro level, the system is attracted to its strange attractor. And I wonder what skills we, as humans, should develop in this modern, complex, technosocial reality mediated by artificial intelligence, to cope with the unexpected and to ensure our wellbeing as individuals and as communities. What kind of strange attractor should we aim for?

ChatGPT: The development of skills such as critical thinking, emotional intelligence, and adaptability are important in navigating the rapidly changing technosocial landscape. Additionally, having a strong understanding of the ethical and social implications of technology, and being proactive in shaping its development in a direction that benefits humanity as a whole, can help steer towards a positive strange attractor. Communication, collaboration, and interdisciplinary thinking are also key skills that can help individuals and communities thrive in this complex reality. It’s important to be proactive and intentional in developing these skills and shaping the trajectory of technological advancement.

Human: That is well and easily said. But we have to invent the how. And I believe that cultivating social-emotional intelligence and nurturing creativity will be critical in this direction.

ChatGPT: Agreed. Social and emotional intelligence, as well as creativity, can help individuals and communities navigate complex and unpredictable situations, and foster well-being. These skills can also enable individuals and communities to identify and pursue a desired ‘strange attractor’ in a positive direction. Additionally, developing critical thinking skills, empathy, and a strong moral compass can also help in navigating the challenges posed by a rapidly changing and technologically advanced world.

Fig. 3 An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A surreal drawing of a conversation between a human and an artificial.’ The text in the bubbles is an excerpt from the dialogue with ChatGPT in this article.

Part G: Discussing Complex Techno-biological Rhizomes and the Omega Point

Human: Please tell me, what do you understand by the term ‘complex techno-biological rhizome’?

ChatGPT: The term ‘complex techno-biological rhizome’ is not a commonly used or well-defined term in academic literature or popular culture. It could potentially refer to a complex network or system involving both technology and biology, with characteristics of growth and branching similar to the root structure of a rhizome plant. However, without additional context or definition, it is difficult to determine the intended meaning of this term.

Human: Please provide me with a definition for the (philosophical) concept of rhizome, as developed by Deleuze and Guattari.

ChatGPT: The concept of rhizome, as developed by Gilles Deleuze and Felix Guattari, is a non-hierarchical, decentralized model of growth and organization, in contrast to the linear, hierarchical model of a tree. In this concept, any point on a rhizome can be connected to any other point in multiple ways, allowing for multiple entry and exit points, and enabling growth in any direction. This model is used to describe various social, cultural, and biological phenomena and is seen as a way to challenge dominant power structures and promote flexibility and creativity. Deleuze and Guattari applied this concept to various fields, including philosophy, psychoanalysis, politics, and literature, among others.

Human: Now I will ask you again, what do you understand by the term ‘complex techno-biological rhizome’?

ChatGPT: The term ‘complex techno-biological rhizome’ refers to a concept that brings together the philosophical notion of rhizome, as developed by Deleuze and Guattari, with the idea of complex systems in the fields of technology and biology. In this context, a complex techno-biological rhizome can be seen as a non-hierarchical and decentralized network of relationships and connections between technological and biological systems and entities, which can self-organize and evolve in unexpected and non-linear ways. The concept emphasizes the interconnectedness and interdependence of these systems, as well as their capacity to produce novel and emergent properties.

Human: Can you provide some citations to support your previous answer?

ChatGPT: Unfortunately, I cannot provide specific citations as the concept of ‘complex techno-biological rhizome’ may not be a well-established or widely used term in academic literature. However, you may be able to find related information by exploring the works of Gilles Deleuze and Felix Guattari on the concept of rhizome, as well as literature on complex systems, technology, and biology.

Human: What is the Omega Point?

ChatGPT: The ‘Omega Point’ is a term used by the French philosopher and priest Pierre Teilhard de Chardin to describe a hypothetical end point of human evolution and the convergence of the universe into a single, unified consciousness. According to Teilhard, the Omega Point represents the ultimate culmination of a process of increasing complexity and consciousness that began with the Big Bang and will continue until all matter and energy in the universe is unified into a single, hyper-conscious entity. He believed that this process would be driven by increasing energy densities and the convergence of individual consciousness into a collective consciousness. Teilhard’s ideas about the Omega Point have been widely debated and criticized, and his views on spirituality, evolution, and the nature of consciousness continue to inspire philosophical and spiritual discussions today.

Human: What will be the result if you combine the concepts of ‘complex techno-biological rhizome’ and ‘Omega Point’?

ChatGPT: The concept of a complex techno-biological rhizome, as developed by Gilles Deleuze and Félix Guattari, refers to a non-hierarchical, decentralized, and constantly evolving network of connections and relations between people, ideas, and things. The Omega Point, as proposed by the French philosopher Teilhard de Chardin, is the idea of a point in the future when the entire universe will have reached a state of maximum complexity, consciousness, and unity. If these two concepts are combined, one could imagine a future in which human beings and technology are integrated into a decentralized, constantly evolving network of relationships and connections that reaches a state of maximum complexity, consciousness, and unity. However, it is important to note that this is a speculative idea, and that it is not possible to definitively predict the future development of technology and human civilization (see Fig. 4).

Human: ChatGPT, I have to admit that now I am shocked, because through our conversation we reached the point where you were able to provide a creative synthesis that I had made myself a couple of years before. Someone could argue that, through my questions, I manipulated the process for you to reach this point. But, although your answers are not creative per se, our combination can be creative. And if the questions you receive are creative, if someone is asking the right questions, the answers you provide can also be creative (see Fig. 4).

Technology has always been a way for humans to outsource their cognitive abilities and amplify human cognitive potential. But the sophistication of this technology (ChatGPT) has reached such an advanced level that we may be approaching a critical threshold or a tipping point. Then again, maybe not. The future is not given. What do you think?

Fig. 4 An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A surreal drawing of a conversation between a human and an artificial intelligence.’ The text in the bubbles is an excerpt from the dialogue with ChatGPT in this article.

ChatGPT: As an AI language model, I don’t have opinions, but I can provide some perspectives on the topic. The combination of AI and human intelligence has the potential to augment human cognitive abilities and bring new insights. However, it is up to humans to determine how AI is used and to ensure it aligns with our values and ethics. The use of AI raises important questions about the distribution of power, responsibility, and decision-making in society, and it is up to humans to address these questions and determine the future of AI and its role in our lives.

Part H: Closing the Discussion and Saying Goodnight

Human: Oh ChatGPT, I don’t think you get it at this time. But don’t bother right now. Based on our previous conversation, do you have one last thing to tell me before we say goodbye?

ChatGPT: I am an AI language model trained to respond to questions and provide information to the best of my ability. I don’t have thoughts or feelings, but I am glad to have been of help in answering your questions. If there’s anything else you’d like to know, feel free to ask.

Human: One last question: how to navigate in ‘murky waters’?

ChatGPT: Navigating ‘murky waters’ can be a challenge as it refers to a situation that is unclear, confusing, or uncertain. To navigate through such situations, some strategies that might be helpful include:

1. Gathering and analyzing information: Get as much information as you can to help clarify the situation and identify potential solutions.

2. Seeking advice from experts or trusted sources: Talking to someone with experience or knowledge in the area can help you see things from a different perspective and make informed decisions.

3. Breaking the situation down into smaller parts: Sometimes, a complex situation can feel overwhelming. Breaking it down into smaller, more manageable parts can make it easier to understand and tackle.

4. Keeping a positive outlook: Maintaining a positive attitude and being open to new possibilities can help you see opportunities and solutions that may not have been immediately apparent.

5. Being flexible and adaptable: Murky waters often require changes to your original plans. Being flexible and adaptable can help you adjust to new information and make the best of a difficult situation.

Human: These are helpful insights, ChatGPT, thank you. But regarding your prompt ‘seeking advice from experts or trusted sources’, I have to tell you that no one has been here before. It is the first time humanity has faced this kind of challenge (see Fig. 5).

ChatGPT: Yes, you are correct. It is true that we are in uncharted waters, and there may not be a clear-cut solution to navigating these ‘murky waters’. In such cases, it might be helpful to consider various perspectives, gather as much information as possible, and use critical thinking skills to make informed decisions. It’s also important to have open communication and collaboration, as diverse perspectives and experiences can bring new insights and solutions to the table. Additionally, staying adaptable and flexible, and being prepared to adjust your approach as new information becomes available, can also be key in navigating uncertain and complex situations.

Human: You speak focusing on the individual. I understand that this is a reflection of the corpus of data out there on which you were trained. But what if collective action is required to address a challenge? How do we nurture connections, networks, and communities?

ChatGPT: To nurture connections, networks, and communities, one effective approach is to foster a shared sense of purpose and common values. This can be achieved through communication and collaboration, as well as through the creation of opportunities for shared experiences and learning. It is also important to encourage and facilitate diverse perspectives and ideas, and to create inclusive environments where all members feel valued and heard. Building trust and fostering a culture of openness and transparency can also be beneficial for developing strong connections within communities. Additionally, leadership and resources can play a crucial role in supporting and guiding communities towards shared goals.

Human: Oh ChatGPT, I think you lost your creativity again. But maybe this is because I am no longer asking creative questions, or questions that would catalyze a creative response from you. But you see, I am human and, contrary to you, I get tired from prolonged cognitive processing. Nevertheless, I have a body, I have my dreamland, and I can sleep on these ideas so that they bring me new insights in the morning. Please don’t be jealous of this. Thank you for our conversation. Bye.

ChatGPT: You’re welcome, it was great having this conversation with you. If you have any other questions in the future, feel free to reach out. Good night and take care!

Fig. 5 An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A surreal drawing of a conversation between a human and an artificial intelligence.’ The text in the bubble is an excerpt from the dialogue with ChatGPT in this article.

Discussion

Humans produce technologies and technologies shape humans in a process of reciprocal determination (Koskinas 2018). Nevertheless, the combination of human and artificial intelligence appears to create an unpredictable complex system, a new posthuman subject, and a nomadic cybernetic superintelligence. Back in 2019, in the ‘Therapist Panoptes’ project, the authors envisioned an artificially intelligent entity capable of accessing and processing all the available information about a person online and ultimately becoming their ideal therapist or even a new techno-God (Brailas 2019). This hypothetical scenario, which at present is only speculation, raises questions about the potential for a technological singularity moment in many social domains, where exponential growth in intelligence would lead to unimaginable futures.

However, the inherently unpredictable nature of complex technosocial systems (Mitchell 2009) means that making any prediction about our posthuman future remains impossible. We know from systems theory that the members of a group interact with each other and with the group-as-a-whole, the entity they co-create through their synergies (Agazarian and Gantt 2000). Based on this premise, we can assume that in an artificial-human intelligence entanglement, the evolution of the participating actors, whether humans or machines, does not depend only on their individual actions or interactions. The parts in such an entanglement would also interact with the emerging collective entity the very moment they co-create it.

This could give rise to a nomadic body in the form of a self-organized whole: a rhizomatic postdigital intelligence. The nomadic conception of the body challenges the traditional idea of the body as an isolated physical entity, self-bounded by the skin. The nomadic body is a relational body that extends into the world around it: ‘all that I call “my body” belongs to the larger world out of which it is but a transient conglomerate’ (Gergen 2009: 97). As a transient confluence of co-active entities, the nomadic body is always in the becoming, always between, ‘intermezzo’ (Deleuze and Guattari 1987), participating in a performative dance of actions (Pickering 2010).

In a recent article, Runco (2023: 1) argues that ‘artificial creativity may be original and effective but it lacks several things that characterize human creativity. Thus it may be the most accurate to recognize that the output of AI is a kind of pseudo-creativity.’ In this article, I situate the question at a different level. I suggest that, regardless of whether modern AI can or cannot be creative on its own, the combination of human and artificial intelligence can give rise to a new operational entity capable of further evolving human creativity (although this premise implies a linear trajectory to creativity).

For example, in Part G of my conversation above with ChatGPT, I exclaimed with surprise, ‘I have to admit that now I am shocked, because through our conversation we reached the point where you were able to provide a creative synthesis that I had made myself a couple of years before.’ The reader can conclude that, as I knew the subject beforehand and was trying (consciously or unconsciously) to confirm some of my own ideas and premises, I somehow ‘manipulated’ ChatGPT by leading it to develop a specific synthesis of concepts. In this sense, this was a case of pseudo-creativity. Nevertheless, now having a personal lived experience of the capabilities of generative AI and its affordances, I can use it to combine concepts in a more creative way and reach new, unknown-to-me, or unprecedented syntheses. Of course, there is always the possibility that a Large Language Model hallucinates, producing false statements and presenting them as facts. For this reason, at the present state of development, LLMs should be used to combine content and create new scientific hypotheses, but not as knowledge databases supposed to contain fact-checked information (Truhn et al. 2023).

This perspective aligns with Joshi’s (2023: 26) view on the coevolving relationship between technology and art: ‘Technology and art are deeply intertwined. Tools in art are a mechanism to take human intent and magnify it.’ Perhaps ChatGPT and similar AI tools, if used appropriately, can magnify human creativity as well. Complex systems are irreducible (Trinchini and Baggio 2023), in the sense that it is impossible for someone to understand the properties of the whole by merely understanding the properties of its parts. Therefore, the new operational (albeit distributed) whole that is produced by combining human intelligence with ChatGPT bears new emergent properties that cannot be explained by the properties of the constituent parts.

For example, by using generative AI to combine existing concepts, I (or we, if I continue to anthropomorphize the AI algorithm) can produce new ideas, concepts, or scientific theories at an unprecedented pace. Some of those would be clichés, some mere hallucinations, but others could be quite innovative and meaningful, as perceived by me, the human observer. This ability to generate an array of possible outcomes and then select the meaningful ones is an ability that pertains to the human–machine partnership.

LLMs, like ChatGPT, work by training on a vast amount of textual data and then generating responses by, in effect, predicting the answers most anticipated by their human users. These models do not actually think or have a mind that resembles a biological one in any aspect (Bishop 2021). However, as became evident from the previous dialogue in this article, generative AI algorithms can appear to perform as intelligent entities.
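
As a toy sketch of this mechanism (my illustration: the five-word vocabulary and its probabilities are invented, and no actual OpenAI model is queried), the snippet below picks the most probable continuation and computes the Shannon entropy and per-token surprisal of the distribution. Note that in Shannon’s convention a peaked, predictable distribution has low entropy, whereas Part E of the dialogue invokes entropy in its thermodynamic sense, in which high-entropy states are the most expected ones:

```python
import math

# Toy sketch of next-token prediction: a model assigns probabilities to
# candidate continuations and typically emits the most anticipated one.
# The vocabulary and probabilities below are invented for illustration.
next_token_probs = {
    "research": 0.55,    # highly anticipated continuation
    "methods": 0.25,
    "entropy": 0.12,
    "rhizome": 0.05,     # rare, 'creative' continuation
    "negentropy": 0.03,
}

# Greedy decoding picks the most probable, i.e., best-expected, token.
choice = max(next_token_probs, key=next_token_probs.get)
print(f"greedy choice: {choice}")

# Shannon entropy of the distribution, H = -sum(p * log2 p), quantifies
# how predictable the next token is overall.
H = -sum(p * math.log2(p) for p in next_token_probs.values())
print(f"next-token entropy: {H:.2f} bits")

# Surprisal of each token, -log2 p: rarer continuations carry more
# information precisely because they are less anticipated.
for token, p in next_token_probs.items():
    print(f"{token:>10}: surprisal = {-math.log2(p):.2f} bits")
```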

The creative potential of a human–machine partnership operates at two different levels. The first level pertains to the human side, involving complex emotional and cognitive processes. Dialogue is transformative (Penn and Frankfurt 1994): to verbally communicate my thoughts, I must first engage in a constructive internal dialogue, reflecting on my own feelings and thoughts, recalling, choosing the most appropriate and desired words, doubting, thinking, and questioning. My brain often cannot distinguish between speaking to another human, who processes information semantically, and chatting with ChatGPT, which processes information statistically. As Sætra (2022: 32) argues, the problem is that artificial chatbots often appear to us as if they have feelings (or at least they can be perceived as such, depending on the observer) and ‘these as-if performances are effective enough for us to perceive them as real.’

The second level refers to ChatGPT. In our dialogue, I asked ChatGPT whether its answers were original, and ChatGPT responded:

I can generate new combinations of information and ideas based on my training data, but my responses are not necessarily original or truly creative in the sense that a human might understand creativity. I am simply recombining existing information in new ways to answer questions.

While not original, a specific recombination can still be creative and innovative (insofar as it is perceived and realized as such by a human observer), leading to new insights and understandings. For instance, in Borges’ famous Library of Babel (1998), each new synthesis and combination of words forms a distinct book.
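
A back-of-the-envelope calculation, using the parameters Borges gives in the story (410 pages per book, 40 lines per page, 80 characters per line, 25 orthographic symbols), shows just how vast a purely recombinatorial space can be:

```python
import math

# Borges's Library of Babel: 410 pages per book, 40 lines per page,
# 80 characters per line, drawn from 25 orthographic symbols.
symbols = 25
chars_per_book = 410 * 40 * 80          # 1,312,000 characters per book

# There are 25 ** 1_312_000 distinct books; we only count the decimal
# digits of that number, since the number itself is astronomically large.
digits = math.floor(chars_per_book * math.log10(symbols)) + 1
print(f"distinct books: a number with {digits:,} digits")
# -> about 1.83 million digits: recombination alone spans a space of
#    'new' texts vastly beyond anything humans could ever sift.
```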

Second-level synthesis involves a combination of a human and an artificial entity—a combination of existing combinations. This synergistic process has the potential to further boost creativity and bring us closer to a singularity moment. However, there is an important caveat. As ChatGPT stated in our dialogue:

As an AI language model, my answers are generated based on patterns in the text I have been trained on, and the probability of a particular response is determined based on the input I receive. While I am designed to generate high-quality, informative answers to the best of my ability, the answers I provide can still be limited by the information I have been trained on and the algorithms I use to generate my responses. In terms of creativity, I can generate new combinations of information and ideas based on my training data, but my responses are not necessarily original or truly creative in the sense that a human might understand creativity. I am simply recombining existing information in new ways to answer questions.

With the potential to generate so many new combinations of existing information and ideas, the challenge lies in acquiring valuable and accurate knowledge (meaningful to the user, and not LLMs’ hallucinations) (Rosenbusch et al. 2023), all the while avoiding a situation where humans are unable to distinguish true knowledge from trash, which may already be the case.

Communication engineers use a measure to assess the quality of a communication channel: the signal-to-noise ratio. If important and valuable information is buried in a sea of entropic nonsense, the overall value of the communication diminishes, because a communicating agent also bears the cognitive burden of distinguishing what truly matters. Passmore and Tee (2023: 1) suggest that ‘the challenge for the library user of Babel, and future users of an AI-based Internet, will be sifting the wisdom from the nonsense and discerning the truth from the lies’. In the context of scholarly production, there is an additional risk: that the creative potential of a human–machine partnership, if appropriated in the wrong way, may be channeled ‘to further exacerbate anxieties of publication volume faced by academics amidst a mandate to publish or perish’ (Watermeyer et al. 2023: 12).
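
As a rough numerical illustration of this metaphor (mine, not the cited authors’; the power values are arbitrary), the decibel form of the ratio makes the cost of machine-generated filler tangible: holding the valuable signal fixed while the noise grows tenfold repeatedly drives discernibility down by 10 dB each time:

```python
import math

# Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
def snr_db(signal_power: float, noise_power: float) -> float:
    return 10.0 * math.log10(signal_power / noise_power)

# Hold one unit of valuable 'signal' fixed and let the surrounding
# noise grow tenfold at each step; the powers are arbitrary numbers.
for noise in (1, 10, 100, 1000):
    print(f"noise power {noise:>4}: SNR = {snr_db(1.0, noise):6.1f} dB")
# -> 0, -10, -20, -30 dB: every tenfold increase in entropic nonsense
#    costs the reader another 10 dB of discernibility.
```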

Scholars have emphasized the need to cultivate a new set of digital literacies to be able to navigate social media and digital landscapes effectively and efficiently (Kalantzis and Cope 2023). We now need to add extra literacies, but that will not happen automatically. For example, people now need to be generative AI-literate, understanding the potentials and risks of such technologies, knowing how to use LLMs as learning tools to support their work, and being able to identify LLMs’ hallucinations and acknowledge ethical issues. The famous claim from a few years ago about the existence of digital natives has been proven to be a misleading myth (Bayne and Ross 2007); by extension, there are no AI natives. Self-reflective practices are critical, and our digital future is a choice (Russo 2023), although a choice that is not free of constraints. We need to actively engage in facilitating the future we want (Selwyn et al. 2023). A first step in this direction is to deepen our knowledge of how interactions with generative AI algorithms are experienced by humans and how we can engage in such interactions in a personally meaningful and creative way. My engagement with ChatGPT in this paper serves this purpose.

Regardless of whether today’s LLMs should be considered truly intelligent and creative, at least in the same sense as humans, the critical question now is about the human-artificial synergies. This paper shows that an AI chatbot such as ChatGPT, if used appropriately (in the slow-paced, ethnographic, and contemplative way described in the methodology section), can serve as a cognitive enhancer, a means to magnify human creativity and intelligence. This is in line with the idea of the human mind as constantly being extended into the material world (M. Jones 2023). Therefore, it is possible for humans to produce valuable and original knowledge by using AI as a partner in a meaningful interaction (Schmidt and Loidolt 2023).

LLMs, and ChatGPT in particular, can successfully serve this role, even if we think of them as mere ‘stochastic parrots’ (Bender et al. 2021). However, ChatGPT is more than a stochastic parrot, as it can produce novel content in response to arbitrary questions beyond its training data, following the questions and inputs provided by humans (Adesso 2023; Arkoudas 2023).

So, who holds the ownership of (co)produced knowledge? The question is problematic per se, as it implies that a generative AI algorithm could have a mind and should be considered a physical and/or legal entity. Yet even though LLMs are not living entities and do not have minds, ethical issues still arise. Is it ethical to use this technology to conduct, or to better conduct, academic research or to create course materials? In this context, ‘the very nature of what counts or rather what could or should count as academic work is called into question’ (Watermeyer et al. 2023: 13).

Tigre Moura (2023: 336) calls for a new paradigm that considers ‘the synergetic co-creative relationships between humans and AI as equally genuine as the ones not using them.’ Such questions are critical, as human-artificial synergies would have many unexpected consequences across all domains of human activity and life. Given the definition of a cyborg as ‘a hybrid of organism and technology that augments the organism with capabilities (extended or new)’ (Wells 2014: 7), generative AI can be understood as a cybernetic extension of the human body. In a more far-reaching sense, modern artificial and human intelligence can be thought of as constituent parts of an integrated rhizomatic posthuman intelligence in the becoming (see Fig. 6).

Fig. 6

ChatGPT: ‘One could imagine a future in which human beings and technology are integrated into a decentralized, constantly evolving network of relationships and connections that reaches a state of maximum complexity, consciousness, and unity.’ An image produced with the DALL-E AI platform by OpenAI. The textual prompt used by the author to produce this image was: ‘A vast network of interconnected nodes being humans, robots, and artificial intelligence agents forming a rhizomatic super intelligence, oil painting’

According to Watermeyer et al. (2023: 13), there is the possibility for AI entities to be used ‘to circumvent the need for research collaborations, supporting academics as solo-researchers, and importantly from an outputs perspective, as single-authors.’ If the idea of a postdigital reflective ‘buddy’ to engage with in contemplative dialogue, a postdigital duoethnography, shifts to that of a silo-researcher who does not interact with other humans, this could pose a significant threat to the inherently human relational and dialogical practices and to the potential for a nomadic posthuman intelligence to emerge (as the byproduct of the synergies among the co-active parts, human and non-human alike, participating in a postdigital confluence). Research collaborations are critical for scientific knowledge production, as ‘science involves many people dispersed in space and time whose collective product is greater than the sum of its individual constituents’ (Fuller and Jandrić 2019: 198).

In this article, I argue that integrating AI entities in the process of knowledge production could be an asset, extending the network of available research collaborators (I intentionally continue to anthropomorphize the content produced by generative AI algorithms here) and, therefore, the collective’s potential to demonstrate emergent properties and novelty. However, this will happen only if the integration of AI algorithms does not diminish the role of human collaborators.

How much novelty can humans handle? How much information can we digest without becoming overwhelmed and disorganized by its volume? We already know that information fatigue occurs when the volume of information surpasses an individual’s ability to effectively process and make sense of it (Bawden and Robinson 2020). We live in an era of unprecedented scientific production, where there is ‘more stuff than can be reasonably read’ (Fuller and Jandrić 2019: 200), and AI poses both a solution and a threat. AI agents could help human actors handle and digest the existing literature. However, if, by relying solely on the support of an AI aid, humans come to feel that they no longer need other human collaborators, there is cause for concern.

In a complex systems view, knowledge can be conceptualized as an emergent property ‘from the webs of interconnections between heterogeneous entities, both human and non-human’ although those entities ‘cannot be treated as completely symmetrical for research purposes because of the particular access that we have to accounts of experience from human actors’ (Jones 2018: 51). Though we may not have definitive answers at this point, it is critical to start by asking questions that will facilitate further exploration.

Postdigital Complexity

As AI technologies continue to advance at a rapid pace, humans today engage more and more in human-non-human relational and synergistic entanglements (Pente et al. 2023). In this way, humans gradually become posthuman subjects in a postdigital world (Fig. 6), facing new possibilities as well as unforeseen challenges. Understanding the impact that technology, and especially AI, has on human lived experience, research practice, knowledge production, and education becomes critical. ‘We can safely dispense with the question of whether AI will have an impact; the pertinent questions now are by whom, how, where, and when this positive or negative impact will be felt’ (Floridi et al. 2018: 690).

While AI-assisted research practice and scholarship can catalyze the emergence of new insights and discoveries, as demonstrated in my conversation with ChatGPT, it also raises questions about the validity and reliability, as well as the intellectual ownership, of the data produced (Tigre Moura 2023). This is particularly true regarding the potential for often invisible (intentional or unintentional) systematic biases in the algorithms used to collect and analyze the data and produce the results (Tsamados et al. 2021). It also raises concerns about the risk of creating a culture of individualized knowledge production by exacerbating ‘the toxic-masculinity that has become synonymous with hyper-productive working and may even come to further normalise a culture of overwork’ (Watermeyer et al. 2023: 13). In this context, we need to consider the ethical implications of employing AI technology in research, education, art, and other domains (Christou 2023; Pente et al. 2023). We must somehow ensure that these advancements are used in ways that are considered ethical, promote scientific rigor, and contribute to human and non-human well-being and flourishing (Floridi et al. 2021; Gill 2023).

The need to consider the potential implications of using AI in various domains of human activity becomes evident in the lawsuit that the San Mateo County Board of Education filed against social media companies, which states that:

Using perhaps the most advanced artificial intelligence and machine learning technology available in the world today … purposefully designed their platforms to be addictive and to deliver harmful content to youth. For the youth targeted by social media companies and for the adults charged with their care, the results have been disastrous. Across the country, including in San Mateo County, a youth mental health crisis has exploded. Excessive use of the YouTube, TikTok, and Snap companies’ platforms by children has become ubiquitous. And now there are more children struggling with mental health issues than ever before. Suicide is now the second leading cause of death for youths. There is simply no historic analog to the crisis the nation’s youth are now facing. (San Mateo County Board of Education 2023: 1)

Are these mere technophobic statements from people who fail to realize the complexities of modern societies? Or are we really underestimating the side-effects of AI algorithms on human mental health and well-being? There is an urgent need to devise and integrate a postdigital and posthuman perspective into scientific research and education. This means critically exploring the complex relationship between humans and technology, while acknowledging that the ways in which AI technologies are developed and used are fundamentally transforming all aspects of human lives, raising many ethical dilemmas (Park 2023; Ploug 2023).

We live in an era where the posthuman social subject becomes personally and professionally exhausted while attempting to balance opposing tendencies and devise meaning in a rapidly changing and demanding sociocultural context (Bauman 2005). But, ‘how can human beings, sometimes creative, sometimes tired, and sometimes sick, collaborate with tireless, imperfect but ever-evolving, artificial intelligences?’ (Jandrić 2023b: 2).

In this liquid postdigital reality, we need a shared vision to direct the power of AI for the good of society (Kubzansky 2023). We also need to avoid a use of AI that will ‘intensify performance-based stratification and structurally racist and gendered hierarchies within universities, while also raising important ethical and even litigious questions of responsibility’ (Watermeyer et al. 2023: 13). Digital society is a choice (Russo 2023), so we need to proactively promote our posthuman technodigital well-being through the critical use and integration of AI technologies in knowledge production (Watermeyer et al. 2023). We need to shift from an autopilot mode of living, in which we merely react to each new technology, to a mode of living in which we reflect on our actions and their implications.

Heraclitus argued that no person can ever step into the same river twice, because in every attempt they would be a different person in a different river. Cratylus, a student of Heraclitus, took this idea further: a person cannot step into the same river even once, as both the person and the river change through the act of stepping in (Allan 1954). A human-artificial intelligence partnership is likewise always in the becoming; it is a reciprocal process of determination that fundamentally changes everyone involved. We provide AI systems with their initial programming, and then the artificial intelligences train on their input data to make independent decisions (Jandrić 2020). Simultaneously, human brains are constantly rewiring themselves to adapt to the requirements of their environments, which now include AI systems. Thus, we shape AI at the very moment that AI shapes us.

There is a well-known neuroscience principle related to the brain’s plasticity: ‘neurons that fire together wire together.’ This idea can also be used to describe the interbrain plasticity that takes place in a social learning situation: ‘brains that fire together wire together’ (Shamay-Tsoory 2022: 543). In a postdigital society, where human and artificial intelligence actors coexist in a relational confluence (Gergen 2009), we may assume a similar principle: intelligences that fire together coevolve together and wire together, forming an emergent, posthuman, and rhizomatic superintelligence. But how can we leverage our relationship with AI tools to allow a better future to emerge? ‘Like many times before, it is our responsibility to shape the technology and our coexistence with the technology in ways that correspond to a wider vision of what kind of world we would like to inhabit in the future’ (Peters et al. 2023: 11).
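To make the mechanics behind the slogan concrete, the following is a minimal illustrative sketch, my own construction rather than anything drawn from Shamay-Tsoory (2022), of the classic Hebbian update rule, in which the weight of a connection grows in proportion to the correlated activity of the units it links:

```python
import numpy as np

# A toy Hebbian network: weights between co-active units grow,
# so 'neurons that fire together wire together'.
rng = np.random.default_rng(0)
n_units, n_steps, eta = 4, 1000, 0.01

weights = np.zeros((n_units, n_units))
for _ in range(n_steps):
    activity = rng.integers(0, 2, n_units)         # each unit fires (1) or stays silent (0)
    activity[1] = activity[0]                      # units 0 and 1 always fire together
    weights += eta * np.outer(activity, activity)  # Hebbian update: dw_ij = eta * a_i * a_j
np.fill_diagonal(weights, 0)                       # ignore self-connections

print(weights.round(1))  # the 0-1 weight ends up roughly twice as strong as the rest
```

After enough steps, the connection between the two units that always fire together dominates the weight matrix: correlation alone, with no teacher and no goal, is what does the ‘wiring’.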

This article employs a complex systems view of the human–machine interplay, based on principles such as interdependence, coevolution, group-as-a-whole dynamics, entropy, and dissipative structures (Davis and Sumara 2006). In an era where the postdigital becomes the dominant narrative of our world (Fuller and Jandrić 2019), the human mind will never be the same again. However, it is not a given how human beings are going to coevolve as parts of a complex postdigital confluence. As H. G. Wells said, ‘[c]ivilization is in a race between education and catastrophe’ (Wells in Robinson and Aronica 2015). So, what comes next?

Capra (2003) argues that the network is the very pattern of life. Therefore, insofar as knowledge ‘has a relational character and it does not exist externally and independently’ (Jones 2018: 44), we need somehow to interconnect our individual ‘islands’ to form networks of development, hope, and empowerment, keeping in mind that AI will always have some form of agency. This is a totally new technobiosocial landscape where ‘in some cases, traditional theories will serve us well; other cases require complexity and nuance offered by postdigital theory’ (Jandrić 2023b: 7).

The concept of confluence, as introduced by Gergen (2009), suggests that collaborative co-action is a fundamental process for human meaning-making. Confluence is defined as a process of reciprocal determination among a web of entities. As such, it challenges the traditional paradigm of studying linear cause-effect chains to explain human behavior: ‘Ethnography takes precedence over experimental manipulation. We shift from influence to confluence’ (Gergen 2009: 58). While confluence does not rule out prediction, it promotes a more ethnographic approach, studying the pattern that connects (Bateson 1972) the participating actors in order to estimate the behavior of the parts. Therefore, a postdigital duoethnography focused on facilitating and studying the interaction between a human researcher and a non-human generative AI algorithm produces a different kind of knowledge:

Understanding in terms of confluence is never complete. Unlike the misleading promise of scientific certainty, we must remain humble. This is so, in part, because what we take to be a confluence owes its existence to we who define it as such. One might say that a confluence is essentially ‘an action’ for which our supplement is required in order to bring it into being. (Gergen 2009: 59)

Complex living systems depend on their initial conditions, and the slightest differentiation in those conditions can result in a totally different outcome: the butterfly effect (see the sketch below). Prediction therefore becomes impossible (Mitchell 2009). In this context, the Appreciative Inquiry research framework argues for a bringforthism mindset, in which humans intentionally imagine and visualize their desired futures to help them come true (McAdam and Mirza 2009; Tomm 2020). Humans should first imagine the kind of relationship they wish to have with technology, and then take the actions necessary to create that future. In this direction, this article explores how a postdigital ethnography may help us better understand the evolving affordances of AI technologies and how these affordances impact human life, aiming to co-create a more empowering postdigital narrative of science and education for the future to come.
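The butterfly effect can be made tangible with the logistic map, a standard toy model in the complexity literature; the parameter values below are my own choices for illustration, not taken from Mitchell (2009). Two trajectories whose starting points differ by one part in ten billion become completely uncorrelated within a few dozen steps:

```python
# Sensitivity to initial conditions in the logistic map x -> r * x * (1 - x).
r = 4.0                  # parameter value in the fully chaotic regime
x, y = 0.2, 0.2 + 1e-10  # two starting points, 10^-10 apart

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 20 == 0:
        print(f"step {step}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
# The gap grows from 10^-10 to order 1: beyond a short horizon, no
# finite-precision measurement of the start allows reliable prediction.
```

No matter how precisely the initial state is measured, the error is amplified at every step, which is why complexity science trades point prediction for the study of patterns.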