Preventive mental health care: A complex systems framework for ambient smart environments

We offer a framework for the design and use of Ambient Smart Environments (ASEs) for preventive mental health care support. Drawing from Complex Systems Theory (CST) and 'E' Cognitive Science (ECS), we claim that ASEs have the potential to act in a preventive capacity in support of good mental health, i.e. supporting dynamics that avoid so-called "stuck states" (which are, according to CST, thought generally to underpin forms of psychopathology). Here, we frame our discussion with what has recently been termed the "mind-technology problem". We define and characterise ASE systems, present some examples, and briefly survey some existing theoretical work. After introducing the essential CST terminology, the paper goes on to apply CST to explain developmental adaptation to continuously changing (smart) environments. Understanding the agent's navigation of the ASE in terms of a dynamic geometry between attracting and repelling points (or local minima/local maxima) allows us to develop neurotechnology that can augment clinical interventions by predicting upcoming shifts for good symptomatic outcomes, i.e. when a preventive intervention (i.e. destabilisation) should take place. We further offer clear directions for the development and design of such neurotechnology.


Introduction
Recent work in theoretical cognitive science has identified and characterised the 'mind-technology problem' (Clowes, Gärtner, & Hipólito, 2021). More accurately described as a constellation of associated problems and questions emerging from the increasing prevalence of artificial intelligence, in its most general form it can be formulated as follows: "If the nature of our minds is radically open to technological interaction as we here suggest, then the fact that we are now co-existing with an increasingly varied array of smart machines needs to be factored into a revised self-understanding" (Clowes et al., 2021, p. 19).
Inheriting from Descartes' 'mind-body' problem, the mind-technology problem asks how we can understand the mind and its boundaries in a world where technology is increasingly becoming part of how we think, how we feel, and how we engage with the world and others in it. Addressing questions around our self-understanding in relation to technology seems all the more pressing in cases, such as the one we will be addressing, where technology is being used to significantly change the way we can engage with the world, in ways designed to alter the structure of our experience and to potentially impact our mental health and wellbeing. Especially relevant examples of this kind of technology are the forms of artificial intelligence and machine learning that gather data, learn about their users, and act with impressive degrees of autonomy and flexibility. Consider the ways in which modern social media platforms learn about their users, and then use sophisticated algorithms to restructure the possibilities for engagement and interaction. As these forms of technology take up a greater role in society, acting in more consequential domains, they will have a greater impact on the shape of our society and on our psychological health. Already these technologies play a consequential role in the lives of most of us. It is also important to think about the way we approach such questions: approaching from the perspective of the mind-technology problem allows us to ask these kinds of questions in ways that resist reductive, techno-determinist accounts that depict technology as a unidirectional power, often in overly optimistic or overly pessimistic lights (for criticisms of techno-determinism see: Dafoe, 2015; Vallor, 2016; Orben, 2020). The standpoint of the mind-technology problem puts the onus on our agency as the ones who design and engineer technology, asking us to resist the temptation to see technology as an unassailable alien force, and to instead engage our own agency and imagination.
In this paper we start to think about these questions in regard to ambient smart environments (ASEs). We address the remarkable shift that ASEs represent in how technology will be used, and the capacity of influence that they will have over their users, arguing that from the standpoint of the mind-technology problem, such technology presents unique and very powerful characteristics. Briefly, ASEs are systems made up of multiple, networked smart technologies that permeate a particular environment like a home or a work office. These systems operate in the background, characterised by automaticity, anticipatory dynamics, and lack of interface. We'll say more about these systems and offer several examples in Section One. We present an account of ASEs as having a unique potential, able to intervene on and support particular behavioural routines and habits in a way that moves beyond more "traditional" forms of technology. This potential, we argue, means that ASEs are well placed to offer a preventative bulwark against the sort of suboptimal behavioural routines that very often characterise a deterioration in mental health and general wellbeing. To develop this picture, we draw on the conceptual and theoretical toolkits of Complex Systems Theory (CST) and 'E' cognitive science (ECS), which allow us to visualise an agent's behaviours as a state space composed of attractors and repellers: a behavioural topography featuring various peaks and troughs that an agent is less or more likely to visit as they enact their world.
In Section One, we introduce the necessary key terminology, providing a complete definition of ASEs and examples of how they might function in the real world. Also in Section One, we outline the fundamentals of CST and ECS, defining the core theoretical tools that we'll apply in characterising ASEs and their supporting role in behavioural dynamics. In Section Two we unpack the potential of ASEs using CST, using examples to illustrate their role in shaping behaviour, and emphasise their potential to provide highly individualised behavioural support systems. CST itself is a framework that emphasises a focus, in therapeutic settings, on the specific 'trajectories' of an individual, in order to predict opportunities for the most effective interventions (Hayes & Andrews, 2020), i.e., to augment interventions by predicting upcoming shifts in symptom structures for good symptomatic outcomes (de Felice, 2014; Jamalabadi et al., 2022). In Section Three we introduce an interesting and important conceptual tension between uncertainty and trust. Very briefly, this tension emerges from the necessity of developing technology that we can trust (in other words, that we are able to predict and happily endorse), and the therapeutic potential we identify in technology in virtue of its capacity to disrupt stuck states and (pathological) routines through introducing uncertainty and surprise. In other words, at least prima facie, it seems like we want technology that is both predictable and unpredictable, and navigating this apparent contradiction will be a key question in understanding how ASEs can be a force for good in terms of human wellbeing.

Ambient smart technology
The term 'smart' was originally an acronym standing for "Self-Monitoring, Analysis and Reporting Technology", but smart technology is now defined by its capacity to monitor and track its users through a process of data gathering, pattern recognition, and profile construction. Smart technology monitors the behaviour of users: their likes, interests, and regularities, and uses this information in a variety of ways. This often involves the use of sophisticated algorithms that can perform functions such as governing how content on a particular platform will be made available, based on how the user has interacted with content in the past (Lazer et al., 2021; Anderson, 2022). A concrete example of this is Facebook's 'EdgeRank' algorithm (for a detailed breakdown of how EdgeRank has been deployed in the past, see Bucher (2012)). Smart technology is generally characterised by a degree of autonomy, an ability to learn and adapt, and by its potential for connectivity with other devices (the Internet of Things) or to the internet. Examples of smart technology that are familiar to most of us include smartphones and smartwatches, which make use of a range of applications to perform different functions, and home smart speakers that obey voice commands and can connect to the internet to provide information, perform simple tasks like setting reminders, or read us the news headlines in the morning. Nevertheless, smart technology is generally reliant on a user's intentional engagement through a well-defined interface; to use the apps on my smartphone, for example, I still need to pull it out of my pocket, unlock it, and jab and swipe at it with my finger, intentionally engaging the features that I deem to be relevant at any given time.
Ambient smart technology, on the other hand, is technology that has receded into the background, eliminating a point of deliberate and intentional use through a clearly defined user interface (e.g., by intentionally prodding a smartphone screen). The primary design objective of such systems is to make the technology largely invisible, operating quietly and automatically, making interventions when needed. This means that the technology is often embedded within everyday things like fridges, diaries, tables, even floor tiles, paint, and clothes (Heylen, 2012). One easy example of an ambient smart technology is a refrigerator that, using cameras and a connection to our online supermarket account, monitors and tracks the things we buy, and then begins to automate food orders and suggest recipes. There are examples of ambient technology already at work in most of our lives, such as key fobs that automatically open doors as we approach, and browser windows that automatically stop playing video when we switch to another screen. Ambient smart environments (ASEs) are material environments, like homes or offices, that are infused with multiple, connected, ambient smart technologies of this kind, all working together in a quiet and automatic way.
A likely scenario in the near future is one in which people increasingly find themselves inhabiting ASEs -fully smart homes and offices marked by an abundance of networked ambient smart technology, working quietly and constantly to learn, monitor, predict, and adapt to the needs of the occupant.An ASE will gather information from laptops, refrigerators, work planners, toilets, sinks, treadmills, entertainment systems, and so on, until it is able to build a picture of behavioural regularities and causal relationships so subtle and complex, that they might never have been accessible or identifiable to even the most perceptive or self-reflective human agents alone.Building on the existing smart fridge, mentioned above, it is easy to imagine a near-future scenario wherein a smart bathroom will weigh us without us intentionally using scales, will monitor body composition, monitor our skin for potentially concerning abnormalities, and our toothbrush will monitor our gum health.Such information could then, if we allowed it to, be shared with the technology in the kitchen to influence the groceries that are bought in, or with our personal trainer, who can then use it to craft a personalised workout plan.In other words, ASEs will be capable of seeing and monitoring patterns in behaviour that humans can't see.And on the basis of this learning, the ASE is then well placed to begin making highly individualised and targeted interventions.
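The kind of background pattern-learning described here can be sketched in a few lines. The following Python toy is purely illustrative, not a proposal for a real system: the device metrics, the rolling-window size, and the z-score threshold are all our own assumptions. It shows how a hub might quietly fuse readings from networked devices into per-metric baselines and flag departures, without any user-facing interface.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class AmbientHub:
    """Toy fusion hub for an ambient smart environment (ASE).

    Each networked device pushes readings in the background; the hub
    maintains a rolling baseline per metric and flags strong departures
    from it. No user interaction is involved at any point.
    """

    def __init__(self, window=30, z_threshold=2.0):
        self.z_threshold = z_threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def push(self, metric, value):
        """Record a reading; return an alert string if it deviates strongly."""
        readings = self.history[metric]
        alert = None
        if len(readings) >= 5:  # need a minimal baseline before judging
            mu, sigma = mean(readings), stdev(readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                alert = f"{metric}: {value} deviates from baseline {mu:.1f}"
        readings.append(value)
        return alert

hub = AmbientHub()
for night in [7.5, 7.2, 7.8, 7.4, 7.6, 7.5]:
    hub.push("sleep_hours", night)   # quiet learning phase: no alerts
print(hub.push("sleep_hours", 4.0))  # a sharp drop triggers an alert
```

A real ASE would of course fuse many such streams and model their interactions; the point of the sketch is only that the monitoring loop itself is invisible to the occupant.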
Smart technology has become a topic of recent work in philosophy of cognitive science (Hansain, Gaur, & Shukla, 2021; Singh & Kumar, 2021; Bruineberg & Fabry, 2022; Andersen, Aflalo, Bashford, Bjånes, & Kellis, 2022), while subjective wellbeing is now a topic of interest in computational cognitive science (Miller, Kiverstein, & Rietveld, 2022; Smith et al., 2022). Understanding the interaction between the two, between smart technology and wellbeing, has been a driving question behind numerous empirical studies, but as Amy Orben has noted explicitly, the lack of a coherent theoretical framework for studying human-technology interaction in this context has proved troubling (Orben, 2020).
While no consistent framework for studying technology's impact on wellbeing has been agreed upon, Orben (2020) suggests that a good candidate framework is the language of 'affordances'. Although not yet widely used in discussions about wellbeing and technology specifically, the conceptual toolkit of affordances has proven useful for work on human-technology interaction and design (e.g., Norman, 2013). The concept of an affordance was introduced by J.J. Gibson in his work on ecological psychology (Gibson, 1979), to describe how organisms perceive objects and features of the environment in a fully action-orientated way. Affordances are perceivable opportunities for action emerging from a relationship between the material structure of the environment and the embodied abilities and learned skills of the agent. They are, therefore, neither purely objective nor purely subjective (Gibson, 1979), emerging from "animal-environment" systems, belonging to neither the organism nor the environment alone (Chemero, 2003). For example, for most humans, door handles afford grasping, chairs afford sitting, and wine glasses afford pouring and drinking. Rock faces afford climbing not to most humans, but to the select few who have developed a particular set of skills.
Thinking in terms of affordances has proved useful as a framework for thinking about human-technology interaction and design, primarily because it shifts the focus from what a technology is to what that technology allows us to do; from a standpoint that potentially sees technology as monolithic, to a stance which focuses on skilled action and behavioural dynamics (see e.g., Norman, 2013; Evans, Pearce, Vitak, & Treem, 2017; Orben, 2020). However, as recent work on smart technology and wellbeing has pointed out, the language of affordances may be less well suited to capturing our interactions with ASEs (White & Miller, 2023). Indeed, when the primary design strategy for a system is to have the users do as little as possible, to not even know that the technology is there, we might have to think again about the best theoretical framework for analysing ASEs. In order to better characterise the role ASEs could play, White and Miller draw on recent work on affordances that strengthens the framework by distinguishing between an agent's field of affordances and an agent's landscape of affordances (see Bruineberg & Rietveld, 2014; Rietveld & Kiverstein, 2014; Bruineberg, Chemero, & Rietveld, 2019; Bruineberg, Seifert, & Rietveld, 2021). In short, a landscape of affordances is all of the affordances that exist within an agent's broad ecological niche, e.g., all of the affordances that exist within the city in which the agent currently resides. A landscape of affordances is broad and likely to change only on relatively slow timescales (although this isn't necessarily the case). My landscape of affordances right now is the city of Brighton, and unless I find myself jumping on a plane, what the city offers me in terms of my capacity to do things is unlikely to change rapidly. A field of affordances, on the other hand, refers to those affordances that are local, inviting, and that stand out as relevant and useful at that particular time (Withagen, Araújo, & de Poel, 2017). An agent's field of
affordances will shift on potentially rapid timescales and will be more or less stable or volatile depending on the circumstances in which the agent finds themselves (Bruineberg & Rietveld, 2014). Right now, my field of affordances includes those opportunities to act that stand out and exert a "pull" over me: my computer and keyboard, or the coffee mug next to me, for example. However, were the fire alarm to suddenly sound, my field of affordances would likely shift, with me being drawn out to the corridor to investigate and potentially using the fire exit to evacuate. This is a powerful distinction, and it speaks to the inherent affectivity and normativity that characterises the phenomenology of actually engaging with our worlds.
Using this distinction, White and Miller (2023) suggest that ASEs are best understood as a meta-affordance, operating upstream to sculpt an agent's field of affordances. In this account, the authors introduce the term 'meta-affordance' to describe a system that acts to alter and shape an agent's field of affordances, although a meta-affordance isn't to be thought of as an affordance in and of itself. So, according to this account, the question to ask regarding the role of ASEs isn't what they allow us to do, but how they shape the available opportunities for doing, and how this in turn influences what an agent is more or less likely to choose to do. White and Miller go on to argue that this role as meta-affordance highlights how ASEs can work as a form of distributed metacognition, through a targeted and strategic restructuring of available affordances in a material environment to influence an agent's attention and awareness, ultimately to better support the optimal control of action. The claim that metacognition can be a process distributed across features of the physical environment might seem unintuitive, but the identification of these interventions as forms of metacognition has precedent in existing work arguing that metacognitive processing is much more of a scaffolded and distributed phenomenon than is generally taken to be the case, and that metacognitive processes can be supported or degraded through practical design decisions regarding how affordances are structured (Kirsh, 2004). For Kirsh, the relationship between cognition and metacognition is one of a "continuum", rather than of two distinct processes, and while ensuring that the affordances in an environment are organised optimally can scaffold task performance itself, it can also impact the way that we monitor and repair the base-level task-relevant cognitive processes. For example, in describing this within the context of learning environments, Kirsh notes how the placement of a clock and
the ability to flick ahead in a book to see how many pages we have left to read relative to the time available are both abilities reliant on structures in the material environment that clearly support metacognitive processing. Working within an ASE would provide more opportunities like this. For instance, an ASE could monitor things throughout a workday, learning the patterns that reflect when we get tired, when we're most likely to be distracted, or stressed, and then provide reports or make suggestions for changes to our workflow. Indeed, work by Heath and Anderson shows how our capacity to sustain the willpower to stick with a task is itself a distributed process (Heath & Anderson, 2010). By carefully monitoring patterns in data and then presenting affordances in response, ASEs can perform metacognitive functions such as guiding our attention back to an important task, reminding us to take regular breaks during intense work periods, or offering feedback on the kinds of regularities that result in us feeling overly stressed and anxious.
Understanding ASEs as functioning in this way is to understand ASEs as having the potential to support long-term patterns of optimal behaviour, and to help us avoid falling into patterns, entrenched through a lack of control and awareness, that lead to us experiencing a deterioration in subjective wellbeing (Miller et al., 2022). This, to be clear, is precisely the potential for preventive mental health care that we will unpack in this paper. This means that ASEs can be understood as directly supporting efficient allostatic control, which is essentially the avoidance of prolonged states of stress through optimal anticipatory action. Efficient allostatic control is exhibited by an agent who effectively adapts cognition and behaviour to meet external demands and perturbations through time, in service of maintaining homeostatic states (Barrett, Quigley, & Hamilton, 2016; McEwen & Wingfield, 2003; McEwen, 1998; Sterling & Eyer, 1988). Human agents are capable of maintaining allostatic control through sophisticated and nested action policies planned over impressive timescales of years and decades. For example, if I'm setting out on a weekend of camping and hiking, I'll pack waterproof and cold weather gear, even if the forecast is good, just in case the weather takes an unexpected turn. Similarly, I might pay a small amount of money each week into a savings account I will not access for a decade, thus increasing the likelihood of financial stability in the future. The suggestion made by White and Miller (2023) is that ASEs can support this kind of planning and action for allostatic control by making interventions that either support or suggest optimal changes, or by actively disrupting routines that have become ineffective or too entrenched and inflexible.
It's worth noting that the claim that ASEs will be able to identify suboptimal behavioural patterns in such a way as to be able to effectively intervene speaks to a set of technical challenges, and it is worth lingering for a moment to highlight these. As we see it, there are two components that an ASE could employ to assess an individual's behaviour as in some sense 'suboptimal'. The first component is simply the observed behaviour of the user, but this raises a difficult contextualisation challenge: how will the system know which behaviours are problematic in which context? One obvious solution is to explicitly tell the system what optimal looks like. For instance, I could tell the ASE that my goal for the new year is to consistently eat healthy, non-processed food, and to exercise regularly; given this explicit information, the system would then have a grounding for modelling the hidden causal regularities that might help or hinder me, such as noticing that the likelihood of me working out is inversely correlated with some obscure variable that I myself might be unlikely to notice. However, a more detailed answer to this problem emerges in the next section on Complex Systems Theory: in short, the ASE will be trying to identify larger-scale patterns that display a lack of flexibility, rather than any specific behaviours in the short term. The second component the system will use in actually identifying areas for intervention is wearable technology that communicates useful physiological data. Here, however, lies a technical challenge concerning how physiological data can be translated into a schema that the system can actually use to identify a user's needs.
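The sort of hidden regularity just described, e.g., workout likelihood tracking some unnoticed variable, is in the simplest case just a correlation the system can compute once it has a behavioural log. A minimal Python sketch, with entirely made-up numbers (evening screen minutes standing in for whatever obscure variable the ASE might surface, and next-day workouts coded 1/0):

```python
# Hypothetical logs the ASE might accumulate: evening screen minutes, and
# whether a workout happened the following day (1 = yes, 0 = no).
screen_minutes = [30, 45, 120, 150, 20, 160, 35, 140]
worked_out     = [ 1,  1,   0,   0,  1,   0,  1,   0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(screen_minutes, worked_out)
print(f"correlation: {r:.2f}")  # strongly negative in this toy data
```

A real system would need far more than pairwise correlation (confounders, lags, context), but the sketch shows the minimal grounding an explicit goal gives the model: a target variable against which other logged variables can be screened.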
In the work by White and Miller (2023), this capacity to support and disrupt is cast in the terms of the Skilled Intentionality and Active Inference frameworks, focusing on the way agents weight their beliefs in order to best minimise uncertainty. The connection to wellbeing here is clear and explicit: the ongoing state of failing to maintain allostatic control takes a terrible toll on an organism's biological machinery (known as allostatic load), leading to a range of health problems (see for example: Seeman, Singer, Rowe, Horwitz, & McEwen, 1997; Koob & Le Moal, 2001; Barrett et al., 2016; Guidi, Lucente, Sonino, & Fava, 2021; Arnaldo, Corcoran, Friston, & Ramstead, 2022; Rabey & Maloney, 2022). In this paper, we maintain a clear connection between this capacity to manage allostatic load and the notion of preventive mental health, in the sense that allostatic load is a measure of stressors that lead to a general decline in wellbeing.
This picture of ASEs as a meta-affordance for optimised metacognitive control of action, facilitating smoother coping in maintaining allostatic control, is a fruitful theoretical move that brings the potential of ASEs into focus. However, in this work we show how an application of concepts from CST can augment this theoretical approach and help accentuate the potential strengths of ASEs in supporting good psychological health. In the following subsection we introduce the necessary terminology of CST, and then go on to offer a CST-based picture that emphasises the potential ASEs have to intervene on behavioural patterns that have become, in some sense, suboptimal. Whereas White and Miller's (2023) account is focused more on ASEs' capacity to scaffold cognitive processes, applying a CST perspective will highlight the importance of breaking up some behavioural patterns in service of overall wellbeing.

Complex systems theory
To provide the essential terminology and context for the discussion in Section Two, we now introduce the fundamentals of Complex Systems Theory (CST). CST itself is a highly interdisciplinary research program that aims to properly characterise the sets of interactions within systems that give rise to larger-scale emergent properties: for example, how interactions between individuals and businesses give rise to the dynamics of the global economy and societies, or how the interactions between neurons and neural regions give rise to psychological phenomena. One primary goal of CST is to understand how paradigmatic shifts arise in complex systems through multiscale, continuous, and nonlinear processes of self-organisation, without external intervention. In the specific context of studying wellbeing and human-technology interaction, CST offers several benefits. First, CST allows us to focus our analysis on the trajectory of an individual, rather than speaking more generally about symptoms of psychopathology; this focus on individual trajectory is something that the literature on technology as therapeutic intervention has identified as particularly important (see e.g., Sharmin et al., 2018). Second, CST provides a way of emphasising the importance of agent-environment coupling by describing the way behaviours influence the overall trajectory of an individual. CST can also encompass a multilevel analysis that includes social, biological, psychological, and environmental factors. Taken together, these strengths mean that CST can facilitate a significant reconceptualization of what it means to effectively intervene on instances of psychopathology (Hayes & Andrews, 2020).
A complex system is one in which many low-scale, simpler component parts give rise to, without any central command unit or overseer, a larger-scale whole that displays emergent properties, like (within the context of cognitive science) apparent intelligence and purpose, collective decision making, and concerted, adaptive, goal-directed behaviour making use of larger-scale patterns of information (Mitchell, 2009). In complex systems these properties emerge as a result of a web of multi-level interactions and contingencies, both between the internal parts of the system and with forces external to the system, feedback loops, and nonlinearity (Hipólito, Rosas, & Carhart-Harris, 2022). A paradigmatic example of a complex system is an ant colony (Gordon, 2010; Imtiaz et al., 2021; Moffett et al., 2021; Wheeler, 1911). It is the colony as a whole, as a kind of superorganism, that displays predictable and adaptive behaviour, not the individual ants. Colonies behave as "unitary wholes", resisting entropy and dissipation, maintaining structure through time, reacting as a single unit to external stimuli, and even displaying a degree of idiosyncrasy in their behavioural patterns. According to complex systems theory, these behaviours of the colony are said to emerge from a "shifting web of interactions, in which the pattern of interactions is more important than the content" (Gordon, 2010). In other words, what gives rise to the emergent behaviours of the colony is not the specific content of information carried or passed on by and between the individual ants, but rather the unfolding of their interactions over time, including regulation by changing environmental conditions, interactions between individuals, and physiological variation between individuals. In the case of the ant colony these interactions between individuals are often tactile or consist in the accumulation of pheromone trails that signal to individuals which paths have already been taken (Friedman, Tschantz, Ramstead, Friston, & Constant, 2021).
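The claim that the pattern of interactions matters more than their content can be illustrated with a deterministic toy model of pheromone reinforcement (a standard idealisation; the evaporation rate, deposit rule, and route lengths below are arbitrary assumptions of ours). No individual rule mentions "shorter route", yet a colony-level preference for it emerges from evaporation plus use-proportional reinforcement alone:

```python
# Two routes to a food source. Each round, ants distribute over routes in
# proportion to pheromone; pheromone evaporates, and ants on a route deposit
# pheromone at a rate inversely proportional to the route's length (shorter
# trips mean more frequent deposits).
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}  # start with no preference

for _ in range(50):
    total = sum(pheromone.values())
    for route, length in lengths.items():
        share = pheromone[route] / total          # fraction of ants on route
        pheromone[route] = 0.9 * pheromone[route] + share / length

print(max(pheromone, key=pheromone.get))          # colony "chooses" short
```

The symmetry between the routes is broken purely by the dynamics of interaction over time, which is the sense in which the colony's behaviour is emergent rather than encoded in any individual.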
Complex systems are inherently difficult to model due to their numerous interdependencies, interactions, contingencies, and competing tendencies, both within the system and in the system's exchanges with the environment. In CST, this is captured by the notion of degrees of freedom: the complexity of a system increases with its degrees of freedom. Put differently, the complexity of a system increases with the difficulty of computationally simulating and predicting its behaviour. An intuitive example is that the behaviour of a pendulum is much easier to model and predict than the behaviour of an ant colony. The crucial concept for understanding degrees of freedom is enaction. Living complex systems, through self-organising dynamics, enact a constant process of resistance against natural forces that tend the system toward dissipation, and systems that enact their environment in this way are much harder to model and, thereby, predict. In other words, complex living systems resist the second law of thermodynamics (Hipólito et al., 2022). (Complex Systems Theory is a mathematical framework for modelling target phenomena in nature, and as such we construe it in a non-realist way: we simply take it that the conceptual tools of attractors, repellers, and of an individual's dynamic geometry are illuminating for our purposes, in that they provide a way to describe the overall behaviour of an agent.) The ability of a system to resist dissipation, maintaining its cohesion in the face of perturbation, is achieved by regularly visiting a set of viable states, known as attractor states (Ladyman, Lambert, & Wiesner, 2013). Of a system's entire possible state space, those states that allow the system to reliably avoid dissipation will become attractor states: states that the system comes to occupy repeatedly and is drawn to in order to maintain its processes of self-regulation. Attractor states are the result of self-organisation over time and characterise the set of constraints that keep the system within a viable state space (Hayes & Andrews, 2020). The competing tendencies, mentioned above, are characterised by a system having multiple attractor states. Indeed, systems can have any number of competing attractor states, and those attractors can be of varying strengths. The stability of a system can be seen as a function of the number and strength of the attractors it has: a system with one, or just a few, strong attractors will be relatively stable and predictable, while a system with a number of weak attractors will be unstable and difficult to predict (Cilliers, 2002). In order to have optimal chances for viable action (actions that keep the system tending away from dissolution), a system will try to optimise the dynamics between a number of different attractors.
Unstable systems (those with only weak attractors), which are chaotic, won't last long, as they lack the robustness to resist external or internal perturbations. However, neither will systems whose behaviour is governed by a too-limited set of overly strong attractors, as these systems will lack the flexibility needed to adapt to changing sets of demands (like a significant change in the environment). Optimal complex systems are ones that exhibit both robustness and flexibility, each realised through a delicate balance in the number and strength of their attractor states (Cilliers, 2002; Lambiotte, Rosvall, & Scholtes, 2019). The notion of destabilisation, which will play a key role in this paper, pertains to attractor states. Destabilisation occurs when a system is purposefully shifted out of a specific attractor state, either by external perturbation or by a competing set of internal dynamics.
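Attractors, repellers, and destabilisation can be made concrete with the textbook double-well system. The Python sketch below is a standard idealisation, not a model of any real behaviour: it relaxes a state down the potential V(x) = (x² − 1)², which has attractors at x = ±1 and a repeller at x = 0. A perturbation large enough to carry the state past the repeller is exactly a destabilisation, landing the system in the other basin.

```python
def grad_V(x):
    """Gradient of the double-well potential V(x) = (x^2 - 1)^2,
    with attractors at x = -1 and x = +1 and a repeller at x = 0."""
    return 4 * x * (x ** 2 - 1)

def settle(x, steps=2000, dt=0.01):
    """Relax the state along -grad V until it reaches an attractor."""
    for _ in range(steps):
        x -= dt * grad_V(x)
    return x

x = settle(0.3)        # starts in the right-hand basin
print(round(x, 3))     # -> 1.0  (settled into a "stuck" state)

x = settle(x - 2.2)    # a destabilising kick past the repeller at 0
print(round(x, 3))     # -> -1.0 (a different behavioural pattern)
```

Small perturbations that leave the state on the same side of the repeller are simply absorbed (the system settles back to the same attractor), which is the robustness side of the robustness/flexibility balance described above.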

Complex systems theory and preventive mental health care
The interaction between humans and ASEs can be understood in terms of a dynamical geometry. A dynamical geometry comprises a specific arrangement of fixed points: attracting and repelling points (or local minima/local maxima). Understanding the dynamic geometry allows us to develop neurotechnology that can predict, on the level of an individual trajectory (through a state space), when a preventive intervention (i.e., destabilisation) should take place. CST (coupled with ambient smart technology), this section will suggest, can be employed in individual or personalised therapeutic strategies for preventive mental health care. We further offer clear directions for the development and design of such neurotechnology. This section offers a framework to understand how an agent interacts with ASEs and the extent to which ASEs contribute to a developmental trajectory with the intent of preventive mental health care.
Drawing from E-Cognitive Science (ECS), we view agents, i.e., adaptive systems, as coupled with a continuously changing environment (Maturana & Varela, 1991; Hutto & Myin, 2012, 2017; de Haan, 2020; Heras-Escribano, 2021; Maiese, 2021, 2022). An adaptive system is a system that is flexible in response to changing conditions while at the same time able to maintain functional integrity in the face of perturbation (Hayes & Andrews, 2020; Kelso, 2012). Adaptability can be measured, for example, by a balanced reciprocal interaction between the states of the system and the states of the system's environment. This reciprocal influence is fundamental because both the system and the environment are continuously changing, and the coupled reciprocal influence between them plays a fundamental role in the system's states. The formalisms for reciprocal influences have been developed under the Free Energy Principle, which we illustrate below in Fig. 1.
Health, in the context of CST, can be understood through the concept of multistability, which can itself be understood as a dynamic topology of varied fixed points: e.g., a set of multiple activities or states, some of which are attracting and others repelling. There is no objective multistability topology, but a continuous unfolding of states whose normativity stems from the ways in which the subject is coupled with the environment (Fig. 1). Normativity here refers to the fact that some couplings with the environment will be better or worse for the agent. For example, if the temperature in the room begins to rise, the rising body temperature of the agent will eventually see the agent aim to occupy an alternative state, either by moving to a different location or by reducing the temperature of the room. For human agents, this normativity is experienced affectively; recent work under the FEP hypothesises that our felt bodily affective states track changes in the discrepancy between the actual state and expectations (see, e.g., Van de Cruys, 2017; Kiverstein, Miller, & Rietveld, 2020). A multistable topology means that an adaptive system (such as a human agent) is able to maintain health through a robust flexibility in multiple functional patterns that it can switch between, in order to adjust and attune to the demands of the environment (external states). These multiple functional patterns are neuropsychological adaptations that create more sustainable states with which to respond to stressors in the environment (Feudel & Grebogi, 1997; Laurent & Kellershohn, 1999; Pisarchik & Feudel, 2014; Leszczyński, Perlikowski, Burzyński, Kowalski, & Brzeski, 2022). A system is in a healthy state if that state is stable enough to respond to stressors in the environment: it can flexibly switch between patterns and thereby adjust the coupling between internal and external states in a balanced manner (De Figueiredo Barroso, 2020; Wang, Kuznetsov, & Chen, 2021).
Interestingly, multistability bears some conceptual similarity to 'metastability' in dynamical systems theory. Metastability describes a system with the capacity to spontaneously navigate between competing tendencies within a state space, exhibiting a quasi-independent autonomy (Kelso, 2012). In other words, metastable systems are always poised to self-organise, to break away from one set of attractors and reorganise around a different set. Metastability thus speaks to a kind of longer-term transience, whereby the system shifts without prompting from one attractor landscape to another. From a computational E-Cognition perspective, metastable dynamics have been used as an explanatory tool for describing how human agents remain sensitive to the particular demands of a given situation while always remaining poised to shift behaviours when the context changes (Bruineberg et al., 2021). Moreover, metastability has been posited as a necessary condition for long-term subjective wellbeing, or flourishing, in human agents (Miller et al., 2022). Metastability, it is argued, allows a system a certain degree of balance when it comes to pragmatic and epistemic action policies, which in turn underpins an ability to manage concerns over multiple timescales.
Recall that multistability, on the other hand, describes a system characterised by multiple sets of concurrently stable attractors, one that is able to "lock in to one of several available patterns" and to return to a stable set after some perturbation (Kelso, 2012). The difference between metastability and multistability, however, is subtle, and within the context of an agent in the environment, both are indicators of likely good overall health. In a scenario where an initially healthy individual faces a significant life event, the loss of a loved one, for example, metastability involves stabilising within certain boundaries but potentially spontaneously transitioning to other stable states under specific conditions. In this case, the person experiences the disturbance and initially feels the emotions associated with profound loss. Over time, they "naturally" transition away from that state to a stable alternative, possibly returning to their previous healthy state or finding a new equilibrium. Multistability would characterise experiencing a coexistence of different emotional states simultaneously. The individual may feel moments of joy, contentment, or normalcy alongside their feelings of sadness, grief, or shock. This may be the result of multiple available attractors, such as counselling, good friends to lean on, time alone listening to music, etc.
Responding through multistable dynamics requires navigating a complex mix of emotions, potentially with the help of external support or therapeutic interventions. Roughly speaking, multistability as a healthy response can be characterised as being exhibited on potentially shorter timescales; it is about having multiple attractors during a specific timeframe or event, rather than an overall tendency to shift over time. Neither metastability nor multistability is inherently healthier or more desirable on its own. The appropriateness of the response depends on the individual's unique circumstances, the nature of the disturbance, and their capacity to adapt and recover. The goal in both cases is to facilitate a transition towards a stable and healthy emotional state, whether resembling the pre-disturbance state or finding a new equilibrium. Ultimately, assessing the healthier response means considering the individual's overall well-being, resilience, and ability to adapt in the face of significant disturbances, over both short-term and long-term horizons.
In the state space of all possible states a system can occupy, the system takes a specific trajectory (which can be seen as a moment-to-moment history of the states the system has visited or occupied). The trajectory of an adaptive system will be externally disturbed (Liu & Liu, 2012; Schoon & Cox, 2012; Johnson & Miyanishi, 2021), i.e. disturbed by the environmental states (Fig. 1). Along its trajectory, the system will be disturbed by external conditions, which will trigger the system to adapt given the specific arrangement of the dynamic geometry mapping or contextualising its present state. The present state is contextualised in a network of attractors and repellers that is idiosyncratic to, and navigated by, the system. The way in which the system adjusts to environmental disturbances varies according to the present dynamic geometry and can even trigger a change in the topology of that geometry, for example through the emergence of a new attractor point or the reinforcement or weakening of an existing one (bifurcations).
When a system is disturbed, three main behavioural possibilities arise. (1) Multistability: the system is stable enough to flexibly adjust to stressors. This is also known in psychology as the capacity to "bounce back", which in CST is described as the fixed point "pulling back". (2) Bifurcation: a qualitative or quantitative change of the dynamic geometry, whereby, as a function of an external parameter, the nature of the system's long-time limiting behaviour (its fixed points or limit cycles) changes. In short, the environmental disturbance triggers the emergence of novel fixed points or qualitatively alters the strength of a fixed point. (3) Stuck state: the system develops cycle attractors, e.g., becoming stuck in periodic shifts between being melancholic and euphoric, such as occurs in bipolar disorder (for illustration of these states as attractors, see Fig. 2); or a fixed point, as in, for example, clinical depression, starts exerting a stronger attracting influence. According to the dynamical rule, the more the system visits an attractor, the more stable the attractor will become (the more pull it will exert). An overly stable attractor can trigger mental health issues, as it antagonises the desirable multistability of (1).
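The notion of a bifurcation in (2), the emergence of novel fixed points as an external parameter varies, can be illustrated with the textbook pitchfork system dx/dt = r*x - x**3. This is our illustrative sketch, not the authors' formalism; the parameter values are arbitrary.

```python
def fixed_points(r):
    """Attracting fixed points of dx/dt = r*x - x**3, found by
    integrating from a few initial conditions until convergence."""
    attractors = set()
    for x in (-2.0, -0.5, 0.5, 2.0):
        for _ in range(20000):
            x += (r * x - x**3) * 0.001
        attractors.add(round(x, 2) + 0.0)  # + 0.0 normalises -0.0 to 0.0
    return sorted(attractors)

# Below the bifurcation (r < 0) the geometry has a single attractor at 0;
# above it (r > 0), x = 0 becomes a repeller and two new attractors emerge
# at +/- sqrt(r): a qualitative change in the dynamic geometry.
print(fixed_points(-1.0))  # → [0.0]
print(fixed_points(1.0))   # → [-1.0, 1.0]
```

The same qualitative picture is what the paper has in mind when an environmental disturbance creates, strengthens, or weakens fixed points in an agent's dynamic geometry.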
These overly rigid stuck states can, from a psychological perspective, be described as a narrower form of being, i.e., reduced freedom and diversity in action, thinking, or goals. This characterisation of psychopathology has clear conceptual links with work on depression and addiction that draws on the idea of agents with these conditions seeking the perceived safety of a "dark room". The "dark room problem" (Friston, Thornton, & Clark, 2012; Baltieri & Buckley, 2019) is presented as a challenge to the Free Energy Principle. Under the Free Energy Principle, agents, it is argued, should find the simplest and most predictable state where prediction error can be minimised, i.e., a metaphorical "dark room" with minimal input to predict, and yet, typically, this is not a strategy we adopt. Various answers have successfully been given to the dark room problem, but the notion of a dark room as a hyper-avoidant state is used in the context of psychopathology. For example, the Free Energy Principle has seen accounts of depression wherein symptoms of the condition have been characterised as a "dark room". On these accounts, the social withdrawal and general narrowing of possibilities that are characteristic of depression are presented as an adaptive strategy in the face of a perceived inability to reduce prediction errors effectively (Friston et al., 2012; Sims, 2017; Badcock, Davey, …).

Fig. 1. According to the Free Energy Principle, organisms and environment are coupled. The organism corresponds to internal states (μ) and its local environment corresponds to external states (η). Internal and external states are mathematically conditionally independent of one another. This means that the coupling between them depends upon another set of states, called blanket states. Blanket states comprise the sensory states (s), which correspond to the environmental states, and active states (a), which correspond to states of the system at the present time. Adaptability can then be measured by the balanced exchange within blanket states, i.e. between sensory and active states. In short, for the purposes of this paper, internal states (μ) are disturbed by the environmental states (η) via the direct influence between sensory states (s) and active states (a). Nested Markov blankets can demarcate the reciprocal direct influence, continuous in time. While the graph here is made for intuitive illustration, the full formalisms can be found in Friston (2013, 2019).
This construction is compatible with the Free Energy Principle7 formalisms (Friston, 2013, 2019; Ramstead, Friston, & Hipólito, 2020; Parr, Pezzulo, & Friston, 2022; Hesp & Hipólito, 2022), in the sense that the system aims to avoid uncertainty, seen as a threat to its survival.8 Uncertainty here refers to an adaptive system's ability to anticipate its future sensory states and thus infer the causes of those states. Too much uncertainty will see a system unable to adapt effectively to a changing environment. Cognitive behaviour, and its neural dynamics, is a multiscale interaction with the environment. Mathematically, free energy can be represented as the difference between the entropy of the system (a measure of the uncertainty or randomness in the system) and the surprisal,9 or unexpectedness, of the environment.

Fig. 2. The figure depicts two distinct emotional response pathways to significant life events or disturbances: metastability and multistability. Each pathway represents a different pattern of emotional adaptation and stabilisation. Metastability response: this pathway entails stability within specific boundaries, with the potential for transition (i.e., bifurcation). The system acquires new strategies to readjust to the environment, which may lead to positive adaptation. However, it is important to note that the system may also develop short-term coping strategies that can be harmful in various ways (e.g., addiction). Multistability response: this pathway tends towards the stabilisation of emotional states. In the absence of disruption, the system may persist in a stuck state reminiscent of a minimal topology of an attractor point (e.g., depressive state, trauma, addiction, etc.).

Fig. 3. A depicts a system's state (represented in red) whose trajectory has developed into a pathological "stuck state", represented by the cyclic topology of fixed points ([C-D-E-G]). The system is stuck in a deep valley (the deeper it is, the harder it is to move away from it), also known as a local minimum point, which, while affording minimal uncertainty, also offers no potential for further development (cyclic topology), for example by developing novel fixed points (multistability). B represents how psychotherapeutic or coaching intervention in developmental dynamics (represented by the black thunderbolt) acts as a destabiliser or disturbance to "push" the system out of the "stuck state" by flattening the landscape (e.g., moving from deep valleys to shallow valleys), increasing the probability of the system's state moving between attractors (C, D, E, or G) and hence promoting more flexible, adaptive, multistable dynamics (multistability).

7 The free energy principle is a framework for understanding the brain and behaviour, suggesting that cognitive behaviour and neural activity constantly try to reduce free energy, a measure of the uncertainty or randomness in the brain's sensory inputs. The brain does this by predicting what will happen in the environment based on its past experiences, and then using that prediction to guide behaviour.

8 We follow Thomas Parr and Karl Friston in distinguishing between "uncertainty concerning the temporal evolution of environmental states, and uncertainty about the mapping from (hidden) states of the world to sensory observations" (Parr & Friston, 2017, p. 1). In other words, uncertainty around state transitions and uncertainty around sensory states. Systems that adhere to the FEP will reduce both forms of uncertainty, but not all uncertainty is threatening to survival: navigating temporary uncertainty to minimise expected free energy over time may be part of any epistemically valuable action, and while under some predictive processing theories applied to the FEP there is an imperative to minimise all uncertainty, not all uncertainty is given equal weighting within the system (see, e.g., Clark, 2013). It should also be noted here that we adopt the same non-realist approach to construing the FEP as we do the mathematics of CST (see, e.g., Andrews, 2021). In other words, we do not construe these frameworks as making (potentially contradictory) ontological claims about the theoretical entities they posit. While attractor states under CST can be viewed as behaviours that minimise a system's free energy, the uncertainty here only applies to the FEP, not to the mathematics of CST. We take these frameworks to be explanatorily complementary, rather than ontologically distinct. We thank an anonymous reviewer for raising these important considerations.

9 In statistics, surprisal is a measure of the unexpectedness or surprise of an event.
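The two quantities in this formulation, entropy and surprisal, are standard information-theoretic measures and can be computed directly. A minimal illustration using the standard definitions (not the full FEP formalism):

```python
import math

def surprisal(p):
    """Surprisal (self-information) of an event with probability p, in bits."""
    return -math.log2(p)

def entropy(dist):
    """Entropy = expected surprisal over a discrete distribution."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# A predictable environment: one sensory state dominates.
predictable = [0.97, 0.01, 0.01, 0.01]
# A volatile environment: all sensory states equally likely.
volatile = [0.25, 0.25, 0.25, 0.25]

print(round(entropy(predictable), 2))  # low average uncertainty
print(round(entropy(volatile), 2))     # → 2.0 (maximal for four states)
print(round(surprisal(0.01), 2))       # a rare observation is highly surprising → 6.64
```

Entropy captures the average uncertainty a system faces; surprisal captures the unexpectedness of a particular observation, which is the sense in which a "dark room" of fully predictable input carries minimal surprisal.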
Depending on the specific trajectory within an individual dynamic geometry, destabilisations (also known as perturbations or disruptions) can have a positive or a negative effect. A negative destabiliser occurs when the system's trajectory, after the destabilisation, tends towards a bifurcation. A bifurcation is not necessarily bad, but if that bifurcation underpins a particular suboptimal behavioural strategy in the agent, such as a coping mechanism that avoids important challenges, e.g., a personal difficulty or professional demand, then it likely becomes an antagonist to the system's overall multistability, with the potential to develop further mental health issues. Another negative effect is when the destabilisation triggers the development of a stuck state. A positive destabiliser is a disturbance deliberately applied, for example in psychotherapy, with the aim of supporting general mental health (Hayes & Andrews, 2020; Olthof et al., 2020). We arrive at a crucial point: according to work in CST, the role of psychotherapeutic intervention is to guide an individual out of a stuck state and into multistability. For example, if a system is in a stuck state, a psychotherapeutic intervention should aim at destabilising the system (creating a degree of uncertainty or 'noise') just enough to push it out of the stuck state and, ideally, back to multistability. The therapeutic effect is to disturb the dynamics just enough to change unhealthy patterns of thinking into healthy ones, known as the theory of change (de Felice, 2014; Rusk, Vella-Brodrick, & Waters, 2018), and it can include more traditional methods, ranging from talk therapy to psychedelic medicine (Hipólito et al., 2022). In what follows we explore the idea that ASEs could serve this kind of role.
Healthy adjustment can be formalised as a trajectory remaining within the critical thresholds of a system's developmental trajectory. The notion of a developmental trajectory implies that systems are dynamic. While complex systems are unpredictable, past events influence how and in what direction systems change (in systems terminology, this is known as path dependency) (Zazueta, Le, & Bahramalian, 2021). An adaptive system has a developmental trajectory, which refers to the progression of a given behaviour in an individual's psychological life: agency is diachronic, unfolds over time, and comes in degrees (Maiese, 2022). The trajectory can be explained and formalised through CST. Ideally the trajectory tends towards maintaining a healthy state: sustainable states respond to stressors in the environment by flexibly switching between patterns, thereby adjusting and attuning to the environmental states. A system's developmental trajectory is relevant because the system is not static in its dealings with environmental stressors. A multistable developmental trajectory, characterised by flexibility, yields the sustainable development of goals and projects, which require fundamental changes in the ways the system is coupled with its environment: it requires the system to enact the environment (Gallagher, 2017, 2020; Hutto & Myin, 2017; Varela, Thompson, & Rosch, 2017), as opposed to staying put, avoiding its stressors in a "dark room", i.e.
avoiding the uncertainty associated with environmental surprises, demands, and stressors. If systems, in their developmental trajectory, do tend to a "dark room", then it is likely that they are tending toward a stuck state and a general mental health condition such as depression (Fabry, 2020). A developmental trajectory is a chaotic complex system because it includes the reciprocal influences between a continuously changing environmental setting and a system with long-term goals (Brondizio, Ostrom, & Young, 2009; Selomane, Reyers, Biggs, & Hamann, 2019; Zazueta et al., 2021), which we here call a dynamic psychological geometry. The developmental trajectory is, in short, an idiosyncratic "mental fitness" journey. Here, we use the term "mental fitness" to refer to the way in which an individual's psychological wellbeing is partly predicated on these healthy multistable dynamics; "fitness" here just is the capacity to flexibly adapt to changes in the environment and to enact a healthy trajectory through action.
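Path dependency, the idea that past events shape the direction of change even in a stochastic system, has a classic minimal model in the Pólya urn. The sketch below is our illustration, not part of the authors' framework: identical update rules produce different long-run outcomes depending on the particular history taken.

```python
import random

def polya_urn(steps, seed):
    """Pólya urn: draw a ball at random, return it plus one more of the
    same colour. Early chance events are amplified, so the long-run
    proportion of red balls depends on the history taken (path dependency)."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Identical rules, different histories: the limiting proportions differ,
# because each trajectory depends on its own past.
outcomes = [round(polya_urn(10000, seed), 2) for seed in range(5)]
print(outcomes)
```

The analogy is loose but instructive: a developmental trajectory, like an urn history, is not determined by its rules alone but by the accumulated record of states it has visited.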
While under ideal circumstances a healthy developmental trajectory is sustained by existing socio-cultural scaffolding (such as a period of support and schooling through childhood and the ongoing influence of social interaction and societal institutions), it can also be aided by specialised developmental practitioners, who aim at optimising (mental) health development (Folke, Biggs, Norström, Reyers, & Rockström, 2016; Halfon, Russ, & Schor, 2022). Life coach practitioners and counsellors, for example, aim to support individuals by focusing on overarching thought patterns and how they relate to behaviour. This might include a special focus on a specific domain, such as one's career or personal life and relationships. In the context of coaching, this can include improving certain individual psychological features or cultivating stronger optimal habits. Cognitive behavioural therapy is a form of intervention that targets, by prompting sustained reflection, the metacognitive processes that underpin our patterns of behaviour (Grant, Townend, Mulhern, & Short, 2010). The vast majority of people, however, deal with their mental fitness in the attainment of personal and professional goals without the assistance of professional developmental practitioners. Yet for many people, it is during periods of coping with external stressors without adequate external support that instances of poor mental health can emerge (Lockhart, Sawa, & Niwa, 2018; Sachs-Ericsson, Rushing, Stanley, & Sheffler, 2016).
While many institutions and much research make it clear that preventive mental health strategies and individualised therapeutic strategies are the best approach to health, this is as yet far from a reality (Arango et al., 2018; Colizzi, Lasalvia, & Ruggeri, 2020; Lyons-Ruth, Wolfe, & Lyubchik, 2000; McDaid, Park, & Wahlbeck, 2019; Ogden & Hagen, 2018). Specialised resources are not an attainable reality for most of us. In the next section we focus on how neurotechnology can be implemented to predict, on an individual, personalised trajectory, when a preventive intervention (i.e., destabilisation) should take place.

Wearable technology and optimal windows of intervention for mental fitness
CST implements a multiscale network of optimal windows of intervention to predict shifts in symptom structures for good symptomatic outcomes across a variety of clinical general mental health conditions (Fisher, Medaglia, & Jeronimus, 2018; Helmich, Snippe, Kunkels, Riese, Smit, & Wichers, 2020; Wichers, Schreuder, Goekoop, & Groen, 2019). We suggest that the same complex model strategies (Kuranova et al., 2020; Liu et al., 2022) can be implemented using wearable technology for the maintenance of mental fitness, i.e., for preventive mental health and general well-being. Specifically, information about optimal windows may be communicated to the ASE so that its role as meta-affordance can be utilised to ensure those windows are capitalised on. Nonlinear and discontinuous changes and variability occur in daily life, and are expressed neurobiologically. For example, if one spots a spider in a room, it is likely that one's heart rate will increase. Neurobiological fluctuations can be used as predictors of symptom reduction and the development of healthier patterns of functioning, not only generally but also in the carrying out of a specific task. A therapeutic strategy called Grid-Wave, or state space grids (Hollenstein, 2013), allows depicting dynamics across development by quantifying the dispersion or movement of variables in a phase space (a representation of all possible states of a system). Increased variability in the cognitive and behavioural components of a given behavioural pattern predicted improvement10 in healthier functioning after cognitive therapy for personality disorders (Hayes & Yasinski, 2015). Similarly, more variability in maladaptive parent-child interactions predicted more symptom and behavioural improvement in aggressive children (Lichtwarck-Aschoff & van Rooij, 2019). Nonlinear and discontinuous changes and periods of rising variability in symptoms and patterns of pathology occur in psychotherapy, and these fluctuations predict symptom reduction and the
development of healthier patterns of functioning (Yasinski, 2016). In short, increases in variability and noise in the manifestation of symptoms, behaviour, and underlying physiological markers seem to suggest optimal windows for intervention in terms of effecting a state transition.
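As a rough operationalisation of "rising variability signals an optimal window for intervention", one could flag points where the rolling variance of a self-reported symptom series rises well above its baseline. The data, window size, and threshold below are all made up for illustration; real systems would need validated measures and calibrated thresholds.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def intervention_windows(series, window=5, factor=3.0):
    """Flag time points where the rolling variance of a symptom series
    rises well above its baseline: candidate 'optimal windows' for a
    destabilising intervention. Window size and threshold are arbitrary."""
    baseline = variance(series[:window])
    flags = []
    for t in range(window, len(series)):
        if variance(series[t - window:t]) > factor * max(baseline, 1e-9):
            flags.append(t)
    return flags

# Synthetic daily mood ratings: stable at first, then a burst of variability.
mood = [5, 5, 6, 5, 5, 5, 6, 5, 2, 8, 1, 9, 3, 5, 5]
print(intervention_windows(mood))  # → [9, 10, 11, 12, 13, 14]
```

Here the flagged indices correspond to the burst of fluctuation in the series, which on the CST reading marks the period when the system is most amenable to a state transition.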
Ambient Smart Environments (ASEs) can be implemented to potentiate and support long-term patterns of multistability (by restructuring a field of affordances) and thereby assist a user in avoiding a deterioration of well-being (i.e., a stuck state). We understand that optimal behaviour is a rather vague term, but roughly speaking we use it to refer to the kind of multistable dynamics outlined earlier, using CST. Optimal behaviour is behaviour exhibiting adaptability and flexibility in the face of external perturbation or stressors. In other words, "optimality refers to … successfully managing the volatility of our environments over the long term" (Miller et al., 2022, p. 10). This flexibility and adaptability are optimal because they support the kind of allostatic control mentioned above, thereby avoiding many of the unfolding health complications associated with prolonged allostatic load. Flexible multistable dynamics are what allow a system to attend to needs stretching out across multiple timescales and to anticipate, rather than merely respond to, relevant changes in the environment. However, understanding optimal behaviour in terms of multistable dynamics and good allostatic control is underdetermined in relation to often-used normatively thick concepts like "life meaning", "life satisfaction", or "personal accomplishment", and the multistable configuration that underpins one person's meaningful life might seem devoid of meaning to another. In accordance with work by Miller et al. (2022), we take this kind of long-term adaptability as a likely necessary condition for the more normatively thick concepts like "flourishing" and "meaningfulness", although it is not a sufficient condition.
Wearable technologies or "wearables" are part of such distributed network ASEs and can be implemented to work quietly and constantly to learn, monitor, predict, and adapt to the needs of the occupant. Wearables (e.g., smart rings, smart belts, gaming armbands, smart shoes, smart clothing, etc.) have been developed and used for quite some time to detect, analyse, and transmit information such as vital patterns relating to physical health: heartbeat and rhythm, quality of sleep, step counts, floors climbed, calories burned, oxygen levels, direction of movement, total minutes of fitness activity, etc. This kind of data can form a vital component of an ASE acting as an effective and beneficial meta-affordance. Indeed, the suggestion that wearables can combine with other forms of technology to offer adaptive11 interventions isn't without precedent. A recent review of how technology might be used to support people with autism has suggested that this is one of the most promising avenues for future research (Sharmin et al., 2018).
We propose that wearables can be implemented as part of an ASE for the improvement of a developmental trajectory, which, in the previous section, we detailed as a personalised mental fitness journey. The wearable technology, connected to a mental-fitness-specific app, can be implemented as a professional and individualised life coach. The data collected from the wearables and run through the mental-fitness-specific app can be used for the ASE to learn, monitor, predict, and adapt the environment to the needs of the individual occupant as they navigate the personal and professional challenges of daily life. Wearables will be capable of detecting the kind of physiological variability that can signify optimal windows for intervention. A low-stakes example might be the common challenges that fall under the broad label of "procrastination", which is measurable in terms of individual goals in relation to performance metrics, benchmark performance, project or task management, tracking completion, skills analysis, etc.
(see e.g., Fernández-del-Río, Koopmans, Ramos-Villagrasa, & Barrada, 2019). As Heath and Anderson (2010) note, one interesting feature of procrastination is that we are typically aware that it is happening, and as such, internalist strategies are unlikely to be as effective as externalising a degree of control through scaffolding and workarounds. Wearables connected to a mental fitness app can monitor data and learn to predict when an agent is likely to engage in the kinds of avoidant, procrastination behaviours that are associated with stress, anxiety, and reduced wellbeing. In this way, ASEs can take an active lead in the tactical disruption of unhealthy attractor states that threaten to become stuck states. Recall the role of an ASE as a meta-level affordance, restructuring information within the environment to shape how opportunities for action appear to the agent. These tactical destabilisations could include introducing friction into access routes to avoidant behaviours (many people already install content blockers in their browsers, for example, that make it very difficult to access certain websites at certain times).
A key component of the picture of ASEs that we have so far advanced is that these systems can take an active lead in disrupting attractor states that threaten to become stuck states (which would increase the risk of deterioration in mental health and wellbeing). By disrupting attractor states that are exerting too great an influence and manifesting suboptimal behavioural routines, the healthy developmental trajectory of a system can be maintained. Ultimately, the goal is for the ASE to offer tactical destabilisation such that multistable dynamics are maintained and stuck states are prevented. In practical terms, one of the main methods by which an ASE could achieve destabilisation is to temporarily render an agent's environment less predictable, injecting some degree of uncertainty to elicit epistemic or exploratory behaviours, as opposed to relying on an ingrained routine or habit (think back to the smart fridge mentioned earlier, with its ability to take the lead in suggesting new recipes and ordering in surprising ingredients). In practice, this would mean making changes to the agent's field of affordances such that the agent is more likely to engage in actions that move them toward a tipping point. To borrow terms from Heath and Anderson (2010), the materiality of the environment can be restructured to create 'chutes' or 'ladders' that make it easier or harder to complete a certain task that might be very alluring or aversive. Their example of a chute is placing one's running shoes by the door the night before an early run is planned, but it is easy to imagine the ways an ASE could implement little chutes throughout the day. Conversely, the system could also introduce 'ladders': for example, if it learns that we are more likely to procrastinate using certain apps, it may temporarily restructure our desktop to make them harder to access. This emphasis on exploratory behaviour and uncertainty places these kinds of intervention in direct opposition to recent
accounts of the paradigmatic case of a suboptimal behavioural routine: addiction. According to these accounts, addiction is, essentially, a pathological reliance on deeply entrenched strategies for avoiding uncertainty (Miller et al., 2020). On this account, addicts come to enact a world in which affordances have been wholly restructured through a self-organising process to reflect the agent's emerging desires. By influencing how affordances appear to the agent, systems like ASEs can intervene directly on the formation of these habitual behaviours. To be clear, we are not suggesting that ASEs are a cure for anything like full-blown drug addiction. We are suggesting, however, that if we understand various forms of psychopathology (including addiction and depression) from the standpoint of CST and agent-environment systems, then the ability of ASEs to render the environment more or less destabilising becomes an obvious strategy for preventing suboptimal routines from becoming entrenched in the first place. While we see these kinds of destabilisation as the primary strength of ASEs for therapeutic use, it is also worth noting that ASEs would be capable of the opposite form of intervention: decreasing uncertainty in the environment, and thereby providing an increase in stability for individuals whose developmental trajectory has become chaotic.
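One simple way to picture how an ASE could inject controlled uncertainty into an entrenched routine is temperature-scaled sampling over candidate suggestions. The sketch below is a hypothetical illustration (the recipe names and "habit strength" scores are invented): at temperature zero the system simply reproduces the strongest habit, while higher temperatures flatten the choice distribution, making exploratory options more likely without ever stepping outside the candidate set.

```python
import math
import random

def suggest(options, habit_strength, temperature=1.0, rng=random.Random(0)):
    """Pick a suggestion. temperature=0 reproduces the ingrained habit;
    higher temperatures flatten the distribution, nudging the agent
    toward exploratory (uncertainty-increasing) choices."""
    if temperature <= 0:
        return max(options, key=lambda o: habit_strength[o])
    # Softmax over habit strengths, scaled by temperature.
    weights = [math.exp(habit_strength[o] / temperature) for o in options]
    total = sum(weights)
    return rng.choices(options, weights=[w / total for w in weights])[0]

recipes = ["usual pasta", "new curry", "surprise tagine"]
strength = {"usual pasta": 3.0, "new curry": 1.0, "surprise tagine": 0.5}

habitual = suggest(recipes, strength, temperature=0)       # entrenched routine
exploratory = suggest(recipes, strength, temperature=5.0)  # destabilised choice
```

The design point is that destabilisation here is a single tunable parameter layered over an otherwise habit-tracking system, which anticipates the discussion of nested control below: how much "temperature" the system is permitted is itself something that can be constrained.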
While wearables have so far mostly been developed to identify optimal windows of intervention for physical fitness, in this section we have suggested that their power can be unlocked for mental fitness. The connected mental-fitness app, equipped with a complex systems state space grid and gathering real-world, networked, multidimensional information about ecology, biology, society, and infrastructure, will be able to predict upcoming shifts towards good symptomatic outcomes or positive behavioural change, i.e., when a preventive intervention (destabilisation) should take place, and to identify predictors of symptom reduction and of the development of healthier patterns of functioning for attaining long-term personal and professional goals and, thereby, overall wellbeing. ASEs such as multiscale networked wearables can be implemented to directly support efficient allostatic control, which is essentially the avoidance of prolonged states of stress through optimal anticipatory action.

Trust as active inference
In the previous section, we highlighted the power that ASEs have to disrupt suboptimal routines through strategic increases in uncertainty. In other words, ASEs will have the capacity to, at times, make our lives temporarily more difficult; they will serve to try to stop us doing something that, in the moment, we might want to do. This raises the spectre of a potential issue: designing technology that acts to disrupt our ability to minimise uncertainty the way we want could be in tension with designing technology that we can trust. This issue of trust in systems that can act with a degree of autonomy to disrupt and destabilise the behavioural patterns of a human agent is a morally significant avenue for discussion and warrants a more complete treatment in a dedicated paper. Nevertheless, we think some closing remarks on the matter should be made here, to highlight our awareness of the issue and to act as a grounding for future discussions.
Being able to trust technology has been described as a primary desideratum of design and implementation, as a lack of trust can be detrimental to both productivity and safety (Deley & Dubois, 2020; Lee, 2008). In most treatments, the formation and maintenance of trust is predicated on a set of perceived qualities or characteristics in the person or technology to be trusted (or not), and then cashed out in functional terms.
Simply put, to say that a user trusts a technology is to say that the user believes the technology to have desirable attributes such that it can be relied upon in certain contexts (McKnight, 2005; Cabiddu, Patriotta, & Allen, 2022). These kinds of approaches can be roughly categorised as "externalist", due to their focus on reliability as a function of some set of externally realised characteristics.
This kind of externalist conception of trust between humans and technology has recently been formalised under Active Inference, a framework for understanding perception and action as uncertainty minimisation (Schoeller, Miller, Salomon, & Friston, 2021). According to Active Inference, brains instantiate a statistical model of themselves and their environment, known as a generative model (Friston, 2010; Hohwy, 2013; Clark, 2016; Parr et al., 2022). This model encodes Bayesian beliefs (probability distributions) over perception and action and generates predictions about what state the agent is likely to find itself in, with a primary imperative to minimise prediction errors. Under Active Inference, prediction error is equated with the expected entropy of a given state. An agent's adaptive behaviour can therefore be explained as the minimisation of prediction errors, i.e., of the entropy of its own states, thus resisting the second law of thermodynamics and staying alive in environments that are capricious, and potentially volatile or hostile. Under Active Inference, perception and action function together in the service of adjustment and adaptation to a continuously changing (and challenging) environment (which is to say, in the service of the survival of the organism). To that end, an agent's model-based reasoning involves good predictive control: perceptual inference serves to update the distributions (expectations) encoded in the agent's generative model, making the model a better fit and more apt for effective action. Actions, on the other hand, can be said to "update" the world: where a prediction results in prediction errors, this drives the agent to act on the world to make it better fit the prediction, thereby supporting the agent's adaptation. An agent therefore has two uncertainty-minimising strategies: passively updating its beliefs to better account for the incoming data, or acting on the world to make it better fit the model. For example, if I expect
to walk to the bus stop and catch the 10:05 to campus but see the bus approaching when I'm still some distance away, I could simply update my expectation to reflect the world and wait to catch the 10:15. Alternatively, I could run, acting to ensure my original expectations are met.
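The first of these two strategies, perceptual inference, is just Bayes' rule: the prior belief is revised in light of the evidence. A minimal numerical sketch of the bus example follows; the prior of 0.9 and the likelihoods are invented for illustration only.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Perceptual inference: revise the probability of a hypothesis
    given new evidence (Bayes' rule), rather than acting on the world."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Hypothesis H: "I will catch the 10:05 by walking" (illustrative prior 0.9).
# Evidence E: the bus is already approaching while I'm still far away --
# unlikely if H is true (0.2), likely if H is false (0.95).
posterior = bayes_update(prior=0.9, p_evidence_if_true=0.2,
                         p_evidence_if_false=0.95)
# Strategy 1 (perception): accept the revised, much lower belief and
# plan to wait for the 10:15.
# Strategy 2 (action): run, changing the world so that the original
# prediction comes true instead of updating the model.
```

The asymmetry between the two strategies is what matters for the discussion of trust: an agent who delegates the "acting on the world" half to a technology needs confidence that the delegated action will realise the predicted state.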
According to Schoeller et al. (2021), trust between humans and machines is best understood in the language of uncertainty, control, and transparency. In extended action (actions undertaken that are empowered by technology), trust can be cashed out in terms of the confidence an agent can have that those extended actions will reliably bring about expected gradients of prediction error minimisation. This has obvious similarities with the ways that notions of 'trust' have featured in discussions around the thesis of the Extended Mind (ExMind).
ExMind is a radical thesis stating that features of the external environment can properly constitute part of an agent's cognitive machinery (Clark & Chalmers, 1998). There has been much discussion of what kinds of criteria might dictate whether or not an external resource can count as part of cognition, with one front-runner being the high degree of trust that an agent should have in that external resource (Clark & Chalmers, 1998; Clark, 2010). On this account, 'trust' has been characterised as a question about how automatically we endorse the contents or function of a particular external tool. This is echoed in the account by Schoeller and colleagues: they write that, "at its most straightforward, trust is a measure of the confidence that we place in something behaving in beneficial ways that we can highly predict" (Schoeller et al., 2021). For both ExMind and the account by Schoeller and colleagues, any suspicion on the user's part that engagement with the system in question will bring about some unpredicted or undesirable state directly precludes trust. In typical cases, agents have a high degree of confidence that an action undertaken with their own body will bring about an expected state. For example, if I reach, using my own arm and hand, for the coffee cup in front of me on the desk and raise it to my mouth, the prediction that the result will be my drinking the coffee is assigned a high degree of confidence. On this Active Inference account of trust, the degree to which I trust a robotic arm to do the reaching and grabbing and bringing to the mouth for me can simply be understood in terms of how much confidence I have that it will achieve the same result as when I use my own arm.
However, purely externalist accounts of trust have received criticism. Kiran and Verbeek (2010) argue that accounts focusing on specific features of the technology are insufficient to give a whole account of what it means to trust a technology. For them, trust between humans and technology is a far more complex issue than one simply about perceived reliability. They argue for an 'internalist' component that does justice to the way that trusting ourselves to a technology can reconstitute human subjectivity in profound ways (Kiran & Verbeek, 2010). Trusting a technology, on this account, is about recognising the way that technology provides a certain amount of freedom in choosing how we let it restructure our experience of the world, and about being able to apprehend its mediating influences, such that we are able to freely invite some while rejecting others.
On the externalist accounts of trust outlined above, it is easy to see that an ASE that deliberately induced (high levels of) uncertainty could be problematic. Any account of ASEs that argues for their primary benefit being this kind of destabilisation must navigate this apparent tension and provide an account of how human agents can trust such systems. In brief, two intuitive answers could be explored. The first is simply to enforce constraints over the range of possible outcomes. Specifically, while the system may act to destabilise stuck states by inducing surprise associated with immediately available actions, the broader spectrum of possible actions will itself be constrained according to some specified rule. For example, consider again the smart fridge that monitors eating and shopping habits, orders in food, and suggests challenging and healthy recipes. While the interventions of this system might serve to keep us interested and motivated to eat healthily and be creative and engaged in the kitchen, we would want to know for sure that the system will never try to serve us an arsenic-laced quiche. One way to think about this is in terms of the level of granularity at which the interventions of the system need to be predictable in order for us to trust it; perhaps we would want a system that is predictable from a coarse-grained perspective, but unpredictable from a more fine-grained perspective. The second way to establish a commensurability between destabilising uncertainty and trust is to think about the way in which control can be nested. This simply means that while the system has some slack to disrupt, just how much slack it has is controlled by the human agent. Notably, systems may be customisable and adjustable, and the extent to which they can be unpredictable may be dialled in by the agent until that degree of unpredictability is itself predictable. This is rather like the trust a human has in their dog such that they are willing to let it off the
leash. In short, we think that the answer to this ostensible problem is to see that trust emerges from predictability and endorsement across multiple levels of analysis and within nested structures of control.
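Both answers can be sketched together in a few lines of code. In the hypothetical controller below (class name, actions, and the "slack" dial are all invented for illustration), the approved action set provides the coarse-grained constraint, while the agent-set slack value governs how often the system may deviate from the predictable default.

```python
import random

class NestedControl:
    """Sketch of nested control: the ASE may behave unpredictably
    (destabilisation), but only within an agent-approved action set
    (coarse-grained predictability), and only with as much slack
    as the human agent has dialled in."""

    def __init__(self, approved_actions, slack=0.0, rng=None):
        self.approved = list(approved_actions)  # hard constraint: no arsenic quiche
        self.slack = slack                      # 0.0 = fully predictable
        self.rng = rng or random.Random(0)

    def set_slack(self, value):
        """Only the human agent adjusts how disruptive the system may be."""
        self.slack = min(max(value, 0.0), 1.0)

    def choose(self, default_action):
        if default_action not in self.approved:
            raise ValueError("the default action must itself be approved")
        if self.rng.random() < self.slack:
            # A surprising choice -- but never outside the approved set.
            return self.rng.choice(self.approved)
        return default_action

actions = ["suggest recipe", "reorder staples", "propose new ingredient"]
controller = NestedControl(actions, slack=0.0)
predictable = controller.choose("suggest recipe")  # slack 0: always the default
controller.set_slack(0.8)
surprising = controller.choose("suggest recipe")   # may differ, always approved
```

The fine-grained behaviour is unpredictable by design, yet every possible outcome remains coarsely predictable, which is exactly the multi-level structure of trust the preceding paragraph describes.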
However, we also agree with Kiran and Verbeek (2010) in their assertion that a purely externalist account of trust, rooted solely in reliability, will ultimately fall short. What is clear is that work needs to be done to synthesise the existing accounts of trust in the literature and to see how they can be specifically applied to a technology as unique as an ASE.

Conclusion
Smart technology will become more prevalent and will come to play a more significant and consequential role in the lives of many of us. This will force us increasingly to think seriously about the constellation of questions raised by the mind-technology problem. In order to answer these questions in the best way possible, it is important for theorists to bring to bear the best frameworks available to capture the full range of potential that these systems have. However, this work also raises obvious practical questions around how such systems can begin to interpret something as messy and context-sensitive as human behaviour. Moreover, there are technical and practical challenges in realising the full potential of wearable technologies of the kind described throughout this paper, for instance, addressing how physiological data can be effectively translated into a workable schema to inform apt interventions by the ASE. One of the goals of this paper was to shine a light on these challenges by bringing into sharper relief the great potential ASEs have. ASEs present a novel challenge in thinking about human-technology interaction, in large part due to their role as meta-affordance. But this novel challenge also represents a new frontier in terms of imagining the kind of profound benefit such technology could have.