Extending the Predictive Mind

ABSTRACT How do intelligent agents spawn and exploit integrated processing regimes spanning brain, body, and world? The answer may lie in the ability of the biological brain to select actions and policies in the light of counterfactual predictions—predictions about what kinds of futures will result if such-and-such actions are launched. Appeals to the minimization of ‘counterfactual prediction errors’ (the ones that would result under various scenarios) already play a leading role in attempts to apply the basic toolkit of the neurocomputational theory known as ‘predictive processing’ to higher cognitive functions such as policy selection and planning. In this paper, I show that this also leads naturally to the discovery and use of extended processing regimes defined across heterogeneous mixtures of biological and non-biological resources. This solves a long-standing puzzle concerning the ‘recruitment’ of the right non-neural processing resources at the right time. It reveals how (and why) human brains spawn and maintain extended human minds.

1 Introduction

Clark and Chalmers [1998] argued that mind is not "all in the head", and that under some conditions outward loops that involve quite mundane events (such as looking at things we've written down in notebooks) could count as proper parts of the machinery of individual thinking. It doesn't matter, the authors claimed, whether good information is stored in long-term bio-memory or requires bodily actions initiating the use of various tools and technologies (such as scribbling in notebooks or accessing smartphones). Human minds, considered as cognitive engines, can be built (and repeatedly rebuilt) from heterogeneous arrays of internal and external resources.
In the original paper, Clark and Chalmers introduced the example of a mildly memory-impaired agent, Otto. Otto had come to rely increasingly on a notebook that he always carried with him, using it to retrieve information about important facts such as the address of MoMA, his favourite museum. Entries in the notebook act as a kind of cognitive scaffolding, one that is accessed and deployed by means of sensorimotor (action) loops that unearth the right information at the right time to serve Otto's goals. These loops look much like ordinary motor loops, but they are special, Clark and Chalmers argued, in that they enable notebook entries to help realize some of Otto's standing beliefs about the world.[1] Not just any form of external encoding that is accessible by a sensorimotor loop counts. Properly mind-extending loops are those (and only those) that implicate external resources in a way that satisfies additional conditions of trust, reliability, and easy access. Unlike a dusty encyclopaedia stored in the basement, the information in Otto's infamous notebook (more on which in section 2) is easy to access as and when needed. Its deliverances are also typically endorsed and trusted [Clark 2008, 2010]. Like many of the functionalities provided by my smartphone, Otto's notebook meets those added conditions of trust, reliability, and access. Other resources fail on one or more counts and remain (like the beta release of an app) occasional tools, in need of careful deliberative oversight while in use. In between lies a vast swathe of props and tools that, although fairly fluently looped in by well-practiced sensorimotor means, are less reliable or trusted. My car GPS currently occupies that kind of middle position.
Extended mind theorists claim that resources that meet all the additional conditions behave as true cognitive prosthetics. Linked appropriately to the biological organism, such resources augment and extend the core biological 'thinking package', much as a good prosthetic limb extends capacities for individual action. From this perspective, shrinking the mind down to the size of the brain-bound component alone looks worryingly like identifying the sporting abilities of a prosthetically enabled athlete with those of their unaided biological body. Yet the alternative option (extending the mind) has seemed inevitably revisionary, requiring us to partially re-think the meaning and scope of the concept of mind itself.
In what follows, I approach the question from a new direction, setting it in the rich explanatory context of recent work on the predictive brain [Hohwy 2013; Clark 2016]. I shall assume, in line with these theories, that the neuronal component of any putative extended organization is a biological brain whose core operating principle is to minimize long-term prediction error. This allows us to explore the idea that such 'predictive brains' explain and make possible the looping sensorimotor encounters that build extended minds.
A helpful way to think of this is by analogy with the notion of a 'predictive body-budget' [Barrett 2017]. This is a kind of internal accounting that looks to our future bodily needs, addressing predictable shortfalls, such as inadequate blood sugar levels, well before they arise. Just as a financial budget tracks future income and expenditure, a body-budget tracks and anticipates the use and replenishment of key resources for maintaining bodily life and functioning. To renew them, we engage in familiar activities such as finding food and sleeping.
In the next section we'll see that predictive brains enable us to find and consume external sources of storage and information for the same broad reasons and using many of the same mechanisms. In this way they come to command a kind of 'predictive knowledge budget', one that enables them to select actions and policies that will steadily secure and deploy the knowledge needed to approach our long-term goals. In building this second budget, our brains do not care whether key information is stored using biological memory or beyond, in notebooks, apps, and GPS systems. What matters is just that the right information or operations become available as and when they are needed for the fluid control of behaviour.
The result is a delicate dance in which predictive brains provide the perfect platform for the creation and deployment of extended human minds.

2 Acting for Information
Our lives are filled with actions, but we often think of those actions mostly in terms of their practical goals. For example, I may be trying to win at Scrabble. But if we look a little closer, it becomes clear that many of my actions serve these goals in an interestingly indirect way. They serve them by increasing my state of information (for example, looking up the times of trains), or by changing the problem space itself to improve my chances of success. As an example of the latter, consider the Scrabble player who physically shuffles the tiles in front of them. They do this not because it is itself a way to score points (if it were, I'd be a much better player than I am) but to prompt the biological brain with new fragments that might help recall a high-scoring word. Reshuffling XEO to EXO can prompt me to consider EXORCISE as a candidate word, and to check my hand and the board for ways to leverage an impressive 76 points.
Actions whose job is to improve or transform information rather than achieve immediate practical ends are sometimes called 'epistemic actions' [Kirsh and Maglio 1994]. Eating an apple is a practical action. Searching online for the address of a nearby grocery store is an epistemic action, whose role is to gather information that (once acted upon) will reduce the gap between your current state and your goal. A vast swathe of our daily activity, from turning on the lights to see what's in the garage, to looking up movie times online, plays just this kind of epistemic, information-seeking role. Epistemic actions featured in just this role in our original argument for the extended mind, appearing as the essential lead-in [Clark and Chalmers 1998: 8] to the core claim that:

Epistemic action, we suggest, demands spread of epistemic credit. If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.
Epistemic actions are chosen not because they are of intrinsic value to us (like eating the apple), nor even because they move us closer, physically speaking, to some practical goal. Instead, they may even move us temporarily further away. For example, I might navigate back to a familiar spot that I know full well is in entirely the wrong direction, if I happen to know a reliable route from that spot to my destination. This is sometimes called the 'coastal navigation algorithm', since a sailor may navigate to the coast in order to better find her way, even if following the coast is a much longer route [Roy and Thrun 2000].
Predictive brains, equipped with a model of how current actions will alter our own future sensory states, are naturally pre-disposed to discover both practical and epistemic actions. According to the leading 'active inference' formulation of predictive processing [Friston 2010; Friston, Rigoli et al. 2015], practical actions and epistemic actions are determined in the same way, as the predictive brain makes counterfactual predictions about what kinds of futures will result if such-and-such actions are launched. Actions are then chosen that deliver preferred outcomes directly (when possible) or else that probe and sample the environment to bring forth more information, reducing key uncertainties and making the outcome more likely in the future.[2] The upshot is a kind of 'knowledge-budgeting' whose information-theoretic target is simply the timely resolution of salient uncertainty by whatever means are known to be available. Knowledge-budgeting enables the agent to marshal resources in the present that will help minimize expected future prediction error, just as we might top up energy supplies in the present in anticipation of future needs (like running a marathon).
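In the formal treatments just cited, this is standardly cashed out as the 'expected free energy' of a policy π. One common formulation (the notation follows the general active inference literature rather than the present paper) scores each candidate policy by the sum of a pragmatic term and an epistemic term:

\[
G(\pi) \;=\; \sum_{\tau} \Big( \underbrace{-\,\mathbb{E}_{Q(o_\tau \mid \pi)}\big[\ln P(o_\tau \mid C)\big]}_{\text{expected cost (pragmatic)}} \;-\; \underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)}\, D_{\mathrm{KL}}\big[\,Q(s_\tau \mid o_\tau, \pi)\,\big\|\,Q(s_\tau \mid \pi)\,\big]}_{\text{expected information gain (epistemic)}} \Big).
\]

Minimizing G(π) therefore favours policies that both realize preferred observations (encoded in the prior C) and reduce uncertainty about hidden states s: practical and epistemic action fall out of a single quantity.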
The difference between humans and other animals (who likewise engage in many forms of epistemic action) then lies in the temporal depth and complexity of the world-knowledge they bring to bear: an accomplished tool-user such as an orangutan [Laumer et al. 2019] might use a stick to probe the depth of the water before attempting a hazardous river-crossing, while a human might consult a time and tide table that enables them to plan river-crossings for many months ahead. Knowledge-budgeting thus occurs at multiple time-scales and can involve both short-term probes and long-term planning.
The ability of active inference systems to find solutions of this kind has now been demonstrated in multiple simulation studies [Friston et al. 2015; Parr and Friston 2017; Tschantz et al. 2020]. In one such study, simulated rats used a mixture of practical and epistemic actions to find rewards in a three-arm (T) maze. The rats were just small bundles of code that learnt a predictive model of outcomes conditioned upon actions, and that 'wanted' to occupy grid positions designated as containing food rewards. The simulated rats started each run at the centre point of the maze, in which food rewards (preferred states) could be found at the end of one of the two arms, the right and left parts of the top of the T. The lower arm contained a cue that told them where to find the reward on each trial. Rather than directly explore each upper arm, the rats learnt to navigate by temporarily moving away from their targets to the lower location, which never actually contained a reward but always contained an informative cue.
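The decision structure of such simulations can be conveyed in a few lines. What follows is a deliberately minimal sketch in Python, not a reproduction of the cited studies: the cue validity, the preference values, and the three one-shot 'policies' are all invented for illustration, and the time cost of the extra step to the cue is ignored. Each policy is scored as expected cost minus expected information gain, and the epistemic detour wins:

```python
import numpy as np

# Minimal illustrative sketch (not the cited simulations): a T-maze agent
# scores three one-shot policies by G = expected cost - expected info gain.
# All numbers (cue validity, preferences) are invented for this example.

b_left = 0.5            # prior belief that the reward is in the left arm
cue_validity = 0.98     # probability the cue correctly signals the context
log_pref_reward = 3.0   # log-preference for observing the reward
log_pref_empty = 0.0    # log-preference for finding an empty arm

def entropy(p):
    q = np.clip(np.array([p, 1.0 - p]), 1e-12, 1.0)
    return float(-(q * np.log(q)).sum())

def G(policy, b):
    if policy == "go_left":        # commit to the left arm
        cost = -(b * log_pref_reward + (1 - b) * log_pref_empty)
        gain = 0.0                 # nothing is learned before committing
    elif policy == "go_right":     # commit to the right arm
        cost = -((1 - b) * log_pref_reward + b * log_pref_empty)
        gain = 0.0
    else:                          # "check_cue": epistemic detour, then best arm
        p_cue_left = b * cue_validity + (1 - b) * (1 - cue_validity)
        post_left = b * cue_validity / p_cue_left               # P(left | cue says left)
        post_right = b * (1 - cue_validity) / (1 - p_cue_left)  # P(left | cue says right)
        gain = entropy(b) - (p_cue_left * entropy(post_left)
                             + (1 - p_cue_left) * entropy(post_right))
        p_reward = (p_cue_left * max(post_left, 1 - post_left)
                    + (1 - p_cue_left) * max(post_right, 1 - post_right))
        cost = -(p_reward * log_pref_reward + (1 - p_reward) * log_pref_empty)
    return cost - gain

for policy in ("go_left", "go_right", "check_cue"):
    print(f"{policy:10s} G = {G(policy, b_left):+.3f}")
# check_cue scores lowest: acting for information is the best policy here.
```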
Here, just as in the 'coastal algorithm', the simulated rats move in a direction that is away from the practical target, to a place from which (thanks to the cue) an efficient route to the reward is assured. In so doing, they are acting to gain information, and thereby (if they were real rats) to find efficient ways to the food sources that maintain the 'body-budget' essential to life. This reminds us that trade-offs between epistemic and practical action emerge early in the history of life, even though they become more striking as world-models increase in scope and depth, and as technologies of support multiply. Now, we might use Google to help find a restaurant of choice, one serving up rewards (such as unusual sashimi dishes) that are at best tenuously linked to metabolic necessities. But the dance between practical and epistemic action remains the same.
In all of these scenarios, all the brain does is select the actions that best minimize counterfactual prediction error relative to goals. By so doing, it finds the set of interwoven practical and epistemic actions that are most likely to bring desired future states (being on the other side of the river, winning at Scrabble) into being.[3] Epistemic and practical choices here flow simultaneously. Indeed, from this perspective, they are not very different after all. Any predictive processing agent with a time horizon will discover both epistemic and practical actions. Both strategies emerge directly from the attempt to minimize the information-theoretic quantity known (in the recent literature) as expected future prediction error [Friston et al. 2015; Parr and Friston 2017]. That's an unwieldy phrase, but for present purposes it just means having a capacity to look counterfactually towards the future, so as to compare what happens if we take one or another course of action.
Suppose that my goal is to see the new Spike Lee movie tonight. In these accounts, that is realized as a strong prediction that I will indeed see that very movie tonight. But to make that prediction come true, I need to improve my state of information by taking the venues and showing times into account. So right now, epistemic action prevails. Using my smartphone to reveal local showtimes, I now strongly predict that I'll attend the nearby 8pm showing. This new prediction acts as another goal and yields further epistemic activity (perhaps I consult the web) that in turn leads me to believe/predict that I will get the 7:30 bus. That prediction acts as yet another goal, one that finally enslaves apt motor action when it is time for me to leave the house. Notice that in this whole scenario, all the brain does is select the actions that best minimize expected prediction error relative to the goals. As time horizons extend, this sweeps in purely 'inner' retrievals (for example, from bio-memory), other kinds of neural operations (such as imagining the route), and epistemic actions such as consulting the web using a smartphone. Otto's use of the notebook would be mandated in just the same way, and for just the same reasons, as the use of the internet in the previous example. These uses are also continuous, we now appreciate, with the orangutan's use of the stick to gauge the depth of the water. Extended minds (sensorimotor loops that engage resources meeting the additional conditions laid out in section 1) thus emerge when ordinary predictive biological brains minimize expected future prediction error using the best means available.
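The cascade just described can be caricatured in a few lines of code. This is a schematic sketch only, with the goal, the lookup sources, and the stubbed answers all invented for the example; the point is simply that each strongly predicted outcome triggers an epistemic action whenever the information it depends on is missing, before any practical action is launched:

```python
# Schematic sketch of the goal cascade described above. The goals, the
# lookup sources, and the stubbed answers are invented for illustration.

known = {}  # the agent's current state of information

def lookup(question, source):
    """An epistemic action: loop out to an external resource."""
    answers = {"showtime": "8pm", "bus": "7:30"}   # stubbed for the example
    print(f"epistemic action: consult {source} for the {question}")
    return answers[question]

def pursue_movie_goal():
    # The top-level goal is realized as a strong prediction ('I will see the
    # movie tonight'); missing information is resolved before motor action.
    if "showtime" not in known:
        known["showtime"] = lookup("showtime", "smartphone")
    if "bus" not in known:
        known["bus"] = lookup("bus", "the web")
    print(f"practical action: catch the {known['bus']} bus "
          f"to the {known['showtime']} showing")

pursue_movie_goal()
```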
According to this picture, knowledge-budgeting activities do not always imply the presence of extended cognitive machinery. But they do so whenever the sensorimotor loops involved in those activities have become sufficiently ingrained and well-practiced, and the relevant external resources are robustly available as and when such loops require them. The orangutan's stick, on this interpretation, is as much part of their extended cognitive economy as Otto's notebook. The differences are merely ones of timescale of information-harvesting and use. By contrast, my use of the old-fashioned encyclopaedias stored in my attic is unlikely to make the grade, as that resource is not robustly available as and when it is needed, unlike the use of my smartphone to look up the cinema times.

3 Solving a Recruitment Puzzle
The account on offer tracks the original vision of the extended mind, locating that vision within the broader framework offered by active inference formulations of predictive processing. But the original notion of the extended mind was developed without the benefit of a solid account of what brains do, and it left dangling a 'puzzle of recruitment': the puzzle of how just the right brainy, bodily, and worldly stuff comes together at just the right time. In Clark [2008: 137] I describe the recruitment puzzle like this:

[the extended mind story] bequeaths a brand-new set of puzzles. It invokes an ill-understood process of "recruitment" that soft-assembles a problem-solving whole from a candidate pool that may include neural storage and processing routines, perceptual and motoric routines, external storage and operations, and a variety of … cycles involving self-produced material scaffolding.
Effective recruitment is clearly the key to both fluent integration and cognitive extension. Somehow, the canny cognizer manages to activate or exploit, on the spot, whatever mix of problem-solving resources will yield the best result with the minimum of effort. When this meets the right additional conditions, the result (I claim) is an extended mind.
The answer to the recruitment puzzle is now clear. Predictive brains constantly estimate the extent to which taking an action (such as using the stick to probe for depth) will reliably reduce uncertainty in ways that help us approach our goals. The ability to make these estimations reflects the operation of a 'temporally deep predictive model'. Epistemic actions emerge naturally as part of this process, simply because apt actions in the here and now can be selected so as to serve the knowledge-budget, harvesting and deploying information that improves our chances of future success. Such actions minimize future prediction error: the error that would emerge if we failed to bring about our own goal states.
In the case of Otto, that means selecting the action of consulting the notebook (to get to MoMA, which is where Otto's brain predicts he will shortly be). In the case of another agent, it might mean retrieving the address from their highly reliable bio-memory. But each of these moves now arises in the same way and for the same deep reasons. The knowledge-budgeting activities of the predictive brain will simply factor in the availability of reliable internal and external operations and resources as they generate policies that minimize error in the pursuit of goals. In this way, the recruitment puzzle is solved by estimating which actions and policies (whether inner retrievals or ones implemented by means of perception-action loops) best resolve key uncertainties, thus reducing the distance between our 'optimistic predictions' [Van de Cruys et al. 2020], such as the prediction that we will arrive safely at MoMA, and current reality. Likewise, the agent who wanted to arrive at the Spike Lee movie on time discovered epistemic actions that retrieved information about times and transport. As this process unfolds, the epistemic possibilities provided by certain external items and resources, such as the smartphone, may gradually become woven into our daily lives in the additional ways required (again, see section 1 above) to count as true cognitive extensions.

4 On Conflict Resolution and Policy Selection
There are various details that now need ironing out. Assuming that your goal is fixed (get to MoMA, or see the new Spike Lee movie tonight), then systems that know about their worlds and that can select action to minimize expected future prediction error will indeed tend to discover effective routes to the goal. In such cases, the goal is encoded as a preferred future observation or (equivalently) what is sometimes known as a 'target prior'. In other words, the system strongly expects ('optimistically predicts') to find itself in the target state. Given that strong expectation, and a body of relevant world-knowledge (the structured generative model), a mixture of epistemic and practical actions will be inferred as the policy most likely to lead the system to occupy that state. Such policies will typically involve a number of sequentially ordered moves: determine show times, determine bus times, and so on. When a move becomes actionable (it's nearly time for the bus), it will be 'cashed out', thus bringing the system steadily closer to its goal (see, for example, [Pezzulo et al. 2015]).
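In the same notation as before, one common way of writing this (again a convention of the literature, not a formula from the present paper) makes the goal a prior over observations and selects policies by a softmax over their expected free energies:

\[
P(o_\tau \mid C) \;\propto\; e^{\,C(o_\tau)}, \qquad P(\pi) \;=\; \sigma\big(-\gamma\, G(\pi)\big),
\]

where C assigns high prior probability to the target observation (arriving at MoMA, sitting in the cinema), G(π) is the expected free energy displayed in section 2, σ is the softmax function, and γ is an inverse-temperature (precision) parameter governing how deterministically the best-scoring policy is selected.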
But what happens when there exist a number of competing goals, several of which might be progressed (at a given time) by some action or other?[4] In most cases, taking one of the actions will at least temporarily eliminate the possibility of taking the other. I may want to see the movie and to complete the next chapter of my new book. Installing each of these as a target prior does not, in and of itself, tell me what to do this afternoon. So how does the active inference agent choose which to pursue? How does it decide to minimize expected future error relative to one of the two goals (seeing the movie, let's say) rather than the other? Without a solution to this puzzle, the claim that active inference systems genuinely solve the recruitment puzzle, by spawning the right (extended or un-extended) processing regimes at the right time, may seem questionable.
Presenting a full solution to this puzzle (which is really a much more general puzzle about goal selection) is beyond the scope of the present paper.[5] But we can begin to appreciate the general shape of the solution by adding one final element to our explanatory mix: the use of so-called 'precision-weightings'. An active inference architecture does not merely harbour a selection of preferred future observations. It also assigns, and continually updates, a weighting (known as the 'precision-weighting') on both predictions and incoming sensory evidence (see, for example, [Feldman and Friston 2010]). The role of this precision-weighting is to enable extreme context-sensitivity in the use of the information stored in the generative model, temporarily giving increased influence to some predictions over others, or favouring some aspects of the incoming sensory evidence. Learning and adjusting these precision-weightings so as to deal with changing needs and contexts is part and parcel of learning the generative model that governs embodied interactions with the world.
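Schematically, and under the Gaussian assumptions usual in predictive-coding treatments (the notation here is a standard convention, not the paper's own), precision enters as the inverse variance Π that multiplies each prediction error in the quantity being minimized:

\[
F \;\approx\; \tfrac{1}{2}\,\varepsilon_o^{\top} \Pi_o\, \varepsilon_o \;+\; \tfrac{1}{2}\,\varepsilon_\mu^{\top} \Pi_\mu\, \varepsilon_\mu \;+\; \dots, \qquad \varepsilon_o = o - g(\mu), \quad \varepsilon_\mu = \mu - \eta,
\]

where g maps internal expectations μ to predicted observations o, and η encodes prior expectations. Raising Π_o gives the sensory evidence more influence over updating; raising Π_μ entrenches the prediction. Context-sensitive control of these weightings is what the text above calls precision-weighting.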
Applied to the case at hand, this provides the local mechanism by which one goal (predicted future state) can be temporarily positioned to take priority over another: for example, seeing the movie over writing material for the book. But it does not yet provide a recipe for assigning those precisions in the first place. It does not tell us why the 'seeing the movie' outcome is, for the moment, highly enough weighted to overcome the (equally actionable) 'writing the book' outcome. This is not a theoretical shortfall. Rather, it is an inevitable silence in the face of the idiosyncratic starting points and learning histories of different agents. Given some initial starting point (an individual's unique location in a high-dimensional state space of precision-weighted predictions), it is their lifetime experience that counts. That experience installs a model that encodes many preferred outcomes and adjusts local precision-weightings to ensure that multiple such outcomes can be secured. In the very broadest terms, I expect both to write the book and to enjoy the occasional movie, and the best way to satisfy both expectations (given everything else that I know) is estimated to involve seeing the movie today.
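A toy example may help fix ideas. In the sketch below (all numbers invented for illustration), two standing goals carry different precisions, and the afternoon's policy is simply whichever one leaves the least precision-weighted residual error:

```python
# Toy illustration of precision-based goal arbitration; the goals,
# precisions, and residual-error values are invented for this example.

precisions = {"see_movie": 2.5, "finish_chapter": 1.0}

# Residual error each policy would leave on each goal (1 = unmet, 0 = met).
policies = {
    "go_to_cinema":   {"see_movie": 0.0, "finish_chapter": 1.0},
    "stay_and_write": {"see_movie": 1.0, "finish_chapter": 0.0},
}

def weighted_error(policy):
    """Expected precision-weighted prediction error left by a policy."""
    return sum(precisions[g] * err for g, err in policies[policy].items())

best = min(policies, key=weighted_error)
print(best)  # 'go_to_cinema': today, the movie goal carries more precision
```

Re-assigning the precisions (as after the news about the ill colleague, below) flips the choice without any change to the standing goals themselves.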
The truly determined sceptic could now reformulate their worry, asking why I have set my preferred outcomes (preferred future observations) that way rather than some other way. Why did I set out to write a book at all? But here too, we must be silent. My preferred outcomes are simply the result of initial biases plus the effects of a lifetime of acting and learning. Those embodied interactions also explain why I sometimes alter the weightings on preferred outcomes over time. For example, I might hear of a valued colleague suffering a serious illness and find that I have altered the weightings on a whole host of activities, such as seeing the movie, so as to increase my chances of enjoying them while I can. Another agent might respond quite differently, upping the precision on getting the book completed as soon as possible. But for any active inference agent equipped with a specific set of precision-weighted prior beliefs, there will be some or other re-assignment of precisions consequent upon the flow of embodied interactions with the world.
With preferred outcomes at many timescales safely in place, local actionable goals are selected, and the active inference system recruits whatever mix of internal and external operations (such as recall from bio-memory, accessing a notebook, or using a smartphone) best minimizes expected future prediction error. This provides the mechanism by which minds and mental processing can at times extend seamlessly into the world.

5 Mind as Brain, Redux
But perhaps this is still a little too fast. It might be worried that, simply by affirming the key role of the brain in selecting and assembling the right set of resources at the right time, we have (contrary to the spirit of the extended mind hypothesis) re-selected the brain as the locus of all truly 'cognitive' effort. If the brain is indeed the primary seat of 'recruitment', doesn't that also make it the seat of mind and cognition?
This kind of worry was foreseen, and schematically resolved, in my own earlier work [Clark 2007, 2008]. The correct response, I argued (in [Clark 2008: 122] and elsewhere), is to firmly distinguish the process of 'recruitment' (the selection of the right resources at the right time) from the processing that then takes place in some extended array of resources. Within that extended array, ongoing flows of information and the transformation of information are best seen as cognitive processes in their own right.[6] The situation is akin to using one tool to make another tool, or to using a boot program to start up a computer. We use one set of cognitive processes (the brain-bound ones that serve recruitment) to assemble, on the spot, another cognitive process: a larger problem-solving array comprising a potent mixture of biological and non-biological resources. But once it is assembled, it is that larger array that solves the problem.

[6] For arguments that predictive processing and active inference here eliminate the claimed explanatory virtues of the extended mind perspective, see Hohwy 2016, 2017. For opposing arguments see Clark 2016, 2017a, 2017b, Constant et al. 2020, and Kirchhoff and Kiverstein 2021. Further ways of approaching the issues (mostly from the viewpoint of 'enactivism') are explored by Bruineberg and Rietveld [2014], Gallagher and Allen [2018], and Korbak [2021]. See also Roepstorff 2013, Hutchins 2014, Veissière et al. 2020, and Ramstead et al. 2021. For critiques and questions concerning the scope and nature of predictive processing accounts more generally, see Orlandi and Lee 2020, Teufel and Fletcher 2020, and Fabry 2021.

It is not the purpose of the present treatment to defend the explanatory virtues of treating such larger arrays as true loci of cognitive processing; to do so would simply be to revisit the entire set of arguments for the extended mind.[7] But it is perhaps worth noting that there is at least one way of understanding the notion of recruitment that must be avoided if we are to make room for the view that neural recruitment is compatible with extended cognition: the idea of recruitment as itself effortful and deliberative. Such deliberate, effortful gathering would work against the idea that extended regimes arise, like their internal biological counterparts, fluently and rapidly as task and context shift and alter.
For cognitive processing to extend, top-level agentive goals must instead act mostly as catalysts that cause larger organizations to arise and dissolve. Otto looks at the notebook because his top-level desire to go to MoMA seeds a rich processing cascade that is capable of fluidly selecting whatever combination of inner and outer operations best enables progress, a cascade that usually unfolds without much need for further conscious reflection or deliberation. This additional requirement was highlighted in a revealing footnote [Clark 2007: note 14] warning that talk of 'recruitment' is not meant to suggest the deliberate gathering of resources by a thoughtful agent. Instead:

new problem-solving organizations emerge in conformity with some cost function (or functions) whose effect is to favor the inclusion of certain resources (be they neural, bodily, or bio-external) and the exclusion of others. This cost function appears neutral with regard to the location or nature of the resource except in so far as such matters make some functional difference, that is, except in so far as they are apt to impact the relative cost of one assembly over another.
We can now affirm that the required location-neutral cost-function is nothing other than the minimization of expected future prediction error.
Distinguishing the core machinery of recruitment from the extended regimes that result also helps ease ongoing debates concerning the relevance of so-called 'Markov blankets' (a statistical measure of conditional independence) to the debate concerning cognitive extension. For it further demonstrates that, in studying cognition, there are [Clark 2017b] multiple systems of interest that may persist and come into being at many scales of space and time. Cognitively important Markov-blanketed systems may thus be multiply nested in ways that are stable at varying spatio-temporal scales. This works against the claim [Hohwy 2016, 2017] that the Markov blanket construct directly undermines arguments for the extended mind.[8]

In closing, it might also be useful to relate the present proposal to the much rougher outline of a solution presented in Clark [2017a]. That work pointed to the potential role of varying 'precision-weighting' as a means of fluently selecting specific resources and strategies when confronting a task or problem. This works because precision-weighting (see previous section) provides a single local mechanism for selecting both the inner circuitry and (via action) the external resources that together constitute extended cognitive processing. This is correct, as far as it goes. But it left unresolved the deeper question of how and why specific regimes emerge when they do. That deeper puzzle is also now resolved. Specific extended processing arrays are recruited and deployed in temporally correct fashion, without the need for deliberative agentive oversight, so as to best minimize expected future prediction error. As this process unfolds, the core machinery of recruitment is indeed 'brain-bound', consisting in neural estimations of the combinations of inner processing and worldly epistemic action that best minimize expected future error. But the larger processing arrays that emerge may also count as true cognitive circuitry, by meeting the additional constraints sketched in section 1 and explored more fully in Clark [2008]. The emergence of these larger arrays is not itself typically a matter of agentive deliberation. An extended mind serves agentive goals (Otto wants to get to MoMA, after all). But extended coalitions arise and dissolve rather fluidly and automatically, as do webs of activity arising purely within the biological brain.
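For readers unfamiliar with the Markov blanket construct invoked above, the conditional-independence claim it encodes can be stated compactly (this gloss is standard in the literature and is added here for convenience, rather than drawn from the present paper):

\[
P(\mu \mid b, \eta) \;=\; P(\mu \mid b),
\]

where μ are internal states, η are external states, and the blanket b is itself standardly partitioned into sensory and active states. Nothing in the definition fixes a unique or privileged boundary, which is why blanketed systems can be nested at many spatio-temporal scales in the way just described.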

6 Conclusions: the Continuity of Cognitive Extensions
Predictive brains that minimize expected future prediction error reliably spawn well-timed epistemic actions, in the form of sensorimotor engagements that actively seek out information that will help us achieve our goals. In this way, predictive brains enable the larger problem-solving arrays characteristic of extended minds.
Not all such arrays count as extending the mind; additional constraints need to be met. But it is by minimizing expected future prediction error in this way that human brains solve the 'recruitment puzzle' highlighted in Clark [2007, 2008], selecting and orchestrating the right set of bio-internal and bio-external resources at the right time. The resulting mesh of practical and epistemic actions provides the best way to minimize expected future prediction error: the error that would emerge under other policies and with different selections of local action.[9] Active inference agents of this stripe automatically select embodied actions that aim at desired goals (highly predicted future observations) by resolving uncertainties and increasing their state of information. When Otto consults his notebook on the way to MoMA, expected future prediction error (the error that would dominate under different policies) plummets.
Importantly, epistemic actions here emerge in just the same way, and for just the same reasons, as do other, more immediately practical actions. Indeed, we may now appreciate the deep and perhaps unexpected continuity between simple uncertainty-minimizing perception-action loops (such as the orangutan's use of a stick to probe the depth of the water) and much more complex, distinctively human strategies, such as using a smartphone to help us find our destination.
The common imperative to minimize expected future error provides exactly the kind of 'location-neutral cost-function' anticipated in Clark [2007], but one that was until now a mere place-holder in the literature on the extended mind. This location-neutral cost-function enables human technologies to take over, transform, and augment many of the functions once carried out by our biological brains. Extended human minds are manifestations of an inner prediction-driven process that sweeps in the use of worldly resources as easily, and for exactly the same reasons, as it launches practical actions and engages different aspects of its own inner circuitry.