The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness.

How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has accumulated, and many theories have been proposed. Some theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher-order representations to lower-order ones, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Information Integration Theory (IIT) and Recurrent Processing Theory (RPT), identify consciousness with causal structure. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.

A science of consciousness must explain how physical states give rise to conscious states and vice versa. To do so, a number of theories propose that the essential element for understanding consciousness is how parts of a system interact. If a system has the "right" kind of causal structure, in other words, if its elements interact in the "right" way, it is conscious. Otherwise, it is not. We call such theories causal structure theories. For example, in Recurrent Processing Theory (RPT), Lamme proposed that recurrent processing is both necessary and sufficient for consciousness (Lamme, 2006). The first sweep of visual feedforward processing is unconscious. Consciousness kicks in when recurrent, top-down processing interacts with neurons activated during the initial feedforward sweep (Lamme, 2006). According to RPT, what matters is the causal structure because consciousness depends only on how neurons interact with each other: when there is recurrent processing there is consciousness, and there is no consciousness otherwise. Empirical support for RPT was proposed to be provided by neurophysiological experiments (Fig. 1c) in which recurrent processing enhanced neural activity in V1 when visual stimuli were consciously perceived. When the stimuli were not consciously perceived (during anaesthesia or when the stimuli were masked), there was no recurrent processing (Fahrenfort, Scholte, & Lamme, 2007).
Information Integration Theory (IIT) is another example of a causal structure theory of consciousness. IIT proposes that an information integration measure called ϕ, which is computed based on the causal structure of a system, quantifies consciousness (Oizumi, Albantakis, & Tononi, 2014). Consciousness is identified with ϕ > 0 systems: if elements of a system interact in the "right" way, the system has ϕ > 0 and is conscious. If ϕ = 0, it is unconscious. For example, ϕ is always greater than zero in recurrent systems (they are always conscious) and always equal to zero in feedforward systems (they are never conscious). Empirical support for IIT was asserted to be provided by studies showing that a practical proxy of ϕ is low in coma, intermediate in minimally conscious states, and maximal during wakefulness (Casali et al., 2013; Tononi, Boly, Massimini, & Koch, 2016).
IIT and RPT were amongst the first theories of consciousness to make precise predictions about which systems are conscious. As such, they contributed greatly to the advancement of the science of consciousness. However, we will show that causal structure theories end up in an empirical impasse for principled reasons: they are either false or outside the realm of science.

The unfolding argument
Recurrent neural networks are universal function approximators (Fig. 2; Schäfer & Zimmermann, 2006). That is, any input-output function can be approximated to any degree of accuracy. Vision is such an input-output function. For example, pictures of animals are presented as inputs on the retina, and the outputs are the elicited percepts of animals (or reports about these percepts). Likewise, the stimuli in a visual masking experiment are inputs, and the outputs may be button presses, verbal reports or any other measure shown to reliably correlate with subjective reports. Importantly, experiments which intervene directly on the brain, for example using implanted electrodes or Transcranial Magnetic Stimulation (TMS), still probe input-output functions. The only difference is that part of the input is provided by means of electrodes or TMS rather than through the sensory organs.
Feedforward neural networks are also universal function approximators (Fig. 2; Hornik et al., 1989; Schäfer & Zimmermann, 2006): they can be used to generate any desired input-output function to any degree of accuracy using a finite number of neurons. Therefore, for a given input-output function we can find both feedforward and recurrent networks that realize the same function in different ways (LeCun, Bengio, & Hinton, 2015; Oizumi et al., 2014; Werbos, 1988). For instance, if there is a recurrent network that performs image recognition, there is an equivalent feedforward network that does it equally well. If there is a recurrent network that exhibits the characteristics of binocular rivalry, there is an equivalent feedforward network that does so too. If there is a recurrent network that takes a collection of spike trains as input and outputs another collection of spike trains, there is an equivalent feedforward network that does the same thing. Anything that can be done by recurrent networks can also be done in a feedforward manner (Fig. 2). We call this unfolding: any recurrent network can be unfolded into a feedforward network implementing the same function. In particular, any behavioural experiment can be seen as an input-output function, and can thus be implemented by both recurrent and feedforward networks.

Fig. 2. Unfolding. Recurrent and feedforward neural networks are universal function approximators: any input-output function can be approximated to any degree of accuracy using a finite number of neurons. Therefore, for any recurrent network with a given input-output behaviour, there are corresponding feedforward networks with the same characteristics (although feedforward networks often need many more neurons than their recurrent counterparts). For example, recurrent networks performing image recognition, exhibiting binocular rivalry, and processing spike trains all have feedforward equivalents (see main text).
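The unfolding of a recurrent network can be made concrete with a small numerical sketch (ours, for illustration only; the weights, sizes and tanh nonlinearity are arbitrary choices, not from the paper). A recurrent hidden layer iterated for T time steps computes exactly the same input-output function as T stacked feedforward copies of that layer, where the former feedback weights now connect copy t to copy t + 1:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hid = 5, 3, 4
W = 0.5 * rng.normal(size=(n_hid, n_hid))  # hidden-to-hidden (feedback) weights
U = 0.5 * rng.normal(size=(n_hid, n_in))   # input weights
V = rng.normal(size=(1, n_hid))            # readout weights

def recurrent(xs):
    """Recurrent network: one hidden layer fed back onto itself for T steps."""
    h = np.zeros(n_hid)
    for x in xs:
        h = np.tanh(W @ h + U @ x)         # feedback: h is fed into itself
    return V @ h

def unfolded(xs):
    """Unfolded network: T distinct feedforward layers; layer t + 1 receives
    layer t (via W) and input x_{t+1} (via U). No unit ever feeds back."""
    h = np.tanh(U @ xs[0])                 # layer 1: new, separate units
    for x in xs[1:]:
        h = np.tanh(W @ h + U @ x)         # each step uses fresh units
    return V @ h

xs = rng.normal(size=(T, n_in))
assert np.allclose(recurrent(xs), unfolded(xs))  # identical input-output behaviour
```

The two functions agree exactly here because unrolling a finite number of time steps is an identity transformation; the approximation theorems cited above are only needed when recurrent dynamics must be matched over unbounded time.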
Any input-output behaviour can be implemented not only by one particular feedforward network, but also by infinitely many equivalent feedforward networks and by infinitely many equivalent recurrent networks, because the universal approximator property does not depend on structural details such as the number of layers or on the precise connectivity. In fact, given an input-output function, we can find infinitely many networks, each with a different ϕ, that all realize the same input-output function (see Appendices A and B; see also Chalmers, 2018). Moreover, the universal function approximator property is not restricted to neural networks but also holds true for Turing machines, cellular automata, cyclic tag systems, and more generally for any universal computing system (Turing, 1937; Wolfram, 2002). These facts are uncontroversial and widely accepted (including by proponents of IIT: see Oizumi et al., 2014).
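In the linear case, the claim that structurally different networks can realize one and the same function can be checked exactly (a toy sketch of ours; real networks are nonlinear, but linearity keeps the equivalence exact rather than approximate). Any target map A can be factored through an arbitrary chain of random invertible matrices, giving networks of any depth with entirely different weights but identical input-output behaviour:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target input-output function: a fixed linear map F(x) = A @ x.
A = rng.normal(size=(2, 3))

def make_equivalent_net(depth):
    """Build a depth-layer linear network computing A: the first depth - 1
    layers are random invertible matrices, and the last layer cancels them."""
    Ws = [rng.normal(size=(3, 3)) for _ in range(depth - 1)]  # invertible a.s.
    prod = np.eye(3)
    for W in Ws:
        prod = W @ prod                    # accumulated map of the early layers
    Ws.append(A @ np.linalg.inv(prod))     # final layer restores A overall
    return Ws

def run(Ws, x):
    for W in Ws:
        x = W @ x
    return x

x = rng.normal(size=3)
for depth in (1, 3, 6):                    # structurally different networks
    assert np.allclose(run(make_equivalent_net(depth), x), A @ x)
```

Each call to `make_equivalent_net` draws fresh random matrices, so the construction yields as many distinct networks with the same function as one cares to generate.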

Implication I: causal structure theories are doubly dissociated from empirical data
An implication of the unfolding argument is that causal structure theories are either false or outside the realm of science. In other words, causal structure theories are doubly dissociated from empirical data: they are neither necessary nor sufficient to explain empirical data (see Fig. 3). For example, according to IIT, the level of consciousness varies with ϕ. A system is conscious if, and only if, ϕ > 0 (oblique red arrows in Fig. 3). 1 An experiment can be seen as an input-output function (Fig. 2). Hence, for any recurrent system with ϕ > 0 that reproduces the outcome of an experiment, there are feedforward systems with ϕ = 0 that also reproduce the outcome. According to IIT, one system has consciousness but the other does not. Conversely, for any feedforward system with ϕ = 0, there are recurrent systems with ϕ > 0 that produce the same experimental results. In fact, we show in Appendix A how to implement any function with ϕ = 0 or with arbitrarily high ϕ. That is to say, for each system that provides evidence for IIT, there are other possible systems that falsify it.
This argument generalizes to any causal structure theory, not just IIT. All causal structure theories are doubly dissociated from empirical results about consciousness. Hence, it makes no sense to provide experimental evidence for causal structure theories. For example, the figure-ground experiment mentioned earlier cannot support or be explained by RPT because there are feedforward networks that make the same subjective reports as humans when they consciously see the figure, but are unconscious according to RPT. Conversely, there are networks with recurrent activity that make the same subjective reports as humans when they do not consciously perceive the figure, but are conscious according to RPT.

Fig. 3. Double dissociation. Causal structure theories aim to explain empirical data about consciousness. For example, IIT proposes that ϕ increases gradually as we go from unconscious to conscious states, e.g., from sleep to drowsiness to wakefulness (vertical blue arrows). However, the unfolding argument shows that we can completely reverse the picture and, for example, implement conscious states with ϕ = 0 neural networks and unconscious states with high-ϕ neural networks. Hence, ϕ is neither necessary (wakefulness can be implemented with ϕ = 0; downwards oblique red arrow) nor sufficient (sleep can be implemented with high ϕ; upwards oblique red arrow) to explain experimental results about consciousness. The same reasoning applies to RPT's figure-ground experiments (Fahrenfort et al., 2007). Therefore, causal structure is doubly dissociated from empirical data, i.e., it is neither necessary nor sufficient to account for experiments. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Likewise, the finding that awake humans have higher ϕ than sleeping humans cannot be explained by or support IIT because there are feedforward networks with human wakefulness characteristics, and recurrent networks with human sleep characteristics.
Our arguments are not only of an abstract mathematical nature. In real life, there are many examples where feedforward and recurrent networks realize the same complex functions. For example, deep reinforcement learning has been implemented with purely feedforward convolutional networks to achieve super-human performance in Atari video games (Mnih et al., 2013). Hausknecht and Stone (2015) replicated this superhuman performance using recurrent networks. The unfolding theorems tell us that this is not surprising because we can always find equivalent feedforward and recurrent networks. These systems are empirically identical (to a close approximation). One is conscious but the other is not, according to causal structure theories. Moreover, unfolding provides a recipe to build two small robot systems with exactly identical input-output functions but different causal structure (see Appendices A and B). Experiments on one robot support the theory; experiments on the other falsify it.
The unfolding argument shows that there are always systems that empirically falsify causal structure theories. Proponents of IIT try to avoid this problem by claiming that systems with ϕ = 0 are unconscious despite being empirically indistinguishable from conscious systems (Oizumi et al., 2014). We will show next that this claim makes IIT circular, and therefore unfalsifiable. In other words, causal structure theories are falsified by the unfolding argument, unless they decide to become unfalsifiable.
For example, proponents of IIT may still insist that the robot with ϕ = 0 is unconscious whereas the one with ϕ > 0 is conscious, owing to their differing causal structure. However, such a proposition quickly ends up in circularity because we have no criteria to settle the matter. In particular, we have no empirical criteria because experimental results about consciousness are all identical for the two robots. The only reason to believe that only the ϕ > 0 robot is conscious is to already believe in IIT, but this is circular. The situation is even worse: there are many causal structure theories, such as IIT and RPT. Which one is the "right" one? Even within IIT, the axioms do not uniquely determine ϕ (Barrett & Mediano, 2019; Bayne, 2018), and different empirical measures of ϕ yield very different results (Mediano, Seth, & Barrett, 2018). Which version of IIT is the "right" one? We can never decide because we have no criteria to test the theories and pit their predictions against each other. We are left with the conclusion that there are different types of "consciousness" (i.e., consciousness_IIT_version_1, …, consciousness_IIT_version_n, consciousness_RPT, and so on), depending on which theory we favour. Insisting that the robot with ϕ = 0 is unconscious whereas the one with ϕ > 0 is conscious even though they are empirically identical leads IIT outside the realm of empirical science.
To summarize the unfolding argument, the conclusion follows from four premises.

(P1): In science we rely on physical measurements (based on subjective reports about consciousness).

(P2): For any recurrent system with a given input-output function, there exist feedforward systems with the same input-output function (and vice versa).

(P3): Two systems that have identical input-output functions cannot be distinguished by any experiment that relies on a physical measurement (other than a measurement of brain activity itself or of other internal workings of the system).

(P4): We cannot use measures of brain activity as a priori indicators of consciousness, because the brain basis of consciousness is what we are trying to understand in the first place.

(C): Therefore, EITHER causal structure theories are falsified (if they accept that unfolded, feedforward networks can be conscious), OR causal structure theories are outside the realm of scientific inquiry (if they maintain that unfolded feedforward networks are not conscious despite being empirically indistinguishable from functionally equivalent recurrent networks).

Examples
Imagine that one could surgically replace the brain's native recurrent sound processing system with an equivalent feedforward implant. The implant takes the same collection of spike trains as inputs, and outputs the same collection of spike trains as the native brain areas. We know that such implants exist in principle because of the previously mentioned unfolding theorems. Even though the causal structure in the new implant is completely different, the rest of the brain does not notice any difference. 2 The brain can do its normal job. This means that all subjective reports by the person are identical before and after the surgery. The person will claim all the same things about sound as before the implant was placed, such as "I hear the drizzle of the rain, it is music to my ears", or "I understand what you are saying", etc. In particular, any experiment about which sounds are consciously perceived will yield exactly the same results as with the native brain area. Therefore, we end up with the dilemma mentioned earlier: either causal structure theories are wrong (if they accept that there is still auditory consciousness with the implant), or they are outside the realm of science (if they claim that consciousness is different with and without the implant even though there are no empirical differences).
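A toy version of this implant thought experiment can be sketched numerically (our illustration; the "auditory module" is just a small random recurrent network, and "verbal report" a linear readout standing in for the rest of the brain). The native recurrent module and its unrolled feedforward implant map identical input spike patterns to identical output patterns, so every downstream readout, and hence every report, is unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
W_rec = 0.4 * rng.normal(size=(n, n))  # feedback weights of the native module
W_in = rng.normal(size=(n, n))         # input weights
W_post = rng.normal(size=(2, n))       # "rest of the brain" reading the module

def native_module(sound):
    """Native recurrent auditory module: spike patterns in, spike patterns out."""
    h = np.zeros(n)
    for s in sound:
        h = np.tanh(W_rec @ h + W_in @ s)   # recurrent loop
    return h

def implant_module(sound):
    """Feedforward implant: the same computation unrolled into one layer per
    time step; no unit ever projects back onto itself."""
    h = np.tanh(W_in @ sound[0])
    for s in sound[1:]:
        h = np.tanh(W_rec @ h + W_in @ s)   # connects layer t to layer t + 1
    return h

def verbal_report(module, sound):
    """Downstream areas only see the module's outputs, so reports about the
    sound are identical whichever module is installed."""
    return W_post @ module(sound)

sound = rng.normal(size=(6, n))
assert np.allclose(verbal_report(native_module, sound),
                   verbal_report(implant_module, sound))
```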
We can push the example further to entire brains. Since anything that can be done with a recurrent network can also be done with a feedforward network, there could be «feedforward brains» that behave exactly like human brains. Such systems would have all the same functional characteristics as a normal human brain, but completely different causal structure. They behave exactly like a human in all respects, passing the Turing test seamlessly. However, according to causal structure theories, they are not conscious because they do not have the "right" kind of causal structure.
Crucially, these systems respond to any empirical experiment exactly like humans. For example, they identically describe what it is like for them to see red, hear sounds, have memories, and so on. They respond to all scientific paradigms (such as masking, binocular rivalry, figure-ground segmentation, etc.) in exactly the same way. They exhibit the same wakefulness characteristics and the same sleep characteristics. In summary, no behavioural experiment can distinguish between human brains and feedforward brains in principle. Therefore, either causal structure theories are wrong or they are outside the realm of science.

Implication II: conscious content in IIT is doubly dissociated from experiments
In causal structure theories the content of consciousness is also doubly dissociated from empirical observations. For example, we can construct systems that behave as having experience X when, according to IIT, they are in fact experiencing Y (see Appendix C). For example, a system participating in a rivalry experiment may report that it is seeing the cat image when, according to IIT, it is experiencing the smell of ham. In principle, as shown in the appendix, it can experience any content of consciousness while reporting that it sees a cat. Of course, it could also experience seeing a cat, but this would just be a coincidence, showing a double dissociation.
There is a straightforward reason why causal structure theories are vulnerable to the kind of arguments presented here. All that is required for a system to be conscious is a particular causal structure. At the same time, any function can be implemented by many different systems with different causal structures. Hence, there can be no consistent link between causal structures and experimental results.

Network efficiency & evolutionary constraints
In practice, the brain has to cope with very strong space and energy constraints: processing must be efficient enough to be contained within a small skull, and energy consumption must be limited. Experiments have suggested that, indeed, sufficiently complex tasks can strongly constrain network properties when the number of neurons is limited (Khaligh-Razavi & Kriegeskorte, 2014; Nayebi et al., 2018; Yamins et al., 2014). In general, feedforward networks require many more neurons to implement a function than equivalent recurrent networks with more efficient causal structure, and are therefore impractical (though not always: for instance, image recognition is more efficiently implemented in feedforward convolutional networks). In this regard, causal structure theories may turn out to provide good markers for consciousness. For example, high ϕ has obvious functional benefits, such as efficiently integrating information. We argue that awake brains have high ϕ for this functional reason. Hence, causal structures may be good correlates for consciousness in humans not because they are identical with consciousness, but because they correlate well with neural information processing in general, which happens to covary with conscious state in humans as a contingent rule. This explains why causal structure theories may provide human consciousness-meters (see for example Casali et al., 2013). However, it is an entirely different thing to identify consciousness with causal structure. In short, brains are recurrent because brain processing must fit inside a skull, not because consciousness is identical with the brain's causal structure.

The unfolding argument vs. the zombie argument
The zombie argument is a well-known argument aiming to show that physicalist and functionalist theories cannot account for consciousness. Imagine unconscious "zombies" who are empirically indistinguishable from conscious "non-zombie" equivalents. For example, if a zombie inadvertently places its hand on a hot stove, it yells "ouch" and immediately retracts its hand, just as the non-zombie does. However, the zombie does not feel pain. The third-person, observable properties are all identical between zombies and non-zombies, but consciousness differs. Whether or not zombies are in fact possible is heavily debated (e.g., Dennett, 1991). However, if they are, it follows that consciousness cannot be explained in the standard, functionalist framework of science (this is the hard problem of consciousness; Chalmers, 1996). Indeed, if functionally identical systems (the unconscious zombie and its conscious non-zombie equivalent) can have different consciousness, then functional approaches cannot explain consciousness.
The unfolding argument is very different from the zombie argument for two reasons. First, the zombie argument aims to dismiss all physicalist accounts of consciousness, including functional ones. In contrast, the unfolding argument only targets causal structure theories of consciousness, and not physicalist or functionalist theories in general. In fact, the unfolding argument favours functionalist theories because (un)folding a network changes only its causal structure but not its function or physical nature. Other major theories of consciousness, such as Global Workspace Theory (GWT; Baars, 1997; Dehaene & Naccache, 2001), Higher-Order Thought Theory (HOTT; Lau & Rosenthal, 2011; Rosenthal, 2004) and Predictive Processing Theory (PPT; Friston, 2013), are not affected by the unfolding argument, as we show in the next subsection.
Second, one can choose to dismiss the zombie argument by claiming that zombies are in fact not possible (e.g., Dennett, 1991). In contrast, the existence of unfolded systems is a straightforward mathematical fact, and not a mere thought experiment. In fact, unfolding provides a recipe for creating empirically identical networks with different causal structures (for example, with arbitrarily high ϕ; see Appendices A and B). As mentioned, there are even real-world cases of feedforward and recurrent agents performing the same complex task (see Section 2.1). Hence, for example, the fact that we have never observed an unfolded cortex in practice (and probably never will) is not by itself a sufficient argument to call the unfolding argument into question. Furthermore, even though unfolded brains are impractical, we explicitly showed that the unfolding argument does not rely only on unfolded whole brains (see the previous example with the auditory system). This example can be scaled down again to the smallest part of the brain proposed to be relevant for consciousness by a given causal structure theory.

A. Doerig, et al., Consciousness and Cognition 72 (2019) 49-59

Non-causal structure theories of consciousness are not subject to the unfolding argument

Global Workspace Theory (GWT; Baars, 1997; Dehaene & Naccache, 2001), Higher-Order Thought Theory (HOTT; Lau & Rosenthal, 2011; Rosenthal, 2004) and Predictive Processing Theory (PPT; Friston, 2013) are examples of functionalist theories of consciousness: they focus on functions proposed to be crucial for consciousness. The unfolding argument does not apply to these theories because they propose that systems are conscious insofar as they implement the right kind of function, independently of the causal structure. Of course, these theories are usually couched in terms of recurrent or top-down processing, or other seemingly causal-structure terminology, but they can be formulated in other kinds of networks too.
The unfolding argument only applies to theories in which recurrence per se (or another proposed causal structure) is necessary and sufficient for consciousness.
For example, the typical description of GWT is that consciousness occurs when cortical areas, which code for certain contents of consciousness (e.g., sensory areas), "broadcast" their information in a global neuronal workspace that consists of highly recurrent fronto-parietal areas, thus making these contents globally available for widespread use by other areas. The crucial functions here are (a) the creation of contents in sensory areas and (b) making these contents globally available for widespread use by other areas (i.e., broadcasting). GWT is usually explained with recurrent networks. Still, equivalent feedforward networks can maintain the same broadcasting function (see Fig. 4 for a toy model). 3 Similar toy models are easily produced for HOTT, PPT and all other functionalist theories. What matters is the function, e.g., broadcasting, but not the (neural) implementation. In summary, functionalist theories differ importantly from causal structure theories in that they propose functions as crucial for consciousness, independently of their implementations. For example, an unfolded global workspace network retains the crucial function of broadcasting. Hence, by their very nature, functionalist theories are not subject to the unfolding argument.
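The toy model of Fig. 4 can be sketched as follows (our illustrative code; weights and dimensions are arbitrary). Two feedforward sensory modules write into a "workspace" whose layers simply copy their content forward, so the same information remains available for any consumer process to query at any layer; the broadcasting function survives without recurrence:

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda z: np.maximum(z, 0.0)

# Two feedforward sensory modules (e.g. two modalities), arbitrary weights.
W_vis = rng.normal(size=(8, 4))
W_aud = rng.normal(size=(8, 4))

# Consumer processes that may "query" the workspace (report, memory, ...).
W_report = rng.normal(size=(2, 16))
W_memory = rng.normal(size=(3, 16))

def global_workspace(vis, aud, n_layers=4):
    """Feedforward 'global workspace': sensory outputs are concatenated, then
    copied layer-to-layer by identity connections, so the content is globally
    available at every layer (broadcasting) without any feedback loop."""
    content = np.concatenate([relu(W_vis @ vis), relu(W_aud @ aud)])
    return [content.copy() for _ in range(n_layers)]  # one copy per layer

layers = global_workspace(rng.normal(size=4), rng.normal(size=4))

# Any consumer can query any layer and read out the broadcast content:
report = relu(W_report @ layers[0])
memory = relu(W_memory @ layers[-1])
assert all(np.allclose(layers[0], l) for l in layers)  # same content everywhere
```

The design choice mirrors the figure: "maintaining activity through time" becomes depth rather than recurrence, which is exactly why the workspace function is preserved under unfolding.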

Unfolding & the correlation approach
For many researchers, consciousness is non-physical and cannot be studied using input-output functions, and therefore might be impossible to explain with standard neuroscience (Chalmers, 1996). In this case, it is impossible to study consciousness directly. However, consciousness may still be linked to neural states by bridging principles based on correlations (Chalmers, 2004; Varela, 1996). For example, we may find correlations between human reports about their conscious experience (first-person data: I am experiencing a face) and observable properties of the brain (third-person data: neural activity in the fusiform face area).

Fig. 4. A feedforward toy model of Global Workspace Theory. First, two feedforward networks process incoming sensory information, representing for example two different sensory modalities. Both project to the "global workspace", which is simply another feedforward network maintaining activity through time by copying each layer to the next. At any time step (i.e., at any layer), the global workspace may be "queried" by one or several other processes, thus making the information globally available for other areas (broadcasting). Each of these other areas can be implemented simply by applying one feedforward network implementing the relevant function to the "global workspace". Hence, this model fulfills the crucial functions proposed by GWT with a different causal structure, and the unfolding argument does not apply. There are many other ways to implement GWT in a feedforward fashion too. These networks can be "folded" back into recurrent networks and retain the same crucial functions. Hence, GWT can be implemented equivalently in recurrent and feedforward networks and does not face the unfolding argument.
To quote Chalmers (2004): «In the case of consciousness, we can expect systematic bridging principles that underlie and explain the covariation between third-person data and first-person data.» This approach cannot hold for causal structure theories because of the unfolding argument (Fig. 5). As mentioned, there are infinitely many equivalent systems that produce exactly the same first person reports as humans, but with completely different causal structures. Therefore, linking conscious properties with the brain's causal structures by relying on first person reports cannot succeed (Fig. 5). The correlation approach cannot work with causal structure theories, although it may (or may not) succeed for other theories of consciousness.

Concluding remarks
To be considered scientific, IIT and other causal structure theories require empirical support. However, the unfolding argument shows that they are either false or outside the realm of science. For the same reason, different causal structure theories cannot be compared with each other. For example, different mathematical formulations of IIT's axioms lead to different predictions about which systems are conscious, but we cannot compare them because the predictions are doubly dissociated from empirical data. Proponents of IIT have previously acknowledged that feedforward and recurrent networks can be functionally equivalent but have different consciousness, according to IIT (Oizumi et al., 2014). In other words, they share the same uncontroversial starting point as we do. However, conclusions differ strongly. Proponents of IIT suggest that this should prompt us to focus on the subjectivity of consciousness. In contrast, we conclude that adopting a causal structure theory precludes any experimental approach to consciousness. Indeed, we have shown that all possible experimental results, including the ones focussing on subjectivity, do not depend on causal structure.
The unfolding argument rules out a class of explanations of consciousness wherein consciousness supervenes on causal structures. This should prompt us to turn our attention elsewhere in trying to understand consciousness. In this respect, the unfolding argument suggests that consciousness must be explained on a more abstract level than that of neural wiring. Indeed, any proposed framework based on neural connections suffers from the unfolding argument: any network can be replaced by equivalent feedforward networks with different connections that lead to identical empirical observations about consciousness. Only theories that abstract away implementation details and focus on explaining which kinds of functions are important for consciousness can avoid these challenges. To remain within the realm of science, consciousness must be described in terms of what it does, and not how it does it.

Fig. 5. (a) The correlation approach proposes that we can find correlations between physical properties (such as the brain's causal structure, left) and first-person reports about conscious states (right). If this is true, we can have a theory of consciousness even if conscious states are taken as unobservables. (b) However, the unfolding argument shows that there are infinitely many equivalent systems leading to the same first-person reports but with different causal structures. Therefore, the correlation approach cannot be used to link causal structure theories with empirical data.

(a) Networks with identical function and arbitrarily high ϕ. Left: A feedforward network implementing a function F with ϕ = 0. Right: A network implementing the same function F with arbitrarily high ϕ. In the "original" feedforward network (left), the function F can be decomposed into two successive functions: a first function f_α from the network input to layer L_α, and a second function f_β from L_α to the output layer. We have f_β ∘ f_α = F, with ϕ = 0.
Now, we increase ϕ by inserting a ϕ-increasing device after layer L_α and subsequently counteract its functional effect using a feedforward network (orange neurons). In the modified network (right), there is the same function f_α from the network inputs to the activities of layer L_α (bottom blue part). Then, processing continues in the ϕ-increasing device, which implements a function g. g is cancelled by a subsequent feedforward network, which implements the function g⁻¹ (g⁻¹ exists on the image of g if g is injective). These steps are represented in orange. Lastly, the rest of the network is unchanged and implements the function f_β from the output of this entire process to the network's output (upper blue part). We have f_β ∘ g⁻¹ ∘ g ∘ f_α = F, with an arbitrarily high ϕ > 0. The ϕ of the network is entirely determined by the ϕ-increasing device because all other parts of the network are feedforward. (b) Networks with identical function and arbitrary phenomenology. Left: The same feedforward network as in (a), implementing the function F with ϕ = 0 (so there is no conscious content). We have f_β ∘ f_α = F, with no conscious content. Right: Because IIT provides a principled way to determine which connectivities and states produce which qualia, we can slightly modify the example above to give arbitrary qualia to networks with the same function. We play the same trick as before but, instead of inserting a ϕ-increasing device after layer L_α, we insert a network proposed by IIT to produce certain qualia (for example, the quale of smelling ham). This network applies the function h to the outputs of layer L_α. Just as before, the function can be inverted by a feedforward network implementing h⁻¹ (h⁻¹ exists if h is injective). Overall, we now have f_β ∘ h⁻¹ ∘ h ∘ f_α = F, with the quale of smelling ham (or any other quale).
Using a feedforward network to cancel the functional effect of inserting our quale-of-smelling-ham network does not simultaneously abolish the quale of smelling ham because, being feedforward, it is not part of the maximal ϕ complex. We could replace the quale of smelling ham by any other quale, for example, by the quale of seeing a cat. However, this would be just a coincidence, showing that conscious percepts and input-output functions are doubly dissociated.
As mentioned, any function can be implemented by many different systems, with different causal structures. Hence, there are many examples of systems performing the exact same function with arbitrary qualia.
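The cancellation trick behind these constructions can be checked numerically (our sketch: in Appendices A and C the inserted device is a recurrent, high-ϕ or quale-generating network, whereas here an invertible linear map G merely stands in for the injective function g, purely to show that composing g with a feedforward g⁻¹ leaves the overall function untouched):

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda z: np.maximum(z, 0.0)

# Original feedforward network F = f_beta ∘ f_alpha (ϕ = 0 in the paper's setup).
W_a = rng.normal(size=(5, 3))
W_b = rng.normal(size=(2, 5))
f_alpha = lambda x: relu(W_a @ x)
f_beta = lambda h: W_b @ h

# Inserted device implementing an injective function g, followed by a
# feedforward stage computing g⁻¹ on its image (here G is a random square
# matrix, invertible with probability 1).
G = rng.normal(size=(5, 5))
g = lambda h: G @ h
g_inv = lambda h: np.linalg.solve(G, h)

F = lambda x: f_beta(f_alpha(x))
F_modified = lambda x: f_beta(g_inv(g(f_alpha(x))))  # f_beta ∘ g⁻¹ ∘ g ∘ f_alpha

x = rng.normal(size=3)
assert np.allclose(F(x), F_modified(x))  # identical input-output function
```

Whatever the inserted device computes internally, as long as it is injective it can be cancelled downstream, which is why the modified network's behaviour, though not its causal structure, is identical to the original's.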