Evaluating the explanatory power of the Conscious Turing Machine

The recent “Conscious Turing Machine” (CTM) proposal offered by Manuel and Lenore Blum aims to define and explore consciousness, contribute to the solution of the hard problem, and demonstrate the value of theoretical computer science with respect to the study of consciousness. Surprisingly, given the ambitiousness and novelty of the proposal (and the prominence of its creators), CTM has received relatively little attention. We here seek to remedy this by offering an exhaustive evaluation of CTM. Our evaluation considers the explanatory power of CTM in three different domains of interdisciplinary consciousness studies: the philosophy of mind, cognitive neuroscience, and computation. Based on our evaluation in each of the target domains, we conclude that, at present, any claim that CTM constitutes progress is premature. Nevertheless, the model has potential, and we highlight several possible avenues of future research which proponents of the model may pursue in its development.


Introduction
In a recent paper, Blum and Blum (2022) present the Conscious Turing Machine (CTM) as a "single abstract substrate-independent model of consciousness" (Blum & Blum, 2022, p. 1). One of the major selling points for the explanatory appeal of the CTM model can be found in the latter two thirds of its name: Conscious Turing Machine. At the outset, it seems to assert that Turing Machines (the most widely studied mathematical model of computation) can inspire a model of consciousness. There are several reasons why this prospect is attractive for the scientific study of consciousness. First and foremost, it would anchor interdisciplinary consciousness research in theoretical computer science (TCS), and enable the former to draw from the rich insights that the latter has produced for nearly a century; from the foundational work by Gödel (1931), Church (1936), and Turing (1936) regarding the nature of computability, to the vast body of computational complexity results which concern the intrinsic "hardness" of computational problems in terms of resource usage (Cook, 1971; Edmonds, 1965; Karp, 1972; Levin, 1973). If we have reason to believe that CTM satisfactorily captures consciousness (or some central aspect of it), and that CTM can be simulated by any Turing-complete system, it follows that consciousness is computational in nature and, consequently, subject to the same constraints that apply to computational systems. If this line of argument is sound, its conclusion could pave the way for a new era of computational investigations into consciousness.
Immediately, there was some skepticism about the work this model could do for us with respect to understanding consciousness. For instance, Oliveira (2022) pointed out well-known problems related to functionalist accounts of consciousness. Nevertheless, novel accounts and new approaches are few and far between in interdisciplinary consciousness studies, so (while we share some of the same worries as Oliveira) a deeper look at the account proposed by Blum and Blum (henceforth referred to as B&B) is worthwhile in order to properly understand its conceptual framework, delineate its scope, and assess its prospects of positively contributing to an explanation of consciousness and/or related phenomena. Reasonably, the potential scientific contribution of CTM depends on its explanatory power in a target domain. Accordingly, the value of CTM depends on what it should be considered a model of. Initially, it is evident that B&B view CTM mainly as a model to define and explore consciousness and contribute to the solution of the hard problem (cf. the quote above, and Blum and Blum, 2022, passim). Consequently, on a first pass, the target domain of CTM should be taken to be consciousness, in the sense that is invoked in the hard problem as discussed in the philosophy of mind. However, B&B furthermore suggest CTM also pertains to related "concepts" and "phenomena" (2022, p. 1). Therefore, in addition to considering it in the domain of consciousness as found in the philosophy of mind, we will here consider the explanatory power of CTM in two further domains, namely the domain of cognitive neuroscience and the domain of computation. Put differently, our objective here is to investigate the explanatory power of CTM when viewed from the domains of philosophy, cognitive neuroscience, and computation, respectively. This will illuminate the scientific contribution of CTM for each domain and, equally important, clarify the problems whose solution it is a contribution to.
Our evaluation of CTM in relation to the three potential target domains will be discussed in sections two through four, starting with the philosophy of mind and proceeding to cognitive neuroscience and computation. Finally, in the last section, we offer some concluding remarks and point to possible ways forward for proponents of CTM.

CTM and the philosophy of mind
According to B&B, CTM is a model taking its starting point in theoretical computer science (TCS), which in turn can "[…] add to the understanding of consciousness and related concepts, such as free will." (Blum & Blum, 2022, p. 1). In this section, we will consider CTM under the assumption that the target domain is the philosophy of mind. More specifically: the subdomain of consciousness within the philosophy of mind. The relevant notion in this context, then, is what B&B call 'the feeling of consciousness', since this (in their vocabulary) corresponds to consciousness proper and pertains to the hard problem. What B&B call 'conscious awareness' corresponds to what other theories take to be 'attention' (p. 2), 1 and thus resides in the domain of cognition and belongs to the easy problems in the vocabulary of David Chalmers (1995).
B&B aim to provide a "simple abstract substrate-independent computational model of consciousness" (p. 2) that builds upon the Global Workspace Theory (GWT) proposed by Baars (1988). Their model "formalizes mathematically (and modifies with dynamics) the GWT of consciousness", where "the stage of GWT is represented by short-term memory (STM) that at any moment in time contains CTM's conscious content. The audience members are represented by enormously powerful processors—each with its own expertise—that make up CTM's long-term memory (LTM)" (p. 2).
B&B posit: "what gives CTM its "feeling of consciousness" is […] its global workspace architecture, its predictive dynamics (cycles of prediction, feedback, and learning), its rich multimodal inner language, and certain special LTM processors such as the Model of the World processor" (p. 3). Thus, the explanation of consciousness provided by B&B contains four distinct elements: dynamics, a rich multimodal inner language called Brainish, the global workspace architecture, and some specialized LTM processors, of which they highlight three. The three special LTM processors they highlight are the Model of the World processor, an inner speech processor, and processors handling varieties of other inner sensations (inner vision, inner tactile sensations, and so on). 2 In order to determine which of these elements is doing the heavy lifting in the explanation of consciousness, it is necessary to consider each in more detail.
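To make the moving parts concrete, the basic GWT-style cycle that CTM formalizes (LTM processors compete, one chunk reaches the STM stage, and its content is broadcast back to all of LTM) can be sketched in a few lines. This is our illustrative reconstruction, not B&B's formal definition: all names are our own, and reducing the up-tree competition to a simple maximum over weights is a deliberate simplification.

```python
# Hypothetical sketch of one CTM-style cycle (our names, not B&B's formalism):
# LTM processors submit weighted chunks, a competition selects one for STM,
# and the winner is broadcast back to every processor.
from dataclasses import dataclass, field


@dataclass
class Chunk:
    source: str      # which LTM processor produced it
    gist: str        # Brainish content, here just a string stand-in
    weight: float    # importance assigned by the submitting processor


class Processor:
    def __init__(self, name: str):
        self.name = name
        self.received: list[Chunk] = []  # broadcasts seen so far

    def submit(self) -> Chunk:
        # A real processor would compute a gist; we return a canned chunk.
        return Chunk(self.name, f"gist from {self.name}", 1.0)

    def receive(self, chunk: Chunk) -> None:
        self.received.append(chunk)


def cycle(processors: list[Processor]) -> Chunk:
    # Up-tree competition reduced to a max over weights for illustration.
    competitors = [p.submit() for p in processors]
    winner = max(competitors, key=lambda c: c.weight)
    for p in processors:          # down-tree broadcast to all of LTM
        p.receive(winner)
    return winner                 # the momentary "conscious content" of STM
```

On this sketch, the winning chunk at each cycle is the momentary content of STM, and the stream of consciousness would correspond to the sequence of winners over successive cycles.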
Prima facie, the best candidate for a heavy lifter is Brainish. It is accentuated in relation to the feeling of consciousness in several places, such as when B&B state that "the feeling of consciousness in CTM is a consequence principally of its extraordinarily expressive Brainish language […]" (p. 7, italics added). To elaborate, Brainish is an inner language (similar to the classical concept of mentalese; see e.g. Fodor, 2007) that is both hyper-expressive and multimodal. It supposedly expresses and manipulates mental states (i.e. all 'inner' states, cf. p. 9, or "images, sounds, tactile sensations and thoughts", cf. p. 4) better than any 'outer' language, and deploys succinct multimodal words or phrases called gists 3 (p. 4). Our ongoing experience, i.e. our stream of consciousness, corresponds to the time-ordered sequence of gists being broadcast top-down. At any given time t, the individual will sense that she experiences (e.g. 'sees' in the case of 'visual' gists, p. 9) the gist(s) being broadcast at t.
Indeed, it seems like Brainish is tailor-made to account for the phenomenal aspects that are a core explanandum of consciousness studies. It is hypothesized to be hyper-expressive, which handles the traditional issue of ineffability associated with phenomenal properties; it is held to be multimodal, which lets it account for any kind of mental state (e.g. thought or perception); and most importantly, it is embedded in every mental state (chunk), which ensures a close connection between cognition and conscious experience. Returning to the question at hand: what does Brainish contribute to explaining consciousness? On the surface, it certainly does seem like if mental states (chunks) all contain a gist (coded in Brainish), this would by definition account for the phenomenal aspects of CTM. However, when digging a bit deeper, a reason for doubt emerges. The reason is that the introduction of Brainish does not appear to explain anything or lead to new understanding of the phenomenon (as is Blum and Blum's explicit claim, e.g. Blum & Blum, 2022, Supplementary Information, pp. 6-7). Rather, it appears, they merely redescribe the phenomenon. There are two possible cases. Either Brainish means the same (has the same intension, cf. Chalmers, 2002) as the concepts we already use when we talk about the hard problem (e.g. phenomenal properties or 'what it is like'; Nagel, 1974), which would be problematic, since simply introducing a new word with exactly the same intension does not suffice as an explanation. Alternatively, if the concept of Brainish is not coextensive with the concepts we ordinarily use when talking about the hard problem, then we need to know in which ways it differs and what difference the differences make. Put differently, any differences should be well explicated, motivated, and argued for. These kinds of specifics are necessary for an evaluation of whether Brainish explains anything or leads to new understanding of the phenomenon. In other words, pending a full explication of Brainish that illuminates its differences from the concepts already deployed in the way we talk about the hard problem, and how these differences are relevant and constitute novelty, it is unclear what exactly Brainish explains, and premature to claim it leads to new understanding of the phenomenon. Since these specifics are presently lacking, it seems that this option is some ways off for B&B. However, the first option is no less desirable, for renaming a problem does not solve it: going for the first option just means that explaining Brainish is the hard problem now. The question then is whether B&B tell us anything about Brainish apart from what it does. Unfortunately, the answer to this question is negative. All we are told are the various things Brainish can do, and that "having an expressive inner language is an important component of the feeling of consciousness" (p. 4). Thus, it is tenuous to claim that Brainish does any explaining with respect to consciousness. Relatedly, it is, at best, unclear whether Brainish does anything to improve our understanding of the phenomenon. Now, if Brainish cannot lend support to the claims that CTM "contributes to the understanding of consciousness and related
"hard problems"" (p. 3), that it provides "a high-level understanding of how conscious experiences are or might be generated" (p. 8), that it addresses "directly the hard problem of consciousness" (Supplementary Information, p. 7), and that it explains "why systems that are subject to the laws of physics can have subjective experiences" (Supplementary Information, p. 7), then perhaps there is support for these claims in the other three proposed elements (dynamics, workspace architecture, and the specialized LTM processors). Therefore, we will next consider these in turn. Starting with dynamics, which are roughly tantamount to the ideas of active inference and predictive processing (e.g. Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2016; Hohwy, 2020; Hohwy & Seth, 2020), the first thing to note is that there are concerns about whether these ideas pertain to consciousness per se, or more properly should be seen as frameworks for cognition and learning (see also Schlicht & Dolega, 2021). Be that as it may, the question here is whether B&B's conception of dynamics can do the explanatory heavy lifting that we hoped Brainish would do. It is posited that CTM formalizes mathematically the GWT and modifies it with dynamics (p. 2). B&B proceed to define dynamics as prediction, feedback, and learning (p. 6). 4 They further state that these are parallel processes (p. 7). It is left unclear, however, what exactly prediction, feedback, and learning contribute to our understanding of consciousness. Without clarity on the details, how can we evaluate whether it improves our understanding? The details matter, especially since all these processes are instantiated in various AI systems that few believe are genuine experiential subjects. Furthermore, since these concepts are already widely discussed and have been elaborated upon in explicit and dedicated frameworks (Friston, 2013; Friston et al., 2016; Hohwy, 2020; Hohwy & Seth, 2020), it is questionable whether the inclusion of dynamics can even be considered a contribution of anything new to the field, let alone a new explanation. So, dynamics do not seem to be able to do the work B&B need with respect to explanatory power in the domain of consciousness.
Turning to the special processors, the question is whether positing processors handling inner speech and carrying a model of the world can do the heavy lifting now that Brainish and dynamics are off the table. With respect to inner speech, any explanatory power this has seems to piggyback on the explanatory power of Brainish, since this is the language inner speech deploys. Unfortunately, as we saw above, Brainish does nothing to advance our understanding of consciousness. What about the model of the world? B&B distinguish between models of the inner and outer worlds, and each world is tagged (using Brainish) with the kinds of states it can have. In relation to this, especially important are the tagging of parts of the inner world as "self" (p. 7) and the tagging of itself as "conscious" (p. 8). This is reminiscent of other theories that connect consciousness to cognitive models representing the agent herself (e.g. Cleeremans et al., 2020). The question is whether B&B's version of this self-tagging differs from existing versions in ways that contribute new understanding with respect to consciousness. However, B&B merely stipulate that this happens, and that it is important. Hence, nothing new is added to the idea of systems applying models to themselves. In sum, there does not seem to be any foundation for claims of novel explanatory power in the way B&B explicate the special processors.
The last element is the global workspace architecture. B&B propose to amend the workspace idea with dynamics, Brainish, and special processors. But as we just saw, special processors, dynamics, and Brainish contribute little when it comes to understanding consciousness. So, if the modifications of the workspace architecture cannot do the heavy lifting, then perhaps it is the workspace architecture itself that does the explaining? To properly assess this would require turning to the source of the workspace theories and evaluating the work of their original authors, such as Bernard Baars, Stanislas Dehaene, George Mashour, and others. We will not do this here, since the mere fact that this is necessary is sufficient to demonstrate that any explanatory power of a global workspace is found not in the work of B&B, but in the work of the original authors. For present purposes this lets us conclude that, in the context of B&B's project, the workspace architecture contributes nothing new, because they have given us no reason to believe their modifications elevate the supposed explanatory power of the extant workspace theories. (One might also note a possible overlap between Dennett's framework and, e.g., the up-tree competition as presented by B&B. Whether this improves their account of consciousness is unclear, given that they do not elaborate on this. Elaboration is necessary because, as we show, the up-tree competition in itself does not suffice as an explanans. Furthermore, while an overlap with Dennett's idea would support the posit that B&B's account conforms to other theories, it also undermines their claims of novelty.)
Finally, if no single element can cater to the claims of novelty and explanatory power, it is worth considering whether perhaps it is the combination of (or interaction between) the posited elements that is novel and does the explaining. There are indications that this may be the view of B&B (see e.g. footnote 3 above), and they have endorsed it elsewhere (personal communication), yet in many places the CTM model is formulated in ways that seem to go against this possibility. For instance, when they "[…] emphasize that the feeling of consciousness arises after processors, including especially the Model of the World and Inner Speech, receive the broadcast" (p. 4), or when they say that "the stream of consciousness is the sequence of gists broadcast from STM" (p. 9). Now, if one nevertheless wanted to argue that it is the interaction between the posited elements, one may point to the fact that some of the words and phrasings used by B&B when describing the elements discussed above may indicate such a view. In particular, one would then point to formulations such as "integrate" (p. 3), "coupled with" (p. 7), and "as a consequence in part" (p. 9). Undoubtedly, there is a way to read these wordings as indicating an 'interactionist' (as it were) view. Nevertheless, the fact that such a view is not explicitly stated (e.g. using words such as 'combination' and 'interaction') in the paper makes this interpretation somewhat strained. In the same vein, B&B do not flesh out this supposed interaction, beyond positing that some elements feed into others (e.g. the up-tree competition feeds into STM). Finally, and even worse, B&B do not give any indication of what such an interaction would contribute with respect to explaining consciousness. Consequently, even on a charitable reading on which B&B posit that the combination of or interaction between the posited elements is a critical part of the model, the extreme lack of specifics means there is no support for their claims of explanation or novelty. Worse still, positing that it is the interaction between elements that is the important part opens the theory to a host of novel questions about what this interaction consists in and how it could yield consciousness.
To summarize, from the perspective of the philosophy of mind, the claim that CTM provides novel understanding of consciousness falls flat. The most promising posit in this respect is Brainish, but as we saw above, the explication of Brainish provided in the paper amounts to merely renaming the phenomenon to be explained. One positive upshot of this is that a further explication of Brainish would be a promising avenue of future research for B&B. The other posited elements that are relevant for consciousness in their model are not novel (and hence provide no 'new understanding'), have been developed in significantly more detail by other authors, and are the subject of extensive debates about whether and how they are even relevant to consciousness in the first place.

CTM and cognitive neuroscience
In this section, we evaluate the explanatory power of CTM in the domain of cognitive neuroscience. Importantly, the feature to which the explanatory power pertains is still consciousness. For the evaluation, we will deploy three questions. The first question is what new understanding or progress CTM can deliver when applied to empirical data generated by the field of cognitive neuroscience. Arguably, good explanatory power in this domain should minimally include explanations of paradigm cases that are well established in the field, and in particular the paradigm cases we are interested in due to factors such as their relevance to the phenomenon (i.e. consciousness) or their generalizability (e.g. perception in neurotypical human adults). The second question is whether CTM can explain more data, or explain the data better, than extant theories. In other words, the second question is about novelty (whereas the first question concerned scope). Finally, one could argue that CTM constitutes progress, not on the empirical aspect of cognitive neuroscience, but instead on the conceptual toolbox of the field. Improving the conceptual tools of a field is at least in the vicinity of explanatory power. Accordingly, the third question is whether CTM contributes new ways of conceiving of the phenomenon, i.e. valuable conceptual tools, systems, or methodology.
B&B treat five different empirical phenomena: blindsight, inattentional blindness, change blindness, dreams, and free will. They suggest that the explanations they derive from CTM "provide a high-level understanding of how conscious experiences are or might be generated" (p. 8), and that the explanations draw confirmation from being consistent with the extant literature in psychology and neuroscience (p. 8). Apart from free will, the phenomena treated by B&B are certainly the kind of paradigm cases we are interested in, i.e. well established and generalizable (see above). Given the size of the field of cognitive neuroscience, there are of course many other empirical phenomena that would qualify as such paradigm cases. That there are paradigm cases that B&B do not treat is not a problem, given that it would be unfair to expect newly developed theories to immediately account for as much as well-established theories (and some well-established theories do not account for every paradigm case either). The fact that the empirical phenomena that CTM does treat are almost exclusively 5 the kind of paradigm cases we are interested in is sufficient to answer the first question and confirm that the scope of CTM (given what can be expected of a recently developed theory) is more than adequate. In the following, we will consider three (blindsight, inattentional blindness, and change blindness 6) of the five phenomena proposed by B&B in more detail and evaluate them in light of the second and third questions.
However, before delving into the details, a general caveat in relation to invoking blindsight, inattentional blindness, and change blindness as evidence for theories of consciousness is warranted. The caveat concerns whether these cases in fact lend support to CTM qua theory of consciousness, as opposed to just lending support to CTM as a framework for cognition. The crux of this caveat is that the exact evidential value of each of the cases in consciousness research is still debated. For instance, it is still highly controversial whether blindsight is actually 'blind'. Some accounts have suggested that blindsight may simply be degraded vision (Overgaard, 2012; Overgaard, Fehl, Mouridsen, Bergholt, & Cleeremans, 2008), and attempts have been made to account for all empirical findings from this perspective (Overgaard & Mogensen, 2015). Regarding the phenomenon of change blindness, this is often explained with reference to the quality of representations, or to an inability to compare pre-change stimulus information with post-change stimulus information (Brinck & Kirkeby-Hinrup, 2017; Simons & Ambinder, 2005; Simons & Rensink, 2005). Such explanations account for the phenomenon in terms of low(er)-level cognitive facts independent of conscious processes. Thus it is unclear (at best) whether accounting for change blindness is unequivocal support for a theory of consciousness. Similar issues pertain to inattentional blindness, since the relationship between attentional processes and consciousness is the subject of an ongoing debate (see e.g. Cohen et al., 2012; Cohen & Chun, 2017; Shafto & Pitts, 2015; Usher, Bronfman, Talmor, Jacobson, & Eitam, 2018 for just a few examples). Because the relationship between each of these cases and consciousness is as yet unclear, care needs to be taken when assessing claims that explaining these phenomena amounts to support for a theory qua theory of consciousness. Importantly, we wish to underscore that this issue is far from unique to CTM, but in fact applies to almost any theory proposing various empirical phenomena as evidence in its support. Recently, Schurger and Graziano (2022) have pointedly highlighted this issue as well. For instance, in relation to the Workspace view, they say: "One might argue, based on the above, that GNWT does explain something—it explains why there is a lapse in sensory information processing during the attentional blink, and why the observer can only entertain one interpretation of the Necker cube at a time. But that would be a red herring, vis-à-vis the phenomenon that the theory was supposed to explain—subjective experience." (Schurger & Graziano, 2022, p. 2). Finally, in relation to computational (or other formal) theories, it is worth noting that the ability to simulate or model something is not tantamount to explaining it (Schurger and Graziano make a similar point regarding the ability to make predictions; see also Doerig, Schurger, Hess, & Herzog, 2019). Be this as it may, extant theories of consciousness have generally sought to account for this kind of evidence within their frameworks, which can be taken as a reason also to consider the accounts provided by B&B. Therefore, we will for now sideline the caveat and consider in more detail the treatment of the three phenomena offered by B&B. To reiterate, the questions we will use to evaluate these are the second and third questions introduced above, i.e. whether CTM can explain more data, or explain the data better, than extant theories, and whether CTM contributes new ways of conceiving of the phenomenon, i.e. valuable conceptual tools, systems, or methodology, respectively. 7

Let us start with blindsight, a condition in which subjects report not consciously seeing one or more objects in the environment but can nevertheless use information about the unseen objects in behavioral responses. B&B account for this phenomenon by positing that the relevant visual information, for some reason ("due to some malfunction", p. 8), does not get into the STM, and consequently is not broadcast, i.e. is therefore not consciously experienced (p. 8). To explain the behavioral responses, B&B suggest that visual information can be shared unconsciously via links to other specialized processors. At its core, this explanation consists in suggesting that (for some reason) some visual information is not conscious but can nevertheless be deployed by unconscious processes to guide behavior.
But this is not an explanation so much as a redescription of the phenomenon using the vocabulary preferred by B&B. The phenomenon is characterized by some visual information not being conscious (entering and being broadcast from the STM, in B&B's terms) but nevertheless being unconsciously available (shared via links, in B&B's terms) to guide behavior. If this is right, it seems the account of blindsight offered by B&B cannot properly be assessed against the second question introduced above. In other words, we cannot assess whether their account explains more data, or explains the data better, than extant theories, because their account is not explaining the data but rather redescribing the data. Now, it might be that such redescription is relevant to the third question, since a redescription could contribute new ways of conceiving of the phenomenon. To see if this is the case, one needs to consider whether the redescription of blindsight in the vernacular of CTM could lead to new understanding of the phenomenon. For instance, it would have been very interesting if B&B had offered a concrete hypothesis of why the visual information never reached the STM. However, B&B remain vague and noncommittal with respect to this. There is one other possibility for their account to deliver new understanding: the remaining relevant concept invoked in the redescription, i.e. the links through which the visual information is shared (unconsciously) between processors. The definition of links offered by B&B is that they "[…]
are channels for transmitting information between processors" (p. 4), which, when applied to humans, just means connections between brain areas. When substituting 'links' with 'connections between brain areas' in the account offered by B&B, it becomes clear that the redescription does not give any new ways of conceiving of the phenomenon, nor does it offer any novel and valuable conceptual tools or definitions. Consequently, the assessment of their account of blindsight (in light of the third question) also comes out negative.
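For what it is worth, the redescription can be stated in just a few lines of code-like pseudonotation, which underscores how little mechanism it adds. The sketch below is our own illustration, assuming only what B&B state: the visual chunk never wins the competition for STM (the unexplained "malfunction"), yet link transmission to other processors proceeds regardless. All names and the boolean flag are our stand-ins, not part of B&B's model.

```python
# Minimal sketch of B&B's blindsight story (our illustrative names):
# a visual chunk never reaches STM, so it is never broadcast ("not
# consciously seen"), yet a direct link still delivers it to another
# processor, which can use it to guide behavior.

broadcast_log = []   # everything that became "conscious" (broadcast from STM)
motor_input = []     # what a behavior-guiding processor receives via links


def process_visual_chunk(chunk: str, reaches_stm: bool) -> None:
    if reaches_stm:
        broadcast_log.append(chunk)   # normal conscious vision
    # Link transmission happens regardless of the competition's outcome.
    motor_input.append(chunk)         # unconscious availability


process_visual_chunk("obstacle ahead", reaches_stm=False)  # the "malfunction"
# The chunk guides behavior without ever being broadcast:
assert "obstacle ahead" in motor_input
assert "obstacle ahead" not in broadcast_log
```

That the entire account fits in a conditional and a side channel illustrates the point made above: the mechanism merely restates the explanandum (unconscious availability without conscious access) rather than explaining it.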
Moving on to the explanation of inattentional blindness (IB) offered by B&B, we again start with the second question and consider whether it explains more (or explains the data better) than extant theories. In inattentional blindness, subjects engaged in a cognitively demanding task fail to notice otherwise salient elements presented in the focal visual field. Within the CTM framework, IB is explained by reference to the relative weights of gists in their competition to enter the STM. The claim is that task-related gists have significantly higher weights than task-unrelated gists (p. 8). This has the effect that subjects fail to perceive task-unrelated elements of the visual scene that otherwise would have been salient. The driver of the explanation of IB is the 'weights' assigned to (chunks carrying information about) incoming stimuli, where the weight determines a given stimulus' probability of reaching the STM and thereby becoming conscious (by being broadcast). Thus, the relative difference in weight between task-relevant and task-irrelevant stimuli accounts for IB on CTM. Now, in one sense this account is trivial, in that it merely restates that task-related stimuli are likely to become conscious and that task-irrelevant stimuli are significantly less likely to do so. This is roughly tantamount to redescribing the phenomenon in their own vocabulary and, as we have noted above, does not amount to an explanation. (One might object that explaining "more" or "better" is an unreasonably high bar to set in our evaluation of the account offered by B&B. We think this objection fails to take into account that B&B claim that CTM constitutes progress, which seems to imply that it provides us with "something more" than we already have.) It certainly does not explain "more" or "better" than other theories (see e.g. de Pontes Nobre, de Melo, Gauer, & Wagemans, 2020; Hutchinson, 2019; Jensen, Yao, Street, & Simons, 2011 for reviews of IB). However, when it comes to the third question, the account offered by B&B opens up possibly interesting questions. According to B&B, the relevant weights are assigned to incoming stimuli by CTM's vision processor (p. 2), which presumably has to be a very low-level, early cognitive process, since the weights are already assigned when a chunk enters the up-tree competition (p. 5). This seems to imply very little initial top-down influence on weights (unlike the hypothesized top-down bias in workspace theories; see e.g. Hutchinson, 2019), since top-down feedback seemingly only interacts with the processors at later iterations to correct their weight assignment algorithms (p. 6). It would certainly be interesting if B&B specified the weight assignment process in more detail in future work. Importantly, however, this issue pertains to the so-called 'selection problem' rather than consciousness per se. The selection problem concerns how the brain selects which states become conscious. Thus, there is a sense in which the selection problem is certainly relevant to consciousness. However, it is also possible that it can be accounted for independently of consciousness, and explained in terms of wholly low-level cognitive processes, in which case it contributes little to an understanding of consciousness simpliciter. Nevertheless, the possibility of interesting and novel developments in relation to the selection problem certainly allows a tentatively positive answer to the third question, in that here CTM does introduce novel ways of conceiving of the phenomenon.
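The weight story for IB can likewise be made concrete. The sketch below (with our own illustrative numbers and names) treats a chunk's weight as fixing its probability of winning the competition for STM, which is one natural reading of B&B's claim that weight determines a stimulus' probability of reaching the STM; with a large weight gap, the task-unrelated gist almost never becomes conscious.

```python
# Illustration (our numbers, not B&B's) of the weight account of
# inattentional blindness: task-related gists carry much higher weights,
# so the task-unrelated gist almost never wins the competition for STM.
import random


def compete(chunks: list[tuple[str, float]], rng: random.Random) -> str:
    # A weight-proportional lottery: each chunk's chance of reaching STM
    # is its weight divided by the total weight in the competition.
    total = sum(w for _, w in chunks)
    r = rng.uniform(0, total)
    for gist, w in chunks:
        r -= w
        if r <= 0:
            return gist
    return chunks[-1][0]


rng = random.Random(0)
chunks = [("count the passes", 50.0), ("a gorilla walks by", 1.0)]
wins = [compete(chunks, rng) for _ in range(1000)]
# The task-unrelated gist reaches STM only on a small fraction of cycles,
# so it is rarely (or never) "consciously seen".
```

As the paragraph above notes, this restates rather than explains IB: the interesting open question is not the lottery itself but where the weights come from, i.e. how the vision processor assigns them before the competition begins.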
Finally, let us consider change blindness (CB), a phenomenon in which subjects fail to notice otherwise salient differences between visual stimuli. CB can be elicited in a range of different paradigms that in various ways mask attention-grabbing visual transients that, if unmasked, would alert subjects to the changes in the stimulus (e.g. Beck, Rees, Frith, & Lavie, 2001; Busch, 2009; Grimes, 1996; McConkie & Loschky, 2003; Rensink, O'Regan, & Clark, 1997; Simons & Levin, 1997). The version of CB discussed by B&B is the so-called whodunnit video,8 in which several changes to a natural scene (that are rarely detected by viewers) occur over the (1.54 min) timespan of a murder-mystery video. B&B explain the failure to detect changes in the whodunnit video by reference to two factors (p. 8). The first factor is that the video is cleverly edited to eliminate transitions between changes, which prevents the visual processor of CTM from registering that changes have occurred. This conforms to the standard interpretation of why CB occurs (as described above). The second factor is more interesting, in that it posits that the same gist describes both the beginning and the end of the video equally well. The example they give is "The living room of a mansion with detective, butler, maid, others, and a man apparently dead on the floor" (p. 8). This is interesting for two reasons. The first is that it suggests a very sparse view of phenomenology (see e.g. Block, 2011, 2014; Knotts, Odegaard, Lau, & Rosenthal, 2019; Kouider, De Gardelle, Sackur, & Dupoux, 2010 for details on the rich-sparse debate). To see why this is so, remember that in the CTM framework gists are 'written' in Brainish (p. 4), and Brainish is what accounts for phenomenology, so if the same gist can describe vastly different visual scenes (in terms of low-level properties like colors and shapes), this seems to entail a very coarse-grained phenomenology. The other reason this account is interesting is that by allowing a role for gists in
the explanation of the phenomenon, B&B seem to suggest that the experiences themselves are part of the reason subjects do not detect changes. This is significantly different from most extant theories, which tend either to suggest that pre-conscious cognitive processes are the reason changes are not detected (as described above), or to hold that the changes are experienced but cannot be accessed (Block, 2011). In sum, B&B do not "explain more" or "explain better" with respect to CB, meaning the answer to the second question is again negative. However, our considerations do suggest that their account of change blindness has interesting consequences and introduces radically new views that may constitute new understanding or possible avenues of research. Certainly, to our minds, further development and clarification regarding how B&B view the role of gists in visual tasks would be worthwhile.
Rounding off our evaluation of the explanatory power of CTM in the domain of cognitive neuroscience, we first concluded that CTM fared well on our first question concerning scope. In our detailed treatment of the phenomena, we concluded that while the answer to the second question is negative for each of the three phenomena considered here, the account given by B&B fares well on the third question, in that their explanations have interesting and novel upshots that deserve further clarification and exploration.9 Finally, we also feel it necessary to reiterate three things. First, the general caveat we introduced to this kind of evidence applies equally to almost every theory of consciousness, and therefore cannot be considered a problem that is unique to CTM. Second, for expository reasons, we have not treated the accounts of dreaming and free will given by B&B. Both of these phenomena are thorny questions (in neuroscience as well as in other domains) and certainly deserve separate treatment to evaluate their potential contribution to the advancement of our understanding of them. Third, as noted in footnote five, we have set a high bar for the explanations offered by B&B because they claim that CTM constitutes progress. We want to underscore that, when disregarding this high bar, their accounts are roughly on par with existing accounts.

CTM and computation
In this section we evaluate the explanatory value of CTM when viewed as a model of computation. This means that central questions in this context will be, for instance: how does CTM relate to the theoretical foundations of computability? What is the "computational" part of the specific CTM features that, according to B&B, make it conscious? What are the computational problems that CTM solves, and what are the resources that CTM employs to solve them? There is a challenging dual aspect to these questions. On the one hand, if part of the explanatory value of CTM consists in its connection to TCS, then its relationship to the foundations of computability and computational complexity must be explicated, e.g., in terms of formal notions of what CTM can do, what it can do efficiently, the nature of the problems it solves, how it solves them, and the resources it employs. At the same time, since CTM explicitly aims to model consciousness, it is necessary to address how the computational side of these questions relates to features of consciousness. In light of this, the aim here is to evaluate whether CTM can be a useful model of consciousness that is "sufficiently computational" in a sense that enables the model to be fruitfully integrated with theoretical computer science, while being "sufficiently conscious" in a sense that makes it valuable for interdisciplinary consciousness research.

8 Mayor of London, "Test Your Awareness: Whodunnit?" (2008). https://www.youtube.com/watch?v=ubNF9QNEQLA&list=PLr4EeJcghrfSnnBO8YFu0qnz4IrpQEUaZ&index=1 (reference as provided by B&B).

9 And to reiterate, B&B easily satisfied the first question.
For the uninitiated reader, the foundation of computability theory is perhaps best told via the self-referential Liar Paradox: "this sentence is false". If the sentence is true, it is false; but if it is false, then it must be true. The paradox has not only troubled philosophers' attempts to give a theory of meaning (Beall, 2007; Tarski, 1944); it also plays a central role in the fundamental results in computability theory provided by Gödel, Turing, Church, and others. Three classical challenges loom large in this context (Hilbert, 1930). The first concerns completeness; i.e., proving that every true mathematical statement can be derived from the axioms of (some) formal system. The second concerns consistency; i.e., that no contradiction can be derived from the system. The third is the prospect of a mechanistic procedure that can unequivocally decide the truth or falsehood of any mathematical statement of the system. This is known as the decision problem ("Entscheidungsproblem"). Unfortunately, Gödel's incompleteness theorems (Gödel, 1931) are widely accepted as providing negative answers to the first two challenges: any consistent formal system capable of carrying out basic arithmetic is incomplete (first theorem), and any such system cannot prove its own consistency (second theorem). In his proof, Gödel utilizes the self-referential move found in the Liar Paradox, "this sentence is not provable in the system", from which he derives incompleteness.10 Inspired by Gödel's proofs, Turing (1936) and Church (1936) independently provided negative answers to the decision problem.
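Turing's negative answer to the decision problem can be sketched in a few lines. The following is a standard textbook illustration (not from B&B): we stub a hypothetical halting decider `halts` and exhibit the liar-style diagonal program that no total implementation of `halts` could classify correctly.

```python
def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) halts.
    Turing (1936) showed no such function can exist; we stub it
    only to exhibit the diagonal construction."""
    raise NotImplementedError("no halting decider exists")

def diagonal(program):
    """Do the opposite of whatever halts() predicts about
    program(program) -- the computational analogue of the liar."""
    if halts(program, program):
        while True:  # loop forever if predicted to halt
            pass
    return "halted"  # halt if predicted to loop forever

# diagonal(diagonal) halts iff halts(diagonal, diagonal) is False,
# so any total implementation of halts() fails on this very input.
```

The self-application `diagonal(diagonal)` plays exactly the role of the Gödelian sentence in the discussion that follows.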
This leads us to the first pillar of computability, which has two sides: we can (i) define the mechanically solvable problems (computable functions) in terms of decidability, and (ii) based on this notion, formulate general statements about mathematical models of computation. This is captured in the widely accepted Church-Turing thesis, which states that a function is "effectively calculable" by an algorithm if and only if it can be computed by a (deterministic) Turing Machine.11 To this day, every known attempt to give a precise analysis of what is meant by "effectively calculable function" has turned out to pick out the class of functions computable by a TM (Copeland, 2020; Kleene, 1952). Meanwhile, the vast generality and applicability of the TM model has gained further support from the observation that no alternative model of computation seems to be significantly faster than a TM. For instance, the widely believed extended or complexity-theoretic Church-Turing Thesis states that a TM can efficiently simulate any other realistic model of computation, where "efficiently" means within a polynomially bounded overhead (and correspondingly: no model of computation can be super-polynomially faster than a deterministic TM).12
Since the Gödel-Turing-Church results remain at the very heart of theories of "what computers are" based on "what they can do", they are, for obvious reasons, interesting for discussions of whether the human mind is a computer. This is the core of the controversial Lucas-Penrose argument or "anti-mechanist thesis" (Lucas, 1961; Penrose, 1989), which, for more than six decades, has fueled mind-machine discussions based on Gödel's results. For instance, from the premises (i) if there is something that a mind can do and a machine cannot, the mechanist thesis is false; (ii) machines cannot prove the Gödelian sentence; and (iii) minds can prove the Gödelian sentence, Lucas concludes that the mind is not a machine, writing: "any rational being could follow Gödel's argument, and convince himself that the Gödelian formula, although unprovable-in-the-given-system, was nonetheless – in fact, for that very reason – true." (Lucas, 1961, p. 115).
Critique of the Lucas-Penrose argument comes in various flavors: e.g., that human minds cannot establish their own consistency, or that they are inconsistent (Hutton, 1976; Putnam, 1960; Yates, 1971); or that we do not know enough about the mind to formulate a Gödel sentence and thus cannot decide its truth (Benacerraf, 1967; LaForte, Hayes, & Ford, 1998; Lewis, 1969). The relevant question for our investigation here is: what sort of model of computation is CTM? And the relevant corollary is: does CTM bring us any closer to resolving the Lucas-Penrose debacle? For the model presented by B&B to give us sufficient detail to properly assess these questions, we would first need to establish: what is the Gödelian sentence (halting problem or liar paradox) for CTM, and how does it relate to the halting problem for TMs? But the available options immediately present an intriguing dilemma. Consider the following statements:13

A) TM sim Human Consciousness (HC)
B) TM sim CTM
C) CTM sim HC
D) CTM sim TM (but not TM sim CTM)

10 The second theorem is then obtained by formalizing the proof of the first within the system itself.
11 Any system that can realize a TM is called Turing-complete.
12 Similarly, the closely related Invariance Thesis states that any reasonable model of computation can, within a polynomial overhead, simulate any other reasonable model of computation. See Johnson and Garey (1979) for the textbook treatment, and Van Rooij (2008) for an exposition of the Invariance Thesis.
13 By "sim" we mean that system A can simulate system B iff A can mimic every relevant aspect of B.

The truth of A implies a strong form of computationalism; i.e., all aspects of consciousness can be simulated by computation (i.e., TM is 'consciousness complete'). If B is true, the explanatory connection to one pillar of TCS is secured: our general understanding of computability (TMs) can be used to realize conscious TMs. If C is true, then CTM's explanatory value with regard to consciousness (HC) is secured: CTM provides a "true" model of consciousness. However, the truth of B and C implies the truth of A. That is, it would be very odd, to say the least, if a TM can realize CTM, which in turn can realize HC, but a TM cannot realize HC. This either forces us to assert the truth of (A), which is at odds with the Lucas-Penrose implication, or leads to problems with A's decidability (e.g., we need a complete description of the human mind). Be that as it may, simply conjecturing the truth of (A) does not seem to capture what B&B have in mind. On the one hand, the parallelized RAM architecture presented by B&B, along with various other features of the model, seems very much TM-like.14 On the other hand, B&B (2022, p. 3) write: "Although inspired by Turing's simple yet powerful model of a computer, the CTM is not a standard TM. That is because what gives the CTM its "feeling of consciousness" is not its computing power nor its input-output maps but rather, its global workspace architecture, its predictive dynamics (cycles of prediction, feedback, and learning), its rich multimodal inner language, and certain special LTM processors such as the Model of the World processor." (emphasis added) Similar to the discussions above, much hinges on how these additional factors relevantly modify CTM, as compared to a 'mere' TM. Returning to the four statements: to establish the explanatory value of CTM, one might opt for (D), and further claim that (D) entails something much weaker than the trio of (A), (B), and (C): that CTM can simulate all relevant aspects of a TM, but not the other way around. This would amount to holding that CTM can realize a TM, along with additional features that are (or at least may be)
incomputable by a TM (i.e., consciousness). Put differently, the claim is that the unique features that make a TM a TM are realizable by CTM, but not the features that make CTM uniquely a CTM (including the aspect that allegedly makes it conscious). A similar alternative, which seems to be B&B's position, is to modify (B) and say that some special TMs can realize CTM (and in turn HC). However, by specifying this TM, it would inevitably lose the generalizability (e.g., via the Church-Turing thesis and the Invariance Thesis) that makes the standard TM theoretically attractive.
Another option is to accept the (A)-(B)-(C) package but concede that Gödelian sentences are the wrong way to explain the behavior of CTM, just as they are irrelevant for the understanding of consciousness. This move would be similar to many objections to the Lucas-Penrose argument (Boyer, 1983; Coder, 1969; Dennett, 1972). As Dennett writes (Dennett, 1972, p. 530): "[…] men do not sit around uttering theorems in a uniform vocabulary. They say things in earnest and in jest, make slips of the tongue, speak several languages, signal agreement by nodding or otherwise acting nonverbally, and – most troublesome for this account – utter all kinds of nonsense and contradictions, both deliberately and inadvertently." The point of this move is that evaluating CTM in light of the limits of Turing-computability fails to capture its core characteristics. On the assumption that the core characteristics are ecological, CTM may very well be incomplete or inconsistent, in the very same way a TM is in virtue of self-referential statements about its own behavior, but this is unimportant because it has no crucial impact on its ecological function. To elaborate, the kind of CTM that is realized by human brains may be geared more toward the ability to survive (forage, plan, navigate, reproduce) in a complex world, where Gödelian sentences are of less concern; they only arise for those who try to establish the completeness, consistency, and decidability of all mathematical statements. Thus, while this option would save the mechanistic thesis (A-B-C), it would do so by saying that the Gödel-Turing-Church results are irrelevant for explaining consciousness. Unfortunately, given that B&B highlight the explanatory power of theoretical computer science with respect to consciousness, doing away with the Gödel-Turing-Church results in the process of explaining consciousness would be a dramatic move.
In sum, as it stands, it is unclear how CTM relates to standard TMs (e.g., in the form of Turing equivalence or Turing completeness) and to computability theory, and the proposal brushes aside the fact that the available options either (i) already presuppose the truth of computationalism, or (ii) entail something which is trivial from an explanatory point of view: that some special, although not generalizable, TMs can realize a CTM (weak B); that CTM can realize all TM-computable functions, but standard TMs cannot simulate CTM (D); or that, while TMs cannot simulate consciousness (A-B-C), the limits of computability are irrelevant for consciousness (doing away with core tenets of TCS).
While computability theory covers what computational systems "can do", another fundamental pillar of TCS concerns what computational systems can or cannot do efficiently. The contributions of Cobham (1964), Edmonds (1965), Cook (1971), Karp (1972), Levin (1973), and many others have demonstrated the natural appeal of defining tractable problems as "solvable in polynomial time", and of distinguishing them from intractable problems (which require super-polynomial time). This helped to establish hierarchies of the computational complexity of problems in terms of the resources they require, the most famous being the divide between P (solvable in polynomial time) and NP (checkable in polynomial time). Importantly, computational complexity gives us insight into which problems can be solved "fast" and which problems are "hard", based on the time (number of "state transitions" or "machine operations") and space (memory, e.g., bits) they require. This is based on the insight that time and space are fundamental resources for computational systems. Beyond its theoretical importance, this is of immense practical value, as complexity results yield concrete constraints on what computers can do, and in virtue of the Invariance Thesis and the Extended Church-Turing Thesis, this holds regardless of any specific model of computation. In light of this, the next step in a proper evaluation of CTM is to consider complexity and how it applies to CTM.

14 For instance, if we view CTM simply as a parallelized random-access machine (unlike the classic Turing Machine, a random-access machine can access arbitrary memory locations in a single O(1) step) running in time O(n^k), then due to the Invariance Thesis (Johnson & Garey, 1979) we have every reason to believe that it can be simulated by a TM in time O(n^(ck)), where c is a constant. Importantly, while it can in principle be simulated by a TM, the parallel RAM architecture is an important feature of the CTM, since cognitive economy and processing speed (in polynomial time) are important evolutionary and ecological factors, respectively. We are grateful to an anonymous reviewer for pointing this out.
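The P/NP divide just outlined can be illustrated with subset-sum, a canonical NP-complete problem. In this sketch (our own illustration), verifying a proposed certificate takes time linear in its size, whereas the only exact method shown, brute force, examines up to 2^n subsets.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time certificate check: the subset must be drawn
    from the instance (respecting multiplicity) and hit the target."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

def solve(numbers, target):
    """Brute-force search over all subsets: exponential time,
    reflecting the absence of any known polynomial algorithm."""
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if sum(subset) == target:
                return list(subset)
    return None

instance = [3, 34, 4, 12, 5, 2]
witness = solve(instance, 9)  # a subset summing to 9
```

Finding a witness is (as far as we know) hard; checking one is easy. It is this asymmetry between solving and checking that complexity-theoretic constraints on a model like CTM would have to engage with.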
B&B claim that limited resources "enter into fixing the detailed definition of CTM" and "play a crucial role in our high-level explanations for consciousness-related phenomena" (p. 2). However, they only elaborate on the role of complexity considerations in one aspect of the model: the up-tree competition (which decides which chunk enters into STM and becomes globally broadcast). Now, if updating a chunk at a node in the up-tree is to be done in one clock tick, this yields (a) a bound on the amount of computation that can be done in a node, and (b) a bound on the size of the chunks. In particular, the space requirement of a chunk must "be large enough to store a log2N bit address and to store a gist whose length is no greater than what is required to store approximately one line of English or its equivalent in Brainish, very roughly 2^10 bits." (p. 6). From a cognitive perspective, this makes intuitive sense: it is reasonable to hypothesize that there are constraints on the selection mechanism(s) that determine what enters the limited short-term memory of a conscious system. However, since B&B posit that consciousness is not the broadcasting itself, but rather the reception of the broadcast by the LTM processors (p. 2), the up-tree competition does not appear to have critical explanatory value with respect to consciousness as such. In principle, the up-tree competition could be substituted with something else that fed the broadcasting mechanism, and this would have no impact on consciousness.15
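To make the resource bounds concrete, here is a minimal sketch (our own, not B&B's definition) of an up-tree tournament over N leaf chunks. We simplify deterministically: the higher-weight chunk wins each local comparison, whereas in CTM the weights fix winning probabilities. With one node update per clock tick per level, the winner reaches the root, and hence STM, in about log2(N) rounds.

```python
def up_tree_winner(chunks):
    """Pairwise tournament: each round halves the field, so a field
    of N chunks resolves in ceil(log2 N) rounds. A deterministic
    simplification of CTM's probabilistic competition."""
    level = list(chunks)
    rounds = 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append(a if a["weight"] >= b["weight"] else b)
        if len(level) % 2:  # odd chunk out gets a bye
            nxt.append(level[-1])
        level = nxt
        rounds += 1
    return level[0], rounds

# Four leaf chunks (addresses and weights are illustrative).
chunks = [{"addr": i, "weight": w} for i, w in enumerate([0.2, 0.9, 0.5, 0.1])]
winner, rounds = up_tree_winner(chunks)
# N = 4 leaves resolve in log2(4) = 2 rounds.
```

The sketch also makes the substitution point vivid: any mechanism that hands a single chunk to the broadcaster, tournament or otherwise, would serve, which is why the up-tree itself carries little explanatory weight for consciousness.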
Perhaps more problematic is that B&B's appeal to resource constraints goes against how complexity considerations are generally utilized in computational cognitive science (Frixione, 2001; Marr, 1982; Van Rooij, 2008). For instance, Marr's influential framework for analyzing cognitive systems on three distinct levels (computation, algorithm, and implementation) shows the intuitive appeal of starting from the problem a system solves, as opposed to the exact mechanisms it uses to solve the problem (Marr, 1977, 1982).
To that end, how can we even attempt to understand how a computational system works (e.g., the algorithms it utilizes and how they are implemented in neurons or circuits) unless we have some conception of the underlying problem it seeks to solve (the computational level)? In turn, this has motivated many cognitive scientists to use computational tractability as a constraint to narrow the vast space of possible computational-level theories of human cognitive capacities.16 Simply put, if we believe that human minds are finite systems with limited resources, tractability can support cognitive psychology by carving out feasible theories regarding how the mind works; but to do so, we need to start from precise definitions of what it does. Analogous to the discussion on decidability, it remains unclear what problems CTM "solves" at the computational level, as it is commonly conceived in Marr's framework and in computational cognitive science more broadly, or whether it presents a computational-level theory at all. Its added explanatory value with regard to computational complexity and tractability, if any, seems solely to touch upon the algorithmic-level aspects that concern the coordination of information from LTM processes to STM; which, again, does not appear to add any explanatory value for consciousness.

Concluding remarks
We have provided a critical assessment of the explanatory value of CTM with respect to consciousness in three domains: philosophy of mind, cognitive neuroscience, and computation. Regarding the first domain, we found that the model does not provide any novel understanding of what consciousness is, as it depends critically on the role and details of Brainish (which remains insufficiently explicated) and otherwise appeals to elements that are elaborated in more detail in other works. In the second domain, cognitive neuroscience, we found that, although CTM can account for paradigmatic cognitive phenomena, it does not explain them in a way that is better than existing theories. Similar results are obtained for the third domain: the model presently fails to deliver on its promise to satisfactorily connect consciousness research to theoretical computer science. Specifically, we have given reasons to think CTM fails to deliver on its promise to be a model of consciousness that offers explanatory value (or progress) in terms of its relation to TCS. When viewed from the domain of philosophy of mind, this failure stems from an underspecification of central concepts that we take to be critical to our understanding of the phenomenon. However, we have highlighted that extant theories of consciousness suffer from similar issues, so this is not a problem unique to B&B's account (except insofar as they claim CTM constitutes progress). Similar issues recurred when considering CTM in the domain of cognitive neuroscience. While CTM can explain paradigmatic cases discussed in this field, it does not avoid a fundamental problem that haunts these debates, namely determining what relevance (if any) these cases have to consciousness in the first place. Again, this is not an issue unique to B&B's approach (except insofar as it pertains to a claim of progress). Finally, when considering CTM from the domain of computation, we highlighted central tensions in the model. In particular, we highlighted B&B's
attempts to import the insights TCS has contributed to the field of computer science (e.g., decidability and computational complexity) into the field of interdisciplinary consciousness research, without explicating how and whether the same insights apply, or whether they are even relevant. CTM also brushes aside the possibility of inevitable trade-offs: if CTM is to harmonize with TCS yet be a valuable model of consciousness, should we have a model that is consistent with the theoretical foundations of computer science, or one that is consistent with our best understanding of human consciousness? As we have seen, choosing one might only come by sacrificing the value of the other. However, while these overwhelmingly negative results certainly invite skepticism concerning CTM, they should not be taken as a reason to throw the baby out with the bathwater; rather, they point to areas where there is need, and room, for improvement. After all, would anyone expect the first step towards a computational model of consciousness to be satisfactory in every respect? In any case, to our minds, what one should look for in any new model of consciousness is not perfection but potential. So, while our evaluations in the different domains largely converge on the conclusion that any claims that CTM constitutes progress are premature, they have also turned up a range of interesting questions and issues that may inspire further development and clarification of the CTM model. Therefore, to underscore that CTM has potential, we here offer four avenues of future research.17
The first thing we wish to highlight is the descriptive complexity of Brainish. As discussed in section 2, Brainish seems to be essential for the "feeling of consciousness" in CTM. We concluded that Brainish is in need of further specification to have any explanatory power in the domain of philosophy of mind. In section 3, an upshot of B&B's explanation of change blindness seemed to be that the CTM advances an extremely sparse view of perceptual experience. One possible issue with Brainish is the language's extraordinary features, e.g., its ability to "express and manipulate images, sounds, tactile sensations, and thoughts-including unsymbolized thoughts" (p. 4) better than any 'outer' language. This raises some obvious red flags for complexity theorists. In fact, from the field of descriptive complexity theory, which describes complexity classes in terms of the logic required to express the languages in them, comes the wisdom that a language's expressiveness directly corresponds to the problems it can describe. For instance, Fagin's theorem (Fagin, 1974) established that the fragment of second-order logic restricted to existential quantification captures every query computable in nondeterministic polynomial time (NP),18 and second-order logic with transitive closure yields polynomial space (PSPACE) (Immerman, 1998); and both NP and PSPACE encompass problems that are widely believed to be intractable. Thus, since Brainish, allegedly, is more expressive than any natural language, it seems to presuppose computational powers that are hard to conceive of (indeed, literally inexpressible in natural language). Be that as it may, further development and clarification of what Brainish is, its contribution to consciousness, and how this contribution is effectuated both neurally and computationally, would constitute extremely interesting developments of CTM, and we are encouraged by recent (yet unpublished) work in this regard (Liang, 2022).
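The descriptive-complexity correspondences invoked here can be stated compactly (these are standard results, not specific to CTM):

```latex
\mathrm{NP} \;=\; \exists\mathrm{SO}
  \quad\text{(Fagin, 1974)}
\qquad
\mathrm{PSPACE} \;=\; \mathrm{SO(TC)}
  \quad\text{(Immerman, 1998)}
```

Read: the queries expressible in existential second-order logic are exactly those decidable in nondeterministic polynomial time, and adding a transitive-closure operator to second-order logic yields exactly polynomial space; expressive power and computational resources move in lockstep, which is why Brainish's alleged expressiveness invites complexity-theoretic scrutiny.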
Another area in which there is potential for CTM concerns the complexity of learning. A crucial element of CTM's "feeling of consciousness" is its "predictive dynamics (cycles of prediction, feedback, and learning)" (p. 3), which are carried out by the LTM processors, each equipped with machine learning algorithms (p. 6). However, our best theories about learning, and TCS more broadly, seem unable to address the empirical success of deep learning, e.g., the behavior of large-scale generative models on general tasks (e.g., text generation). As a learning system, CTM therefore faces the same concern, known as the "paradox of deep learning"; i.e., to understand why deep learning works so well despite the lack of theoretical explanations (Zhang, Bengio, Hardt, Recht, & Vinyals, 2021). The use of machine learning is, of course, naturally appealing given the recent advancements in AI and the track record of TCS in dealing with paradoxes,19 but it may also come at the sacrifice of the explanatory value afforded by the classical pillars of TCS (e.g., computability and computational complexity).
Thirdly, interesting issues pertain to the role of energy in biological brains. If complexity considerations play a major role in CTM, this raises the question: to what extent are the natural resources for computational systems, time (as state transitions) and space (as bit memory), relevant for explaining human consciousness? Neurophysiology tells us that (metabolic) energy is the primary resource that sustains life and underpins the cognition of all organisms, from individual cells to multi-cellular mammals with large brains. While a conscious being that is resource-rational with regard to energy certainly benefits from reducing the amount of time and memory it needs to tackle problems in its ecology (e.g., by developing more efficient strategies), putting too much explanatory emphasis on such computational resources risks downplaying the energy cost-conservation processes that are not only essential to survival, but arguably in a better position to explain the internal workings of the conscious being (Tononi, Boly, Gosseries, & Laureys, 2016).
Fourthly, recasting the CTM as a unificationist framework, rather than a 'stand-alone' theory, could be an interesting avenue of development. In the above, we have highlighted that the many appeals to elements already (and better) cashed out by other theories subverted B&B's claims of novelty. However, if the CTM is viewed as a unificationist framework attempting to bring together extant theories, this overlap with other theories would be the objective. Now, in the above we have exclusively evaluated the CTM within the scope of its claims of novelty and progress with respect to the domain of consciousness studies. We found these claims wanting. However, it is worth keeping in mind that some features of the CTM may have value outside our scope and domain.20 Starting with potential value outside the domain of consciousness studies, the mathematical definitions of chunks offered by B&B may be important in the domain of cognition. Similarly, the up-tree competition constitutes an intriguing approach to the selection problem, and one of us has advanced an almost identical approach to the selection problem, albeit one not couched in computational terms (Kirkeby-Hinrup, 2004). Outside the scope of novelty and explanation, the compatibility of the CTM with other theories (e.g. Global Workspace Theory) can also be seen as a strength of the CTM. While it remains unclear whether the CTM can be converted into a unificationist framework as suggested above, there is nevertheless prima facie merit in being a minimalist model that appears to be compatible with multiple extant theories. This allows for many possible avenues of development.
To sum up, while the present evaluation has highlighted how the current case for CTM (in terms of explanatory power and progress) is overstated, we acknowledge that it is a theory in its early stages of development. More work is needed, and our evaluation here should not be taken as discouragement. We have therefore sought, throughout this evaluation, to highlight interesting questions and outstanding issues in order to offer potentially fruitful avenues for the future development of the theory. In that vein, it is also worth acknowledging that the CTM is part of an emerging new area of the study of consciousness. Approaching the study of consciousness through the deployment of computational and mathematical models is an exciting new addition to the field, and being part of the vanguard in this area is a credit to the CTM in and of itself.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

A. Kirkeby-Hinrup: Writing – review & editing, Writing – original draft, Methodology, Investigation, Conceptualization. Jakob Stenseke: Writing – review & editing, Writing – original draft, Methodology, Investigation. Morten S. Overgaard: Writing – review & editing, Writing – original draft, Methodology, Investigation, Conceptualization.