
Neural Networks

Volume 10, Issue 7, 1 October 1997, Pages 1303-1316

1997 Special Issue
Consciousness and neural cognizers: a review of some recent approaches

https://doi.org/10.1016/S0893-6080(97)00063-4

Abstract

This paper synthesises three diverse approaches to the study of consciousness in a description of an existing program of work in Artificial Neuroconsciousness. The three approaches are drawn from automata theory (Aleksander, 1995, Aleksander, 1996), psychology (Karmiloff-Smith, 1992; Clark & Karmiloff-Smith, 1993) and philosophy (Searle, 1992).

Previous work on bottom-level sensory-motor tasks from the program is described as a background to the current work on generating higher-level, abstract concepts which are an essential part of mental life. The entire program of work postulates automata theory as an appropriate framework for the study of cognition. It is demonstrated how both the bottom-level sensory-motor tasks and abstract knowledge representations can be tackled by a single neural state machine architecture. The resulting state space representations are then reconciled with both the psychological and philosophical theories, suggesting the appropriateness of taking an automata theory approach to consciousness.

Introduction

This paper brings together three recent approaches in the study of consciousness: a neurally based theory of artificial consciousness (Aleksander, 1995; Aleksander, 1996), a philosophy of mind based on cognitive acts (Searle, 1992) and a theory of human development based on a redescription of learned competences (Clark & Karmiloff-Smith, 1993). The purpose of this paper is to present a neural automata theory approach to consciousness, rather than to describe particular machine architectures, which are liable to undergo refinement as the work progresses. Particular details of the design of the experimental work described can be found in the appropriate references.

For continuity and completeness, the paper will commence with a brief recapitulation of the salient propositions of the Artificial Consciousness Theory which are later compared with Searle's philosophy of mind (Searle, 1992). Searle insists that the study of cognition is the study of consciousness, just as the study of biology is the study of life.

[The brain's] special feature, as far as the mind is concerned, the feature in which it differs remarkably from other biological organs, is its capacity to produce and sustain all of the enormous variety of our conscious life. By consciousness I do not mean the passive subjectivity of the Cartesian tradition, but all of the forms of our conscious life—from the famous “four f's” of fighting, fleeing, feeding and fornicating, to driving cars, writing books and scratching our itches. All of the processes that we think of as especially mental—whether perception, learning, inference, decision making, problem solving, the emotions, etc.— are in one way or another crucially related to consciousness. Furthermore, all of those great features that philosophers have thought of as special to the mind are similarly dependent on consciousness: subjectivity, intentionality, rationality, free will (if there is such a thing), and mental causation. More than anything else, it is the neglect of consciousness that accounts for so much barrenness and sterility in psychology, the philosophy of mind, and cognitive science.

The study of the mind is the study of consciousness, in much the same sense that biology is the study of life. Of course, biologists do not need to be constantly thinking about life, and indeed, most writings on biology need not even make use of the concept of life (Searle, 1992).

So, taking Searle's argument that the study of mind is the study of consciousness, this paper sets out to examine neural network models of mind. Having defined the position on consciousness adhered to, and claimed that the study of mind is the study of consciousness, all that remains is to define the stance taken on mind in this paper. The mind is assumed to be directly responsible for cognition. Clark & Karmiloff-Smith (1993) make a powerful distinction between complex information processors, which include computers, and genuine cognizers.

The sea slug and the VAX mainframe are both effective processors of information. Yet it is only human beings, and perhaps some higher animals, who are credited with genuine thoughts. Is this mere prejudice, or have we somehow latched on to a genuine joint in the natural order? If it is a genuine joint in nature, what feature or set of features mark it?

The hypothesis to be considered is that there is indeed a joint in the natural order such that humans fall on one side and many other systems (including some quite sophisticated information processors) fall on the other. The joint, we argue, marks a pivotal difference in internal organisation. The representational redescription model embodies specific hypotheses about the nature of this joint (Karmiloff-Smith, 1979a; Karmiloff-Smith, 1979b; Karmiloff-Smith, 1986; Karmiloff-Smith, 1990; Karmiloff-Smith, 1992). For genuine thinkers, we submit, are endowed with an internal organisation which is geared to the repeated redescription of its own stored knowledge. This organisation is one in which information already stored in an organism's special-purpose responses to the environment is subsequently made available, by the RR process, to serve a much wider variety of ends. Thus knowledge that is initially embedded in special purpose effective procedures subsequently becomes a data structure available to other parts of the system (Clark & Karmiloff-Smith, 1993).

There has been some discussion about the boundaries between cognizers and non-cognizers (Aleksander, 1996) which suggests that animals and even machines could redescribe their knowledge. Apart from noting this slight objection, the general principle that redescription is an essential feature of cognition is not at stake; there is only a slight difference of opinion over the position and abruptness of the divide.

In the search for artificial consciousness, the challenge, therefore, is to bestow a neural network with the ability to re-represent its own internal states. Previous work (Browne & Parfitt, 1997) suggests that such a system might well add further weight to the numerous refutations (Smolensky, 1987; Pollack, 1990; Van Gelder, 1990; Aleksander & Morton, 1993) of Fodor and Pylyshyn's attack on connectionist models of cognition (Fodor & Pylyshyn, 1988). The previous work further suggests that recursive redescription of system representations might provide a possible mechanism by which a pseudo-symbolic system, which many agree is the basis of cognitive function, could arise in a connectionist network. The general model of emergent hierarchical data structures presented here also has much in common with Harnad's proposals of the recursive grounding of language (Harnad, 1992).

Before the redescriptive process can be discussed in detail, the type of neural architecture and representational form being proposed must be described. Karmiloff-Smith's representational redescription model (Karmiloff-Smith, 1992) is then presented in detail, moving to a suggested neural architecture with the capacity for spontaneous re-representation of its own internal states. The paper concludes with a discussion of the relationship between Searle's philosophy, the Fundamental Postulate (Section 2.2) of the Artificial Consciousness Theory (Aleksander, 1996) and the proposed cognitive neural architecture which emphasises the importance of examining the mental processes occurring in the system as conscious in some form—the authors would stress an artificial, rather than biological form.

Section snippets

Neural state machine models of cognition

For over 20 years, one of the authors (I.A.) has been suggesting that the capacity for thought can be encapsulated in a machine with an adaptable state structure (Aleksander & Hanna, 1975; Aleksander, 1996). An adaptable state machine can be implemented in a machine with some form of learning capacity. The contemporary versions of such machines have been dubbed “Neural State Machine Models (NSMMs)” (Aleksander & Morton, 1993); the Multi-Automata General Neural Unit System (MAGNUS) (
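
To make the idea of an adaptable state structure concrete, the sketch below (our own minimal illustration in Python, not the MAGNUS or NSMM implementation of the cited references) shows a state machine whose next internal state is a learned function of the current state and the external input, so that training reshapes the state structure itself. The binary threshold transition and delta-rule update are assumptions made for brevity.

    import numpy as np

    # Minimal sketch of an adaptable neural state machine: the next internal
    # state is a learned function of the current state and the external input.
    class NeuralStateMachine:
        def __init__(self, input_size, state_size, learning_rate=0.1, seed=0):
            rng = np.random.default_rng(seed)
            # One weight matrix maps the concatenated (input, state) vector to
            # the next state; training adapts the state-transition structure.
            self.W = rng.normal(scale=0.1, size=(state_size, input_size + state_size))
            self.state = np.zeros(state_size)
            self.lr = learning_rate

        def step(self, x):
            # Next state is a thresholded (binary) function of input and state.
            z = np.concatenate([x, self.state])
            self.state = (self.W @ z > 0).astype(float)
            return self.state

        def train_transition(self, x, target_state):
            # Delta-rule update so that the pair (input, current state) comes
            # to re-create the target pattern on the internal state.
            z = np.concatenate([x, self.state])
            error = target_state - (self.W @ z > 0)
            self.W += self.lr * np.outer(error, z)
            self.state = target_state.copy()

    # Usage: train the machine so a sensory pattern drives the internal state
    # into a copy of itself, then let the trained transition run freely.
    nsm = NeuralStateMachine(input_size=8, state_size=8)
    pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
    for _ in range(20):
        nsm.train_transition(pattern, pattern)
    print(nsm.step(pattern))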

Representational redescription

The tasks presented so far have all concerned bottom-level, sensory-motor processing. Earlier, we suggested that cognition arises only through a process of redescription of lower-level representations into higher-level ones. That process is now described in detail.

The representational redescription (RR) model has been developed by Karmiloff-Smith over a period of time. A comprehensive description is given in Karmiloff-Smith (1992). At the heart of the RR model is an internal process of

Internally driven redescription in a single neural architecture

The task of building a single neural architecture with an internally driven redescriptive mechanism can now be split into two questions. (1) Given the explanatory power of the suggested complex state approach to redescription, what form of internal representation might replace the externally provided class nouns? (2) How could a single neural architecture derive the appropriate representations?
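
As a purely illustrative response to question (2), and not the mechanism proposed in the paper, the toy sketch below clusters the internal state vectors recorded during lower-level activity into a small set of prototype codes; internally derived prototypes of this kind are one conceivable stand-in for the externally provided class nouns of question (1). The function name and the clustering choice are our own assumptions.

    import numpy as np

    def derive_prototypes(states, n_prototypes, n_iters=50, seed=0):
        # Hypothetical helper: simple k-means over recorded internal state
        # vectors, returning prototype codes and each state's cluster label.
        rng = np.random.default_rng(seed)
        prototypes = states[rng.choice(len(states), n_prototypes, replace=False)]
        labels = np.zeros(len(states), dtype=int)
        for _ in range(n_iters):
            # Assign each recorded state to its nearest prototype.
            dists = np.linalg.norm(states[:, None, :] - prototypes[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each prototype to the mean of its assigned states.
            for k in range(n_prototypes):
                members = states[labels == k]
                if len(members) > 0:
                    prototypes[k] = members.mean(axis=0)
        return prototypes, labels

    # Usage: states recorded from a lower-level task are redescribed as a small
    # set of abstract codes that other parts of the system could then reuse.
    recorded_states = np.vstack([
        np.random.default_rng(1).normal(loc=0.0, size=(50, 8)),
        np.random.default_rng(2).normal(loc=3.0, size=(50, 8)),
    ])
    prototypes, labels = derive_prototypes(recorded_states, n_prototypes=2)
    print(prototypes.shape)  # (2, 8): two internally derived higher-level codes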

Comparison with other approaches

It is certainly not intended to claim in this paper that automata theory is the only approach to understanding consciousness. The aim of this discussion is to argue that the described automata-based theory does provide a useful theoretical framework for exploring consciousness, or at least the particular aspects focussed on here: redescription and abstraction. It is, therefore, important to note the essential features of iconically trained neural state machines that differentiate them from

Conclusions: where does consciousness come in?

A fair critique of the above would be that all that has been suggested is a method for learning to represent worlds containing nested concepts, so why make a fanciful reference to the difficult and woolly concept of consciousness? In fact, why refer to a program of work which goes under the heading of Artificial Consciousness at all? The answer lies in the fundamental postulate and therefore its associated corollaries. A neural state machine has been shown to satisfy the requirements of

References (47)

  • Aleksander, I. & Morton, H. (1993). Neurons and symbols. Chapman and...
  • Aspray, W. (1990). John von Neumann and the origins of modern computing. MIT...
  • Bienenstock, E.L., et al. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in the visual cortex. Journal of Neuroscience.
  • Browne, C., et al. (1996). Digital general neural units with controlled transition probabilities. Electronics Letters.
  • Browne, C.J. & Parfitt, S. (1997). Iconic learning and epistemology. In Does representation need reality? Proceedings...
  • Clark, A. & Karmiloff-Smith, A. (1993). The cognizer's innards. Mind and Language.
  • Crick, F. (1994). The astonishing hypothesis. New York:...
  • Dennett, D.C. (1991). Consciousness explained. Penguin...
  • Descartes, R. (1637). Discourse on...
  • Evans, R.G. (1996). A neural architecture for a visual exploratory system. PhD thesis, Imperial College of Science,...
  • Fodor, J.A. & Pylyshyn, Z.W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition.
  • Greenfield, S. (1995). Journeys to the centres of the mind. New York:...
  • Harnad, S. (1982). Metaphor and mental duality. Hillsdale, NJ: Lawrence Erlbaum...