
Computational Thinking and Notional Machines: The Missing Link

Published: 11 December 2023


Abstract

In learning to program and understanding how a programming language controls a computer, learners develop both insights and misconceptions whilst their mental models are gradually refined. It is important that the learner is able to distinguish the different elements and roles of a computer (compiler, interpreter, memory, etc.), which novice programmers may find difficult to comprehend. Forming accurate mental models is one of the sources of difficulty inextricably linked to mastering computing concepts and processes, and to learning computer programming.

It is common to use some form of representation (e.g., an abstract machine or a Computational Agent (CA)) to support technical or pedagogic explanations. The Notional Machine (NM) is a pedagogical device that entails one or more computational concepts, originally described as an idealised computer operating with the constructs of a particular programming language. It can be used to support specific or general learning goals and will typically have some concrete representation that can be referred to. Computational Thinking (CT), which is defined as a way of thinking that is used for [computational] problem solving, is often presented as using a CA to carry out the information processing required by a solution.

In CT, where the typical goal is to produce an algorithm or a computer program, the CA seemingly serves a purpose very similar to an NM. Although their roles change through the different stages of development (of the learner and of the curriculum), CAs and NMs can be seen as versatile tools that connect a learner’s mental model with the conceptual model of a program. In this article, we look at this relationship between CAs and NMs, and indicate how they would look at different stages of learning. We traverse the range of definitions and usages of these concepts, and articulate models that clarify how they are viewed in the literature. This includes exploring the nature of machines and agents, and how historical views of these relate to modern pedagogy for computation. We argue that the CA can be seen as an abstract, simplified variant of an NM that provides a useful perspective to the learner, supporting them in forming robust mental models of NMs more efficiently and effectively. We propose that teaching programming should make use of the idea of a CA at different stages of learning, as a link that connects a learner’s mental model to a full NM.


1 INTRODUCTION

This article discusses the implications of two representational concepts that are commonly used in computing and programming education to support learners in developing mental models and conceptualisation, namely the Notional Machine (NM) and the Computational Agent (CA) in Computational Thinking (CT). CT is a concept that reflects the need for problem solving by designing and analysing algorithmic processes for computational systems (detailed in Section 2), and it often refers to an “information-processing agent” or a “CA” that performs the computation. The NM, which was first introduced by Du Boulay et al. [23] (detailed in Section 3), is “an abstract computer responsible for executing programs of a particular kind” [54]. Clearly the CA and NM are closely related, and both are used for thinking about pedagogy, but their purpose is subtly different. In this article, we explore the similarities and differences, and also navigate the range of definitions that have been used for these two concepts.

In a programming learning experience, having the learner’s mental model of program execution be consistent with a rich NM would be the ideal scenario of success, but this is very difficult to achieve. Both the CA and NM concepts offer the learner a representational and interactive perspective that supports their conceptualisation and mental model development. Often in programming education, both these idealisations are facilitated either on a physical computer (e.g., visualisation software) in a “plugged-in” learning environment, on a deterministic device (e.g., a Bee-Bot or Sphero), or with the aid of a fellow learner in “unplugged” learning.

Learners develop understandings of how a programming language can control a computer when they learn to program. As new knowledge or understanding is acquired, some form of mental model inevitably forms in the learner, who interacts with it mentally. In the process, both insights and misconceptions can arise as the learner’s mental model of how the system works is gradually refined. Education regularly uses simplified models (e.g., the Rutherford-Bohr model of the atom), analogies and representations to help the learner develop good mental models, and this is inevitably necessary when teaching computing as well. Here we mean “mental model” in the sense originally defined by Johnson-Laird [32]: a cognitive construct which is a functional (but simplified) model of some aspect of the world, with “the same ‘relation structure’ as the phenomenon that it represents”. These models shape perception, attention and behaviour, and can be used in reasoning and problem solving.

In teaching programming, the NM is not necessarily taught or introduced to learners explicitly—students would be taught elements of the NM but are unlikely to be told that they are learning an “NM”. Rather, it often stays as an abstract model of a computer created by the teacher in the context of teaching. It is used as a pedagogical device that helps learners understand: it represents something that learners can interact with (often mentally), while teachers use analogies that provide scaffolding to help understanding [25]. In this context, when introducing students to computing via programming, attention to NMs in education research becomes essential, whether they are implicit or explicit in the teachers’ or learners’ thinking, especially when engaging learners with the runtime dynamics of computer programs. However, CT, a concept adopted by many school curricula around the world, uses the CA to perform computation in ways that are constrained by an explicit or implicit NM. This indicates a relationship between the CA and the NM that is worth closer investigation.

Computational problem solving involves exercising CT to transfer ideas into the design and analysis of algorithmic processes for computational systems, and/or turning them into computer programs. These thought processes can start from early childhood, and mental models can form with or without the presence of an actual computer or the aid of representational tools like CAs or NMs. If Ada Lovelace could program (with trace tables) for Babbage’s analytical engine without even seeing it, was she thinking computationally? What was the CA in her CT? What was her mental model like, and how did she interact with it? Did Lovelace develop a model of an NM? Dijkstra [22] advocated that the ability to prove the correctness of a program is more straightforward than testing it as an implementation (for a number of reasonable test cases), which is an extreme example of developing an NM without having the CA available! Bower and Falkner [13] show that accurate NMs underpin successful performance in CT and suggest understanding NMs as a prerequisite for effective teaching of computing.

Considering the notable reference to CT in school curricula, investigating the relationship between CT concepts and NMs can give broad insights into how these two concepts relate to and complement each other in the teaching and learning of introductory programming, especially at grade school level. The inclusion of CT with a focus on its original definition (see Section 2) can hide the importance of the NM in the curriculum context, partly because the CA is generally not well defined (in fact there has even been debate about what the agent can be [18]), and also because the link has rarely been made explicitly. Reflecting on how the two concepts, CT and NM, have often been referred to in various contexts of learning computing as well as programming, we believe that CAs and NMs are used by educators differently at different stages of learning, in changing roles and/or in evolving contexts.

In this article, we look at the distinctions between CAs and NMs in computing, and provide insights into how those distinctions may and may not be helpful in the teaching and learning of introductory programming.


2 COMPUTATIONAL THINKING

Wing [56] brought CT to the fore in her 2006 article and later clarified the definition as follows:

“[T]he thought processes involved in formulating problems and their solutions, so that the solutions are represented in a form that can be effectively carried out by an information-processing agent” [57].

We will use this as a working definition for CT, although in the discussion we will be touching on other definitions that appear in the literature. In this definition, Wing suggests that CT is a way that humans can think about solving problems that incorporates the set of mental tools used in computer science. Referring to this early definition, CT is explained in much simpler terms by Wing [56] as “thinking like a computer scientist to solve problems”, a view that has been picked up by others later [16]. CT is the term often used to denote the conceptual core of computer science [16] and an approach to problem solving that consolidates logic skills with core computer science concepts [45]. Denning and Tedre [20] define CT as the mental skills and practices used for two purposes: (1) designing computations that get computers to do what is asked, and (2) explaining and interpreting the world as a complex of information processes.

This raises two questions. What is this special way a computer scientist thinks to solve a problem? And how does their way of thinking differ from others? Curzon et al. [18] survey many definitions and discussions of CT to find common themes. From this, they suggest that CT is about developing systems that involve information processing, and it is the focus on algorithmic solutions based on computation that differentiates it from other problem-solving approaches. CT has been touted as a fundamental skill for everyone, not just computer scientists [56], and there are good arguments for this based on students understanding the digital world that they live in; some authors take this further and suggest that CT skills can be useful in everyday situations, such as decomposing large problems into small ones [8, 31, 52], although this risks making the concepts too broad to be meaningful [19, 35, 48]. Denning and Tedre [20] also point out that CT for a beginner differs from that for a professional: the former is a simple, practical understanding of computing concepts, and the latter is critical, complex and technical.

Figure 1 collects some common views of CT and their relationship to the actual physical machine (i.e., the computer). Views towards the left of the spectrum lean more strongly on the physical device in explaining the concept, whereas views towards the right relate less to the device and concentrate more on mental comprehension of computation, or on relating computation to more generic concepts. In views where CT leans extremely towards computation (extreme left of the spectrum), the CA is necessarily a computer and therefore precision of instructions is essential. In an extremely generic view of CT (extreme right), a human with a set of loose instructions who incorporates their own judgement can also be allowed as a CA (e.g., a cook following a cookbook recipe). However, CT’s applicability to computation in such a generic extreme is vague and doubtful, because computation (at present) is digital, and relies on numeric calculations and symbol manipulation. The dotted lines indicate the positioning of different views in this spectrum but are not firm lines in the continuum.

Fig. 1. Variations of CT definitions in relation to a physical computer.

Many objective thoughts that humans have involving a sequential flow can be misunderstood as CT (a well-organised daily work routine, tidying up a closet, stacking books by size, etc.). However, only thought processes in rational thinking with a goal of producing an algorithm or a computer program/computing system should be seen as “CT”; an algorithm or program solves a class of problems in general, whereas such everyday routines are one-off solutions. Such thought processes are often seen as closely related to “problem solving”; they can be defined rather clearly when they are expressed in relation to information processing, and can be easily implemented using a computer if they are expressed in the form of a set of instructions to a “CA” for processing. As such, the CA is more strongly relatable to computation, to programming and to the physical device. In this article, we are more interested in CT where the typical goal is to produce an algorithm or a computer program, rather than the broader, generic perspective that is sometimes used.

Curzon et al. [18] also point out a common view of CT as a way of thinking that is used to develop solutions in a form that ultimately allows ‘information processing’ or ‘computational’ agents to execute those solutions. Views on CT can differ based on what the CA can be, but there is common ground that CT is not simply solving a problem to provide a solution, but solving it in a way that directs an “agent” to arrive at the solution by following instructions. This view implies that the problem solver need not be the executor of the solution themselves, but positions them to look at the problem scenario from an external perspective in which they can oversee how a CA (oftentimes, the computer) should be instructed to solve it. As shown in Figure 1, a CA may or may not be tightly coupled to a computer, depending on how it is used in a learning context (it might be a human following instructions literally). It helps novice programming learners realise that the CA is merely the device that does the job efficiently, and does not think for itself or on the programmer’s behalf. Denning and Tedre [20] point out that efficiency is a key feature of computers that makes them the obvious CA to use: “The magic [of computers] is nothing more than a machine executing large numbers of very simple computations very fast”. Aho [6] points out that an important part of this process is finding appropriate models of computation with which to formulate the problem and derive its solutions. The capability of the CA is a key consideration when discussing CT!


3 NOTIONAL MACHINES

The NM is a pedagogical device that entails one or more computational concepts. It can be used to support specific or general learning goals and will typically have some concrete representation that can be referred to. This definition is necessarily general because the concept has had different interpretations over time, as this article seeks to explore.

Du Boulay et al. [23] introduced the concept of the NM, defining it as follows:

“[A]n idealised, conceptual computer whose properties are implied by the constructs in the programming language employed”.

The pedagogical rationale is that the mastery of every detail of how a computer works is not essential to understand the dynamics of the programs at runtime. NMs essentially help learners understand the transactions and side effects of usually invisible processes that the instructions of their program cause inside the computer.

Robins et al. [47], summarising the view of du Boulay et al. [23], present the NM as a “model of a computer as it relates to executing programs”. According to Berry and Kölling [9], an NM must be able to explain all observable behaviour of the real machine (registers, fetch cycles, memory, etc.), and reasoning about it must allow accurate predictions to be made about the behaviour of the real machine. Du Boulay et al. [23] highlight that experienced programmers may find special languages designed for pedagogical purposes, with simple NMs, “distasteful”, yet languages intended for teaching take into account different human factors than languages used by professional programmers.

The purpose of NMs is mostly pedagogic, to draw the learner’s attention to hidden aspects of programming or computing [23, 25, 47, 54]. Some authors have tried to bring the NM towards a more conceptual view, attempting to relate it to a person’s mental modelling about computing, not limiting it to a programming language or a physical device. Fincher et al. [25], discussing the relationship among conceptual models, NMs and mental models, suggest that teachers create NMs as pedagogical devices to help learners understand a (particular) conceptual model (i.e., to form their own mental model). They describe an NM that has a generic purpose (i.e., pedagogic) and generic function (i.e., uncovering hidden aspects of programming or computing), and a particular focus (i.e., a specific aspect of a program and its behaviour) and particular representation that highlights the focus. A learner’s mental model (about computing or programming) is often incomplete, erroneous, lacking firm boundaries and changing, and an NM provides a bridge that abstracts a detailed and precise conceptual model, often using analogies that provide scaffolding to help refine the learner’s mental model. Thus, this alternate definition describes an NM as a special kind of abstract model that supports conceptualisation, may not be as declarative as a conceptual model, yet represents something that can be interacted with (often mentally) [25].

Sorva [54] agrees that some form of an NM is present when learners begin to program, and that an NM is implied by both the programming language and the paradigm used in learning. He suggests that several NMs at different levels of abstraction may be used to describe the execution of a single program. Krishnamurthi and Fisler [33] define it as a “human-friendly abstraction” that explains program behavior of a “given language or family of closely related languages”.

Guzdial et al. [27] recognise the need for a distinction between a general description of behaviour, independent of a specific program, and making the behaviour of a specific program explicit, suggesting that learners may need to shift between the two. In that case, an NM can be seen as an evolving abstraction of an execution environment, for comprehension and learning purposes. However, it should also be sufficiently simple to learn, yet sufficiently comprehensive to solve the problem of interest [13, 23, 25], and should convey descriptions of semantic behaviour [21]. An NM in such a context is loosely coupled with a programming language, does not necessarily relate to a physical device (i.e., a computer) for explanations, and may range between a physical machine and a conceptual machine in describing a computing concept. An NM may not effectively give an observational (external) perspective, as it is not explicitly represented in the task like a CA, especially when it is explained through programming constructs. The learner is expected to be “aware” (at least partially) of the presence of an NM through constructs explained in a lesson, as opposed to a CA that is explicitly represented for them either tangibly or intangibly. An NM essentially explains the operational (internal to the CA) perspective, in that it helps learners comprehend a concept better by (mentally) interacting with it.

NMs can exist even if the real machine does not. For example, it is a popular yet believable misconception that Charles Babbage designed the analytical engine, and Ada Lovelace “programmed” for it even though programming languages did not yet exist.1 But to do so, she would have needed to develop her own version of an NM that enabled her to predict what would be produced by a machine that she never got to see. Likewise, a fault-tolerant quantum computer may not have yet been constructed, but algorithms have been designed for it by people who have an NM for this machine that may never exist! Nevertheless, what could the NM for a quantum computer look like, given that it involves operations that cannot be described with non-quantum physics? Is it possible to have an NM for quantum computing without an in-depth education in quantum physics? Can a useful approximation that is as formal as an NM exist that does not require understanding quantum physics?

Another computational system worth considering is an AI system, which is typically based on probabilistic operations that may not be possible to understand in detail, and can lead to outcomes that might be correct, but cannot easily be explained in terms of the computation or decisions that led to them. To the extent that such a system can be thought of as implementing computational processes, what would a potential NM look like?

A possible way to describe an NM for advanced computer systems like quantum computing or AI could be to define certain fundamental operations expected of such a system without otherwise explaining those operations in detail. For example, in a quantum computer, “find every possible divisor of a number” could be defined as a single, unitary operation, never having to describe it as a sequence of quantum operations. Similarly, in an NM for AI with such abstract basic operations, accepting that some of them will be probabilistic is also reasonable. Thus, defining NMs for advanced computer systems becomes possible by allowing fundamental operations to be at a far more abstract level than that of conventional computing. A teacher can decide the level of abstraction of an NM according to the pedagogic needs they intend to fulfil.
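To illustrate how such an NM might expose deliberately abstract primitives, the following Python fragment (our own sketch, not taken from the literature) treats “find every possible divisor of a number” and a probabilistic classification as single, opaque operations; the function names and bodies are purely hypothetical stand-ins.

```python
import random

def all_divisors(n):
    # Treated as a single, indivisible NM operation: the learner reasons
    # about what it produces, never about how (e.g., via quantum search).
    return [d for d in range(1, n + 1) if n % d == 0]

def classify(text):
    # An abstract, probabilistic primitive for an AI-flavoured NM: the NM
    # promises only a label and a confidence, not an explanation.
    return random.choice(["positive", "negative"]), random.uniform(0.5, 1.0)

# Programs written against this NM compose the primitives without
# looking inside them.
print(all_divisors(28))      # [1, 2, 4, 7, 14, 28]
print(classify("a review"))  # e.g., ('positive', 0.83)
```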

Figure 2 positions these different definitions/explanations in a spectrum with regard to their emphasis in relating the NM to the physical device/programming language. Duran et al. [24] provide a range of uses and implied definitions of NMs, which specifically supports this claim. Early definitions (i.e., [23, 47]) are strongly coupled to the physical end of the spectrum. More recent definitions such as those of Fincher et al. [25] lean towards conceptualisation of computation, where the NM is seen as a conceptual device that connects mental and conceptual models about computing and loosely relates to a physical device/programming language. Sorva [54] proposes different levels of abstraction, potentially spanning the spectrum. The dotted lines in Figure 2 are indicators of the positioning of different views in this space but are not firm divisions in the continuum.

Fig. 2. Variations of NM definitions in relation to a physical device and to the conceptual landscape.

Many discussions of NMs agree that there should exist a general NM that is a full computer, but no one has ever really defined it [27]. Work has focused on NMs for specific programming languages or contexts, because a general NM is too nebulous given the current understanding within the community [24]. Accordingly, for the discussions regarding NMs here, our general understanding of an NM is that it has to correctly predict the outcome of a given program (and does not need to accurately represent what is happening in the physical computer). An NM should allow prediction of program outcome for any situation that will be seen within a given context, and the term “NM” is used casually to reflect this understanding. Reflecting this range of views in the literature, the following distinct definitions for NMs are used at different points, when necessary:

NM\(_{physical}\): NM that explains all observable behaviour of the real machine, and supports accurately predicting the behaviour of the real machine, as explained in the work of du Boulay et al. [23] or Robins et al. [47] (far left of the continuum in Figure 2).

NM\(_{generic}\): NM that is implied by both the programming language and the paradigm used in learning, as explained in the work of Krishnamurthi and Fisler [33] and Sorva [54], or implied by Nelson et al. [42] (mid region of the continuum in Figure 2).

NM\(_{conceptual}\): NM that attempts to relate to a person’s mental modelling about computing, without limiting itself to a programming language or a physical device, as explained in the work of Dickson et al. [21] or Fincher et al. [25] (far right of the continuum in Figure 2).

All three of the preceding specific definitions also fit the general understanding mentioned previously, but they are used individually at the points where we have to be specific.

Despite its many useful pedagogic implications and applications, the true mechanisms of an NM may not be obvious to a learner, or indeed to teachers who are new to computing. As explained by Fincher et al. [25], NMs are created by teachers as pedagogic devices that help learners understand concepts intended to be communicated through a particular learning experience, most likely programming. NMs are implicitly defined by many visualisations of program execution [53], computational or otherwise. Educators make use of a vast variety of teaching tools to convey an NM and to establish accurate mental models. These range from analogies and metaphors in the language they use, including physical artifacts, graphics and diagrams, to sophisticated software visualisation tools.

Novice programmers develop different kinds of intuitive mental representations (mental models), often initially based on superficial language features and analogies [47, 54]. These mental models may not be as accurate, efficient or effective as those held by an experienced practitioner, but an NM offered by the teacher helps these mental models mature towards more complex, generally accepted technical conceptual understanding [25]. Learners build their mental models through the teaching and learning methods and materials used in programming courses. In many cases of learning programming, this is facilitated on a physical computer (e.g., a visualisation tool, simulator, integrated development environment or debugger). In an unplugged style learning environment that purposely avoids the use of computers, the instructions of a simple programming language can be executed by a fellow learner. There are also cases in between, where a deterministic device such as a Bee-Bot executes programs based on a very limited range of instructions. Educators may use something similar to an NM without explicitly calling it that, or without knowing that it may be a form of one. Du Boulay et al. [23] conclude that matching the visibility and simplicity of components of NMs to novice learners’ [mental models] at different stages of learning leads to improved educational outcomes.


4 THE NATURE OF MACHINES AND AGENTS

Both ‘machines’ and ‘agents’ have been construed in various ways and contexts over time; a brief diversion into this background provides context for the discussion that follows. These ideas are discussed in more detail in the work of Munasinghe [38].

Aristotelian writings on “automata” and “automation” describe an “engine of repetition” that “transforms one input into a motion of a different kind” [10] (which could be seen as an output in a computing context). The term “mechanistic” can be taken to refer to the method of investigating the natural world using the terms and principles of mechanics, and “mechanistic conception” to the application of this method to understanding the natural world. Accordingly, conceptual models in computing could sometimes be presented as machines.

An agent is “a means or instrument by which a guiding intelligence achieves a result” [4]. The literature on agents often turns to Aristotle’s distinction between agency and animacy, distinguishing two kinds of instruments, animate instruments (meant for action) and inanimate instruments (meant for production, e.g., a hammer or weaving shuttle) [7]. Animate instruments are initiators of their own actions, and inanimate instruments are fundamentally passive.

By referring to it as a “machine”, computer scientists have attempted to give a mechanical sense to conceptualising the unseen processes within the computer. Both CAs and NMs can be seen as animate instruments, as they both focus on the execution of instructions, regardless of whether their form is conceptual or actual, or whether their involvement is tracing (predicting) or actually executing the instructions of a program.

4.1 The Turing Machine

A Turing Machine is a hypothetical machine that has been used to characterise what computation is, because it can implement any computational algorithm [15]. It is used as a tool to reason about the limits of computation, and thus highlights the transition of the term “machine” from a physical device to an abstract concept used only in thought experiments.

If a programming language can be used to simulate any Turing Machine, it can be described as Turing-complete. The structured program theorem relates Turing-completeness to conventional programming languages, establishing that only three control structures (sequence, selection and iteration) are needed for a language to be Turing-complete, and hence that most conventional languages are Turing-complete [11].

Not all educational languages are Turing-complete: some languages, tools and activities (e.g., Scratch Junior, Kidbots and Bee-Bots) focus primarily on sequence, omitting selection and/or repetition. Consequently, not all NMs used in education need to be capable of supporting Turing-completeness, although they may still be useful in relating programming language commands to actual execution.
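To make the Turing Machine concrete, a minimal simulator can be sketched in a few lines of Python (our illustration; the transition-table encoding is an assumption, not a standard from the article). It runs a tiny machine that flips every bit on the tape and halts at the first blank cell.

```python
# A minimal Turing Machine simulator. The transition table maps
# (state, symbol) to (new_symbol, move, new_state).
def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", flip))  # prints 0100_
```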

Fig. 3. Variation of CA in CT’s dependency to a physical device.

4.2 CAs in CT

The “information processing” or “computational” agent is central to the definition of CT. Although the early CT definition by Wing [56] does not specify the nature of the CA, it has later been suggested that a CA can be either human or machine, as long as it carries out instructions exactly as they are (precisely) and without judgement (blindly) [18]. Thus, the sequence of instructions being executed determines its behaviour completely. Whatever the degree of knowledge an individual learner has about the internal mechanisms of the CA, these mechanisms are in principle knowable.

The CA’s mechanistic nature in acting on instructions highlights the need for precise, step-by-step instructions in arriving at a solution to a problem, and implies a few key ideas when a human is developing such instructions:

(1) algorithmic thinking and logical thinking in problem solving,
(2) modelling or conceptualising (like abstraction, generalisation, decomposition) in forming the solution, and
(3) attention to detail and precision in the (language used for) instructions in presenting the solution (to the CA).

A CA should be simple enough to help “show” a teacher or learner what is going on in the problem solving and/or solution generation space. This enables the problem solver to visualise how the “instructions to achieve the solution” (i.e., the program) work, much like the original definition of the NM [23]. In a learning environment, therefore, understanding the CA helps to build good NMs.

Accordingly, much like the CT and NM definitions (Figures 1 and 2, respectively), the CA’s position can also be seen as varying along a spectrum in its relatedness to a physical device and to abstraction of concepts. The closer the CA is to a physical device, the more explicit its structure and properties are; the more abstract the CA is, the less well defined these are, as illustrated in Figure 3.

Once a computational solution has been generated, a CA plays the role of the “executor” of the formulated instructions mechanically, thus naturally positioning itself as a good aid in debugging/tracing.


5 COMPUTATIONAL AGENTS: THE MISSING LINK

Writing and reading programs requires mental representations of problem-solving strategies, familiarity with common programming patterns, the problem domain, a model of the algorithm being implemented, and the computer that executes the programs [17, 53]. The broader lens of CT is a good way to approach computer programming, as it can prevent a beginner programmer from perceiving that learning to program is merely learning to write program text according to the syntactic rules of a particular programming language. It encourages the beginner programmer to develop an overarching view of problem solving, and to develop a step-wise solution that can later result in a computer program. Conversely, programming a computer enforces the limits of computation, so not only is teaching CT a good foundation for teaching programming, but learning to program is a key way to engage students with CT. Therefore, it is not surprising that teaching programming is a widespread methodology for teaching CT. Computer programming relies directly on skills to develop systems involving information processing, with the need to focus on algorithmic solutions, and for that reason it differentiates itself from other problem-solving approaches and resonates with CT [18].

When learning to program, it is crucial to distinguish between the different roles a computer can play [23] (i.e., the machine that executes the program, the development environment that manages reading, storing and editing of program code, and the system that supports and applies the programming language itself). This can be difficult for the novice to understand, because a language itself is implemented as a computer program. Furthermore, the computer’s nature of strictly following instructions can be overshadowed by the expectation that it will do what is intended rather than what it is told. The need to have a “model of the computer as it relates to executing programs” [12, 13] is equally important in both teaching and learning to program. This modelling—in other words, understanding an NM—is one of the sources of difficulty inextricably linked to mastering computing concepts and processes, and to learning computer programming [12, 13].

CT, by definition, helps learners realise this distinction by introducing the CA (human or machine) as an “executor” that precisely and blindly follows instructions, thereby diminishing any particular attention given to the computer itself. That the precise set of instructions for a CA (human or computer) is available and knowable can be useful for explaining abstract concepts of computing. This enables educators to offer clearer NMs, as well as enabling learners to form mental models that align much more closely with them. However, an NM for early teaching may be substantially simplified and the full model hidden from the learner. This may cause some teachers to feel uneasy, as if they are teaching something that is incorrect. Nevertheless, incompleteness should not be confused with incorrectness [21]. A similar point is captured in “Lies-to-children” by Pratchett et al. [44]: the idea that we need to start with simpler models to avoid overwhelming learners, and that this incompleteness can appear to be incorrectness. Education regularly uses simplified models, such as the Rutherford-Bohr model of the atom (rather than quantum mechanics) or Newtonian physics (ignoring relativity), and this is inevitably necessary when teaching CT as well.

Fig. 4. Wider range of abstraction in NMs and a CA’s position as an aid in relating them to mental models.

Figure 4 provides a comparative view of CAs and NMs and how they fall within a spectrum of relatedness to the physical device and to conceptualisation (mental modelling). CAs are the key element in CT definitions that link these two extremes (see Section 2). Various definitions of an NM, however, range across a similar spectrum, as explained in Section 3. NMs as pedagogic devices attempt to convey operational aspects of computing to the learner (and relate that to their mental model) in much the same way that a CA helps them to design and execute “sets of instructions” that should solve problems using computation. They both succeed in “representation”; an NM is often less obvious (as well as not discussed explicitly) in teaching, whereas a CA is far more tangible and easy to use as a teaching aid/tool.

Reflecting on how Lovelace might have arranged her thoughts to program for a machine she had never seen, or how the early “Human Computers” wrote instructions for a physical computer, it becomes clear that CT happens even when the CA is imaginary. In the two examples of possible NMs for quantum computing and AI discussed earlier, the CA is well defined, an NM can be used to learn about it, and yet the learner may never get to see the CA in action. Consequently, an NM with a high level of abstraction may also be described using a non-quantum-computing or non-AI CA, thereby making it mentally comprehensible to a learner. The unifying thread here is that, in learning to program, the roles of the CA and NM change at different stages of development (of the learner and of the curriculum) yet remain very closely related. This also explains why their positions vary along the spectrum shown in Figure 4.

In early introductory programming, it is easier to introduce the NM to a learner by establishing the fact that they program for a CA to execute the instructions given in their program. Wing’s definition of CT (see Section 2) encourages the CA to be situated in a more abstract as well as simpler position than a well-defined NM: simple enough to be understood by the learner, yet with sufficiently detailed instructions to draw the learner’s attention to things that are crucial yet invisible in the program. However, the behavioural limitation of the CA in CT (that it should operate blindly) obscures the CA’s relatedness to an NM. Had the CT literature adopted the NM, it would have had more clarity about the ultimate goal of CT, and added more richness and clarity to what computation is. Nevertheless, the CA as explained by Wing’s CT definition facilitates the learning experience by positioning itself as an abstract yet more relatable learning tool for learners, being more representative and reachable than an NM (see Figure 4).

CT involves understanding and using algorithms. A typical definition of an algorithm is “a finite sequence of rigorous well-defined instructions, typically used to solve a class of specific problems or to perform a computation” [1], and most definitions of an algorithm specify a combination of unambiguous and executable steps (or instructions). To use a CA, one should understand the sort of computation it can do, which is, in other words, understanding its NM. The criterion of a “finite sequence of rigorous well-defined instructions” means that the instructions are relative to the processing agent (i.e., the CA); the instructions that a child can execute are different from the instructions a computer can execute, and these are different from those for a quantum computer. Implementing algorithms means writing code that will run on a processor, and so involves writing them for an NM. Thus, every CA necessarily implies an NM. As such, a CA and an NM are inseparably linked by definition, even though they are not the same thing.
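As a small illustration of this relativity (our example, not the article’s), the same summation algorithm can be written for two hypothetical CAs whose NMs expose different primitives: one that offers summation as a single built-in step, and one that offers only assignment, addition and iteration.

```python
# The "same" algorithm -- summing a list -- written for two hypothetical
# CAs whose NMs offer different primitive operations.

def total_for_rich_agent(numbers):
    # A CA whose NM includes summation as a single primitive step.
    return sum(numbers)

def total_for_minimal_agent(numbers):
    # A CA whose NM offers only assignment, addition and iteration,
    # so each step must be spelled out.
    total = 0
    for n in numbers:
        total = total + n
    return total

assert total_for_rich_agent([3, 1, 4]) == total_for_minimal_agent([3, 1, 4]) == 8
```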

Considering the range of definitions available for both CA and NM, finding a single and simple relationship to explain the link between the two concepts is challenging. They are thematically linked and can simultaneously exist and have the same or different referents in different learning contexts. They both play similar pedagogical roles but may have different properties at different stages of learning; a CA can substitute as an NM in the early stages of learning where an explicit NM is not needed.

Based on the arguments presented in this article, we can note that the most common relationship is that the CA can be seen as a simplified NM, and students’ mental models may continue to develop with the support of both. This changing NM provides an observational (external to the CA) perspective as well as an operational (internal to the CA) perspective to the learner, supporting them, mostly at the initial stages of learning and problem solving, in forming a robust connection between the NM and their own mental model more efficiently and effectively.

We recognise the delicate nature of both the NM and CA in supporting abstraction through their range of definitions that can sometimes coincide. We propose that teaching programming should make use of the idea of CA as a very useful link to connect a learner’s mental model to the full NM.

To help elucidate this idea, we explore the role of a CA using three examples: (1) an unplugged style programming activity that uses a human as a CA, (2) two different perspectives of a computer as a CA, using the Scratch programming environment and the PythonTutor visualisation tool, and (3) a comparison of two flowcharts from the perspective of a CA (human and computer), in Sections 5.1, 5.2, and 5.3, respectively. In all three examples, the CA of interest is viewed from the perspective of the CT definition and discussed alongside an NM that the program(s) in each example intends to establish. Using these examples, we explore the relationship of the CA and NM at different stages of CT and of learning to program.

The CA provides a sense of “tangibleness” to a learner’s mental model, which should comply with an NM that is intended to be established cognitively, leading to a more relatable conceptual relationship between the two. Following Sorva’s [54] view that several NMs exist at different levels of abstraction in programming, a suitable CA at different stages of learning can provide a very useful link that meaningfully connects a learner’s mental model and an NM. CAs are simplified NMs, in that they are NMs made accessible to novice learners (as appropriate for teaching CT), and therefore teaching programming should make use of them as a very useful link to connect a learner’s mental model to the full NM.

5.1 Example 1: An Unplugged Style Programming Activity That Uses a Human as a CA

Unplugged activities are popular with learners and teachers as a pedagogical approach to introductory computer programming, and used appropriately, they can help improve students’ self-efficacy without adding to the learning time [29, 37, 40, 49]. Designed to facilitate kinaesthetic learning, unplugged style activities (e.g., CS Unplugged [3], Code.org [2]) are non-computer based and have consistently been suggested as a successful method for teaching CT and introducing programming concepts.

Kidbots [5] is an unplugged programming learning activity that involves three students: a ‘programmer’, a ‘tester’ and a ‘bot’. The programmer has to write a program using only three instructions (move forward: F, turn left: L, and turn right: R) to guide the bot from location A to location B on a grid (typically using objects with a story behind them, such as getting a stuffed toy to a desired destination). In the classroom setting, this grid can be drawn on the ground, and the student acting as the bot physically moves from one square to an adjacent one.

The programming concept that the Kidbot activity focuses on is sequence. A key element is that the bot is not allowed to move until all the instructions are written in the program; this simulates writing a program and then testing it, and forces the programmer to reason about the instructions in advance. After the program is written, the tester reads out the instructions; having a tester prevents the programmer from making corrections on the fly while the program is executing. The bot has to follow the instructions read out by the tester precisely and blindly to move around on the grid.

Figure 5 shows a Kidbot layout with students playing the roles of programmer, bot and tester, and a toy pineapple as the target. The bot moves around the grid, following instructions to (hopefully) get to the target location. Figure 6 shows a set of Kidbot instructions that could move the bot to reach the target. In this context, the bot is a human CA that implements a kind of NM that explains certain behaviours of the real machine using a certain set of program constructs (i.e., F, L, R). The CA also helps stimulate the learner’s mental model and connects that mental model to the behaviours of an actual computer fairly easily. The CA thereby provides a tangible (visible) representation of an NM.

The programming language used to program the bot consists of just three simple instructions. The bot is always initially placed at the same starting location on the grid facing in a particular direction, and so has a predictable initial state. The program instructions are written based on the bot’s initial state. When the bot starts to blindly follow the instructions in the program, the CA’s behaviour becomes visible to the learner, along with an awareness that an actual computer only executes a program (follows instructions) and does not do any thinking for the programmer.

Fig. 5. A Kidbot activity with kids.

Fig. 6. A set of Kidbot instructions.
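The semantics of this tiny language can be captured in a few lines of code. The following Python interpreter is our own sketch (the article specifies only the instructions F, L and R; the grid coordinates, starting state and example programs are assumptions), and its second trace anticipates the common ‘L moves left’ mistake discussed next.

```python
# An illustrative interpreter for the Kidbot language. The bot's state is
# a grid position plus a facing direction.
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}
RIGHT = {v: k for k, v in LEFT.items()}

def run_bot(program, x=0, y=0, facing="N"):
    for instruction in program:
        if instruction == "F":
            dx, dy = MOVES[facing]
            x, y = x + dx, y + dy  # edge-of-grid behaviour left unspecified,
                                   # as in the classroom activity
        elif instruction == "L":
            facing = LEFT[facing]  # turn in place: the square does not change
        elif instruction == "R":
            facing = RIGHT[facing]
    return x, y, facing

print(run_bot("FFRFF"))   # (2, 2, 'E'): forward twice, turn right, forward twice
print(run_bot("FFFLLL"))  # (0, 3, 'E'): the LLL only spins the bot on the spot
```

Note that in this NM an R is behaviourally equivalent to LLL, which is one way to see why the F and L commands alone can suffice, as discussed later in this section.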

The Kidbot CA is very small and can be fully understood: it has a very definite behaviour and just three instructions that can be combined into different executable sequences. The behaviour is typically taught by example: the bot is asked to demonstrate the language by moving forward and turning. It is very common for a first attempt to make the mistake of thinking that the “L” command moves the bot one square to the left instead of turning in the same square—for example, in Figure 6, the instruction to get to the target might (incorrectly) have been FFFLLL. This results in the bot spinning on the spot with the LLL. The bot has previously demonstrated what L does, so the programmer is likely to have an accurate mental model of the NM (although a teacher may need to remind them), and the mistake becomes obvious to the programmer, enabling them to modify their own mental model through this constructivist approach to understanding what is happening.

Of course, an exact definition of the behaviours is not trivial; cases such as reaching the edge of the grid and defining the starting state may be left implicit to avoid having to give a detailed specification. Students themselves may even be called upon to decide what happens in these cases, which gives them a role in specifying the behaviour of the CA. Nevertheless, with just a couple of sentences of instructions and some common sense, learners can fully understand the behaviour of this CA. Developing the (Kidbot) NM happens partly through the teacher’s specifications about the CA, and partly through experience and having students make decisions about such details in a constructivist fashion, which supports them to more fully understand the NM.

Such understandings would be reflected either in the learner’s program instructions (e.g., using turn instructions to adjust the bot’s direction) or in their discussions (e.g., verbal instructions to the bot for its initial positioning). Either way, it encourages the learner to establish a good mental model of the concept that the learning activity intends to communicate, with the behaviour of the CA aligning well with a very simple NM.

Having no direct connection to a physical computer, the Kidbot NM largely deviates from an NM\(_{physical}\), but it also encompasses most of the attributes of one (e.g., states and rules). The conceptual understanding it represents also resonates with NM\(_{conceptual}\). In the computer-based context, an NM is highly conceptual and difficult for a learner to comprehend, but this activity makes the state of the “computation” highly visible, and shows how a CA can be something a learner can relate to (i.e., a computer or human, in this case the bot), as directed by the teacher.

The system is simple enough that a teacher can diagnose misconceptions through simple tests (e.g., various sequences that use all three instructions). For example, the common initial misconception that “RRR” moves three squares to the right can be diagnosed by asking the learner where such a sequence would take the bot, without physically moving it. The effect of changing the order of instructions in a sequence can also be explored, helping students appreciate how sensitive a sequence can be to this. Moreover, the learner can also self-assess their mental model by simply becoming the bot to execute their own program, or imitating the bot’s movement themselves while programming. The activity also provides opportunities for the students to use language around programming (e.g., testing, bug and debug), as well as understanding the nature of the task (testing and modifying programs to achieve a goal). It introduces the idea that there is more than one correct program to achieve a task, and (in an extended version) that it is possible to achieve the same outcomes with only the F and L commands, introducing them to the notion of a complete language.

The Kidbot programming language is not Turing-complete, yet with the aid of the limited instruction set, the CA (i.e., the bot) enables learners to encounter several basic programming concepts, helping them form clearer mental models and develop language and attitudes that apply to programming a computer. With the aid of the CA, they form a mental model, which may or may not contain misconceptions depending on how well the teacher communicates the NM behind it (which is still abstract to a great degree, yet could be represented on an actual machine). Moreover, the physicality of the CA helps the teacher visualise the learner’s mental model and diagnose misconceptions easily.

Interestingly, if the programmer and bot are both wrong in the same way, then the program is likely to work—for example, both the programmer and the bot might assume that the R command moves one square to the right instead of turning. This issue is associated only with human bots, but it still highlights the result of the programmer’s mental model of the NM being the same as the actual NM of the CA. If a similar activity were done using a physical device such as a Bee-Bot (which has the same commands), the problem of the programmer’s mental model not aligning with the bot’s NM would not arise, as the learner is forced to use the NM that the physical device implements.

The bot in this discussion is a good example of a pedagogic representation of an NM that a teacher may use without having to mention it explicitly. The physical presence of the bot (or any CA in an unplugged style example) enables the teacher to provide conceptual explanations and to detect situations that could otherwise be left to the learner’s mental comprehension alone. It shows how a CA can provide a physical, tangible sense to a pedagogic NM that a teacher intends to establish in the learner, as well as how a learner can use it to build their own mental model. At the earliest stages of learning computing, where learners are mostly expected to develop CT skills rather than learning to program, a physical CA such as a Kidbot can still provide sufficient conceptual representation for the learner to develop a good initial mental model, with or without an explicit NM even in its simplest form. The NM is given in the explanation of the semantics of the instruction set (i.e., what F, L and R mean), but it is very simple, does not need to be explicit, and can be defined by just a few quick and obvious examples that draw on real-life experience.

5.2 Example 2: Different Perspectives of a Computer as a CA

Computers can be used as educational tools in computing education, particularly in programming education, in various ways. An integrated development environment is a key tool that allows students to write programs, but there are also applications such as algorithm visualisation tools [55] and program visualisation tools [51]. These technologies typically use automated visualisation as their main feature, and sometimes use familiar (and therefore intuitive) graphics and animation. According to Sorva et al. [53, 55], visualisation software can facilitate effective learning by helping the learner with a visualisation of computer memory, not just by showing the steps of a program. The example in this section looks at the role of a computer as a CA and its implications for mental model development and NMs, using Scratch [46] and PythonTutor [26].

Scratch is a popular programming language in introductory programming courses, especially for children. A simple Scratch program can work in a similar way to the Kidbot example (reaching a target on a grid); however, it needs the programmer’s attention to much more detail even in its simplest form. In Scratch programming, the CA is represented by the Sprite (by default, the cat figure) moving in the visual area dedicated to display the program output on a computer (Figure 7). We can understand the Sprite as a visual representation of the computer as a CA. The programmer has to give instructions (i.e., a program) to the Sprite for it to move around to reach the target.

Fig. 7. A Kidbot layout.

Although the Scratch programming language is Turing-complete, at this stage this is not important to the learner, and the programming experience helps them develop mental models quite similar to the Kidbot example in Section 5.1. The programmer has to choose from many Scratch commands to give precise instructions to the Sprite. Although this is similar to the objective of the Kidbot activity, moving the Sprite around the visual area requires somewhat more complicated thinking before programming. Figures 8 and 9 show two Scratch programs that can achieve a similar result on the grid shown in Figure 7. Moving the Sprite involves changing its position using move instructions with a number of steps, and turn instructions with an angle prescribed by the programmer.

In both the Kidbot activity and the Scratch example, the direction of movements is tangible, but unlike the Kidbot example, the Scratch program requires a degree of awareness of the scale of a Sprite’s movement (measured in near-invisible pixels) in the visual area of a less tangible visual environment. This is a good opportunity for the learner to realise the limitations of a computer—that is, the fact that the computer merely follows instructions precisely and blindly, and does not think for itself or on the programmer’s behalf. Figure 8 shows the simplest set of instructions to move the Sprite, which corresponds closely to the Kidbot instructions in Figure 6. However, the Sprite’s movement is barely visible when running this program due to the computer’s fast execution. Moreover, because the Scratch visual environment does not return to its initial state automatically, if the flag is clicked more than once the Sprite continues its next motion starting from the previous execution’s destination. In this case, repeated clicks on the flag in the program in Figure 8 cause the Sprite to move in a square, probably outside the anticipated grid area.

Fig. 8. A KidBot Scratch program: initial position not defined.

At this point, the teacher can improve the level of abstraction of the intended NM by introducing the concept of ‘initial state’, as shown in Figure 9, with two additional instructions to the program, and hence to the CA. However, even in this improved version, repeated clicks of the flag do not show any visible movement of the Sprite, due to the computer’s fast execution, which is not noticeable to the human eye. Despite modelling the Sprite’s movements and programming it correctly (i.e., establishing the foundation of an NM efficiently in the learner), both of these behaviours of the Sprite would most probably be difficult for a novice programmer to understand, and may confuse or frustrate the learner. This can be simply addressed by adding a 1-second ‘wait’ after each movement, which makes the Sprite’s movement easier to observe.

Fig. 9. A KidBot Scratch program: initial position defined.
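For readers more familiar with text-based code, the corrected Scratch program can be approximated with Python’s standard turtle module (a rough analogue of our own; the coordinates, the step size of 50 pixels and the angles are assumptions, not taken from Figure 9).

```python
import time
import turtle

bot = turtle.Turtle()

# Define the initial state explicitly, so repeated runs start the same way
# (the concept added in the second Scratch program).
bot.penup()
bot.goto(-100, -100)   # starting square
bot.setheading(90)     # face "up"

for _ in range(3):     # three "move forward one square" instructions
    bot.forward(50)    # one square == 50 pixels here, an assumption
    time.sleep(1)      # the 'wait' that makes each movement observable

bot.right(90)          # turn, then move to the target square
bot.forward(50)

turtle.done()
```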

The use of waits to slow movement provides an opportunity for learners to realise the actual potential of their CA (i.e., the computer, and its ability to execute instructions very quickly). The teacher can take the learner beyond the obvious face of the CA (i.e., the Sprite) and thus educate them about the hidden runtime capabilities of a computer, further improving their mental model. The improved, more complex version of the program helps students understand the NM better, exposing hidden ideas like the execution speed of a computer, an understanding that could not have been achieved with the corresponding Kidbot unplugged activity (i.e., with a human CA).

To look at the computer's role as a CA in a more sophisticated visualisation tool setting, we now look at PythonTutor, a web-based program visualisation tool for text-based programming, primarily Python. Using PythonTutor, learners can visualise stepping forwards and backwards through the execution of their program, and view the runtime state of data structures. Python employs a well-structured formal language for its instructions. However, what learners may not be aware of is how the computer behaves, particularly how it allocates or uses its memory, when executing these instructions at runtime. PythonTutor addresses this by representing what is happening. For example, Figure 10 shows a simple variable assignment example in PythonTutor that addresses a common misconception: that a = x means a will be affected by future changes to x.

Fig. 10. A PythonTutor example.
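A snippet of this kind (our reconstruction for illustration, not necessarily the exact code shown in Figure 10) can be as small as:

    x = 5
    a = x       # a receives the current value 5; it is not tied to x
    x = 10      # rebinding x has no effect on a
    print(a)    # prints 5, not 10

Stepping through these four lines in PythonTutor shows a and x as separate entries in memory, directly contradicting the misconception described above.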

Through step-by-step execution in the PythonTutor interface, the learner sees graphically how memory is allocated for each variable after each instruction. The computer becomes a learning interface through which the learner can visualise what each instruction of their code does and how it behaves, in other words acting as a view of the CA, whilst the actual computation (the compiler/interpreter, actual memory allocations, etc.) takes place behind the scenes. PythonTutor is a more advanced CA than a Sprite, as it shows some otherwise unseen aspects of the computer, such as memory locations and the current instruction, although it cannot be a perfect representation of the CA. This illustrates the two extremes of the NM discussion: that an NM should be coupled with the physical machine (NM\(_{physical}\)) as well as provide conceptual understanding (NM\(_{conceptual})\). Despite being unaware of the actual back-end technical detail within the computer, learners develop a sound "mental visualisation" of their program, led by the computer serving as the CA for their information processing. Meanwhile, the teacher can elaborate the program logic in line with an NM, as well as diagnose any misconceptions and/or errors in the learner's mental model.

Visual representations of CAs are very useful for establishing NMs, because they develop mental models easily and fairly robustly in the early stages of learning to program. The Sprite in Scratch and the visualisation in PythonTutor are good examples of CAs in CT, but neither completely reflects the exact properties of a real machine. PythonTutor relates more closely to the original definition by du Boulay et al. [23] of an NM that is tightly coupled to the physical device. Both facilitate understanding of certain behavior of the real machine, with a focus that is largely independent of the programming language (e.g., the computer's limitations in following instructions, or its efficiency), encompassing the definition of an NM. However, they also present a higher conceptual level by providing a metaphorical layer above the actual machine that is hopefully easier for the learner to comprehend, thereby serving as an NM. This shows how closely related the CA and NM can be at introductory programming learning stages. Nevertheless, at this stage, the learner must be both sufficiently mature in their programming learning process and sufficiently skilled in CT to understand the relatedness of the CA to an implied NM, regardless of whether the CA is completely understood and/or the NM is fully explained.

5.3 Example 3: A Comparison of Two Flowcharts from the Perspective of a CA

Flowcharts provide a simple graphical method of representing programming sequences and algorithms using a standard set of symbols that show program flow, work flow or processes [41]. They accommodate rather diverse programming language concepts in the same framework and are independent of the implementation and organisation of physical computers [58]. Here we consider flowchart notation as a programming language. Flowcharts are known to be Turing-complete [28, 36, 58], although they can be challenging to map onto commonly used programming languages. In this section, we look at two different flowchart examples to see how the CT process is activated in a learner, the role of a CA (human and computer) in that learning process, and how they can contribute to the NM discussion.

The two flowcharts used here are (1) a procedure for counting to five (Figure 11) and (2) a simple emergency fire evacuation procedure (Figure 12 [43]). Both flowcharts use similar symbols and have initial states, Boolean decisions, intermediate process(es) and terminal states. We shall look at the two examples in the context of explaining a scenario to a learner to introduce the concept of conditionals. In both scenarios, a CA could be employed either by having a fellow learner follow the sequence given in the flowchart or by using a computer simulation of the process.

The NM\(_{generic}\) that both examples intend to convey is fairly similar and straightforward: start from the initial state, check whether a Boolean condition is met, and either continue with the intermediate process(es) or terminate (after potentially following a simple sequence). The instructions for the Boolean decision and intermediate process(es) in both flowcharts appear precise enough to be followed blindly by a CA. However, the contrasting difference between the two flowcharts is the presentation of the instructions: one uses a formal language and the other a natural language. If a teacher asks a student to act as a CA in the problem-solving process (i.e., intentionally introducing the NM to the learners, including those who are observing), either example would be straightforward for the student to follow, and the teacher's intentions are likely to be achieved.

Fig. 11.

Fig. 11. A flowchart example for counting to five.

In the counting example (see Figure 11), the instructions consist of mathematical operators that correspond directly to those of a programming language. Thus, prior knowledge of mathematics and/or a programming language may support the development of a good mental model in the learner, including a robust NM, although instructions such as 'x = x+1' may confuse some learners (which might be improved by replacing the equals sign with an arrow). However, due to the use of formal expressions as the instructions (i.e., the Boolean decision and other intermediate instructions), the learner may miss some limitations of an actual computer, regardless of the nature of the CA employed. For example, with a human CA, the computation may involve a mental calculation by the person involved (so the computation is overshadowed by prior mathematical knowledge), and with a computer CA, it may involve an internal computational process that is not necessarily visible to the learner. They are only interacting with an example that may not fully cover the capability of an NM. For example, Figure 11 shows that the comparison 'x = 5?' is possible, but the learner might not infer from this that related comparisons are possible, such as 'x \(\gt\) 0?', let alone 'x \(\gt\) 3 OR x = 0?'. Therefore, the learner's mental model may be incomplete or inaccurate, and the teacher may need to diagnose this by asking questions.

However, converting the individual instructions in this flowchart example (see Figure 11) into a computer program written in a programming language with similar operators would be rather straightforward (although converting the structure of the flowchart to a conventional language is a challenge, because it most naturally maps onto "goto" instructions, which are rarely available in modern languages). Because of this simple link between programs and flowcharts, this example is more directly connected to programming for a learner.
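For instance, the Figure 11 flowchart translates into Python roughly as follows (our sketch, using a while loop as the structured stand-in for the flowchart's implicit "goto" back-edge):

    x = 0               # initial state
    while x != 5:       # the Boolean decision 'x = 5?', inverted as a loop guard
        x = x + 1       # the intermediate process
    print("Done")       # terminal state (the output is our assumption)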

Fig. 12. A flowchart example for an emergency fire evacuation procedure [43].

In contrast, the limitations of a computer as a device can be made more apparent by employing a human CA with the fire evacuation example (see Figure 12), mostly because of the way its instructions are presented. In other words, despite being Boolean decisions, instructions like "Emergency exits are visible?" are descriptive rather than formal, and therefore would be straightforward for a human CA but would not be typical instructions in a programming language. Such descriptive instructions do not support a well-defined NM\(_{physical}\) and may prevent a learner from forming a sound computational mental model. Moreover, converting this model into a computer program may not be straightforward, and may involve more cognitive load in converting it into workable program code. Nevertheless, this depends on whether a formal semantics is provided: one could in fact define some simple meaning for the instructions in a specific example using high-level primitives, although doing so is difficult in general.
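For illustration, one hypothetical way to give such descriptive instructions a formal semantics is to wrap each condition in a named high-level primitive. All names and stubbed answers below are our inventions, not part of Figure 12:

    def emergency_exits_are_visible():
        # In a real simulation this would query a building model or sensors;
        # here the answer is stubbed so the program can run.
        return True

    def route_is_safe_to_use():
        # 'Is safe to use' is subjective: an implementation must commit to
        # one precise, checkable definition before it becomes computable.
        return False

    if emergency_exits_are_visible() and route_is_safe_to_use():
        print("Walk to the nearest exit")
    else:
        print("Follow the evacuation wardens")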

With the fire evacuation example (see Figure 12), a teacher may be able to explain some unseen operations that take place within the computer more clearly by employing a CA: with a human CA, by explaining how a computer would not be able to comprehend the vague instructions; or with a computer CA, by highlighting the need to implement additional, precise instructions. A learner may thus develop a broader mental model of the unseen limitations of a computer than with the counting example, even if they have less prior knowledge of formal language instructions, mathematical or otherwise.

The flowchart examples highlight that the notation alone does not imply well-defined instructions; as with text, it is the language used within the notation that makes it predictable and/or computational. For example, the Flowgorithm [14] language and our counting example (see Figure 11) use expressions close to mathematical notation, whereas an example like the fire evacuation procedure (see Figure 12) could use any natural language instructions desired. In the latter, the instructions may include objective conditions like "are visible", which can be related to Boolean expressions fairly easily, as well as subjective phrases like "is safe to use", which admit a vague, user-relative response that might not even be Boolean. As long as the learning objective does not require the production of an algorithm or a computer program, even a flowchart with ill-defined instructions can equally stimulate CT in a learner and support establishing (and gradually refining) good mental models closer to the NM (more precisely, NM\(_{generic}\)).

Both flowchart examples provide abstract graphical modelling of a problem scenario, facilitating the development of a good mental model in the learner. These models are independent of the programming language that might be used to implement the solution, but instead present precise, step-by-step instructions for arriving at a solution, facilitating CT in the learner. The counting example has an ordinary NM (NM\(_{generic}\)) described in terms of the operations of a simple computer, whereas the other has a far more abstract NM (NM\(_{conceptual}\)) with primitive operations. The nature of the CA employed with these different examples, however, can be a decisive element in the evolution of the learner's mental model, in understanding what could initially have been quite similar concepts (e.g., conditionals). In the counting case, the small number of formal-language instructions used are easy to transform into a computer program; in the evacuation chart, there is an unbounded set of instructions that can be written, even though the flowchart structure uses just a few constructs. In both cases, regardless of their nature, the presence of a CA in the problem-solving process helps teachers bridge the gap between the NM and the learner's mental model, as well as diagnose possible misconceptions or incomplete mental models.

Even if it is only to be used by a human CA, a flowchart can be useful in forming an initial understanding of concepts that can be scaffolded into more robust mental models. Regardless of the nature of the instructions (i.e., formal or natural language), flowcharts can stimulate the learner's CT based on their rational thinking. Modelling tools like flowcharts help learners as well as experienced programmers organise their thought process. For a strong programmer or a mature learner with comprehensive and established CT skills, the CA is essentially a computer, their mental model is very detailed, and the NM is fully understood, so that the conceptualisation is completely reflected in either of them. For them, a highly abstract graphical model like a flowchart may be merely a visualisation aid that helps organise their rational thinking towards CT.


6 THE MISSING LINK: DISCUSSION

The nature of a learner's mental model is, by definition, regularly evolving (and inevitably incorrect at times), fine-tuning itself throughout the process of learning and thereby improving the learner's understanding. The relationship between CA and NM can take various forms, but as discussed previously, the most common is that the CA can be seen as a simplified NM that supports the potential for further development.

An NM is a scaffold that a teacher provides for the learner in the process of maturing their mental model to a level that sufficiently matches the particular conceptual model that the teacher intends to be established from the learning component/experience. It is a concept that is often not considered explicitly; rather, it is implicitly expected to be established and later diagnosed by the teacher.

The CA, as explained by Wing's CT definition at the start of Section 2, facilitates the learning experience by positioning itself as an abstract yet more relatable learning tool for learners, being more representative and comprehensible than a full NM. The reason for its relatable nature is partly the way the CA is positioned: by definition, it is something that can be either human or machine as long as it executes instructions precisely and blindly (an animate instrument), which makes it a concept more reachable or tangible for both learners and teachers, in comparison to the mechanistic yet intangible and complex behavior of an actual computer.

The popular PRIMM (Predict-Run-Investigate-Modify-Make) approach to teaching programming [50] highlights the importance and effectiveness of novice learners' ability to read, predict and modify simple program code before they write their own. In PRIMM, the novice programmer themselves becomes a CA when predicting what is happening in a simple piece of program code. Initially, their mental model, as well as their NM of the programming language, may be quite weak. While running the program, the computer takes over the role of the CA. During the investigate and modify phases, the learner improves their CT and refines their mental model. By the time they are capable of making a working program on their own, their mental model is much more refined, reflecting a rich interpretation of the NM. In that way, the PRIMM approach captures the close relationships and transitions among CAs (both human and computer) and an NM in the process of learning to program, as well as in learners improving their CT.
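A typical Predict-phase snippet might look like the following (our example, not taken from the PRIMM literature); the learner, acting as the CA, states the expected output before the program is run:

    total = 0
    for n in [1, 2, 3]:
        total = total + n
    print(total)   # the learner predicts the output (6) before running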

In teaching students the basic concepts of computing, a teacher should concentrate their efforts on communicating the computing content that transforms the student's view of the discipline whilst making the learning of the concepts easier [39]. For teachers new to computing in particular, a physical CA such as a Kidbot or Bee-Bot provides a tangible perspective on the NM that can support discussion, especially when they are not sufficiently familiar with the implied NM of the programming languages they are supposed to be teaching, or that they are using to test the learning. However, it is important that both learners and teachers are aware that the CA of interest must follow instructions precisely and blindly, as this is an essential characteristic of a CA.

If a human CA is restricted to following only computationally implementable instructions (e.g., comparing two values, flipping a card or following a sequence), then an unplugged activity (e.g., finding the maximum of a set of weights by comparing two at a time) transfers comfortably to programming and enables learners to see the connection. Students are able to experience the computation in a physical context yet can transfer it directly into a computer program. That way, the CA becomes closely relatable to an NM that a program intends to establish. However, if a human CA is allowed to act loosely, their rational behaviour may interfere with the mechanical nature expected of a CA, and it may become more analogical. That prevents the human CA from being a faithful manifestation of the (intended) NM. For example, a human CA is capable of changing the order of instructions, or interpreting ambiguous instructions based on their understanding of the goal, unless they are restricted.
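Consider, for example, swapping the first two instructions in a short program such as the following sketch (ours, for illustration; any language with similar operators would do):

    x = 10
    x = x / 2    # first instruction
    x = x - 3    # second instruction
    print(x)     # prints 2.0; swapping the two instructions above gives 3.5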

Recognising the resulting change in the outcome may help a student develop a better mental model, without affecting the NM at all, regardless of whether the CA is a human or a machine. In contrast, changing the order of the first two instructions of a cookie-making recipe (for instance, 'add flour' and 'add sugar', followed by 'mix well') may not affect the NM (or the outcome), and an instruction like "mix well" is open to interpretation. In this way, the use of a human CA brings the risk that existing knowledge is applied to interpret or even change the instructions in a way that still produces the intended outcome, whereas a machine CA will follow the instructions strictly. This is not to say that the recipe example should not be used, but it requires some caution to avoid misconceptions.

However, regardless of whether the instructions are precise or not, in the absence of a strict restriction to follow the instructions literally, a human CA can bring in prior knowledge (e.g., a human CA might instinctively mix the two ingredients even if the instruction 'mix well' comes in a different order or is removed entirely), which may undo some of the essential nature of an NM. Another example is getting students to act out a sorting algorithm: they can see where they will end up, and so might not use the algorithm, whereas the activity may work objectively better with "blind" students who cannot see their destination and need to rely on the algorithm [34].

After all, computation is a series of numeric calculations and symbol manipulations. Only when the human CA has an NM similar to that of a computer (i.e., a machine CA) does the transfer of knowledge become more direct. Descriptive instructions written in natural language in a teaching example may work when a human is employed as the CA, but will require additional effort to transfer to computation. In other words, such examples may be useful in developing an initial mental model in learners, yet may not be reliable tools for maturing that model into a more robust and pedagogically efficient mental model of an NM in the long run.

An unplugged activity designed to teach computational concept(s) (e.g., using a scale to compare two weights as a demonstration of an "if" statement) will inevitably use a human or a simple deterministic device as a CA that is a simplified version of an implied NM closer to a computer. Alternating such an activity with a plugged-in (programming) experience that directly correlates with it will be more effective in helping learners develop mental models closer to the NM the teacher intends to convey. The presence of a tangible CA facilitates a conceptual relationship between the learner's mental model and the NM that might otherwise have been only a tiresome mental exercise.
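For example (our sketch, with made-up weights), the scale activity corresponds directly to a two-way "if" statement:

    left, right = 420, 365   # weights in grams (made-up values)
    if left > right:         # the scale tips to the heavier side
        print("The left side is heavier")
    else:
        print("The right side is heavier")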

The roles of the CA and NM are different at different levels of development (of the learner and of the curriculum). A panoramic perspective on the relationship between the two concepts in computing and learning to program can encompass several stages:

(1) Earliest stage of CT: The CA can be physical (e.g., Kidbot), and is so simple that an NM is not necessary, since the whole language is very easily described and may not be completely precise.

(2) Very early programming stage, where CT is somewhat stronger: The CA is a system (a computer, a deterministic device or a human; e.g., a Sprite, a Bee-Bot or a Kidbot that executes programs) and the NM specifies the capabilities of that system (i.e., what it can and cannot do). At this stage, the CA and the NM are essentially the same system to the learner (they are the properties of the thing that executes programs).

(3) Developing programming stage, with higher-level CT: The CA is a computer (still not explained in exact detail, yet the instructions to the computer are clearer), and NMs at different levels of abstraction can help when learning to understand the CA.

(4) Strong programming stage, where CT is inherent: The CA is essentially a computer, and a rich and detailed NM represents the CA specifically and accurately.

A further, special stage that is highly abstract and conceptual can also be identified, where the CA does not physically exist (e.g., the Difference Engine or a quantum computer) yet the NM defines what a CA could be.

It can be seen that learning to exercise CT begins with a CA rather than with an NM, which a learner cannot relate to initially. As the learning progresses, and as CT scaffolds the incorporation of programming into the learning process, the roles of the CA and NM evolve accordingly: at first the CA and the NM are essentially the same, the properties of a system possibly explained by a set of instructions. With more experience and awareness of the physical computer, the learner begins to fully understand the computer as their CA, and the NM helps them understand advanced and complex conceptualisations of the device. A higher level of abstraction can be useful in conceptualising advanced computation with the aid of highly abstract NMs, even with no known CA.


7 CONCLUSION

Physical activities are valuable learning tools that provide a different medium for students to experience computation, and they reinforce that computational ideas can exist independently of computers. However, there is a risk that some analogous activities (e.g., following recipes or getting dressed) are sufficiently different from computation that they may be misleading. Nevertheless, physical activities are available that impose rules focusing on what is computationally reasonable, and these will be more effective at scaffolding students in building their own mental models of an NM.

We hypothesise that CAs pave the way to understanding computational concepts (including programming) and to establishing a robust NM in teaching and learning computing. The flexibility that allows the CA to be a human can be useful in teaching computational concepts, especially at early programming learning stages (e.g., grade school level). Developing a CA that becomes closer and closer to an NM can help students develop more accurate mental models. The relationship of a CA to an NM helps teachers (particularly novices in computing) bridge the gap between a conceptual model of computing, such as an NM, and the mental model created by the learner. Having a fellow student or a semi-deterministic device as a (tangible) CA enables learners to exercise their model instead of it being hidden in the machine, giving them deeper insights into what a computer can achieve.

In the process of teaching and learning, a CA makes teachers conscious of their own mental model relative to the NM they are expected to teach, especially if the teachers are computing novices. The semantics of a CA are easier for a teacher to understand, and therefore to enforce, whereas with a programming language a novice teacher might struggle to know exactly what will happen. CT definitions have provided a degree of flexibility over the nature of the corresponding CA (i.e., either a human or a computer), which has potentially eliminated the need for novice teachers to be aware of NMs altogether, whilst still conveying a similar learning context. If the CA is an actual computer, then the NM is very easy to recognise; if it is a human, then appropriate unplugged activities can constrain the human CA to a well-defined set of instructions (e.g., move forward/right, or only compare two values at a time), establishing a more abstract understanding of one of the several NMs that describe computation. If the human CA is not restricted in their actions (as with free-text instructions, or a natural-language-based flowchart of instructions), then we do not have a simple NM, a clear mental model is therefore not possible, and indeed many aspects of computation cannot be experienced by the student. When an abstract NM is defined without reference to an actual machine, a CA presented with a set of precise yet less detailed instructions (i.e., using abstract operations rather than a particular programming language) can be useful for pedagogic explanations.

Models of computation are very simple and highly visible within unplugged-style learning activities; they provide simple rules (e.g., "compare two values at a time") and are designed to scaffold students towards understanding genuine computational challenges (e.g., sorting algorithms, data representation and intractable problems). The tangible nature of the CA in unplugged activities enables robust mental models to be established in learners and helps teachers scaffold them towards accurate NMs. Moving from unplugged to "plugged-in" enables teachers to establish increasingly rich mental models in learners, and allows learners to mature their mental models by avoiding or recognising possible misconceptions as they relate their prior understanding from the physical experience to the expected NM. This also gives insight into why a combination of unplugged and programming experience can be effective, if the NM has commonalities between the two contexts.

The idealised computer defined as the NM is used as a pedagogic device to explain the hidden aspects of computing to a learner. NMs are often described as operating with the constructs of the programming language they employ, but they can also be a special kind of conceptual model representing something that can be interacted with, even mentally. Therefore, they can exist even before being implemented on a real machine, when students are learning new concepts. NMs are useful for describing a range of simple to advanced conceptual models in computing. When transferring these NMs from a teacher's mental space to a learner's, such pedagogy requires relating the NMs to existing knowledge using some form of representation. CT describes the thoughts and ideas that can be used to implement computational processes. The CA as defined in CT can build a link between a learner's mental model and an NM that is easy to understand, useful in a teaching and learning experience, and computationally implementable. In that way, the NM and the CA can be closely related. The closer the properties and structure of the CA used are to those of the NM, the easier it is to turn ideas into an accurate computer program.

In the end, the concept of an NM is complex, and the exploration of relationships in this article is intended to help the reader navigate the range of interpretations that appear in the literature; we should be cautious about drawing black-and-white conclusions about them. Simply put, the CA is a "closed box" that we know can follow instructions (without our needing to know how it works), whereas the NM is a "box we can open" to see (a conceptual abstraction of) how it works. Using these boxes meaningfully at different stages of learning allows learners to give more effective and efficient instructions (i.e., write better programs), because they can develop richer mental models and plans, especially in the case of complex instruction sets like a programming language.

We have made a case through our examples that teaching programming should make use of the idea of a CA as a useful link that can effectively connect a learner's mental model to a robust NM. The co-existence of the CA and NM changes over the learning stages of the learner. Since the two concepts are complementary, teachers can define abstract NMs appropriately to communicate computing concepts to a learner's mental model, and use meaningful CAs as a versatile link at different stages in the process. In education, the CA in CT can be seen as an NM that is simplified but constantly developing; thus, the CA is the important missing link that connects the learner's mental model to a full NM.


ACKNOWLEDGEMENTS

We are grateful to the anonymous reviewers for their valuable feedback. The images used in Figures 1, 2, 3 and 4 were designed using resources from Flaticon.com.

Footnotes

1. In fact, Lovelace used trace tables [30], but nevertheless she was reasoning about a machine that did not exist physically.

REFERENCES

[1] Wikipedia. n.d. Algorithm. Retrieved October 26, 2023 from https://en.wikipedia.org/wiki/Algorithm
[2] Code.org. n.d. Home Page. Retrieved September 1, 2021 from https://code.org/
[3] CS Unplugged. n.d. Home Page. Retrieved November 9, 2020 from https://csunplugged.org
[4] Merriam-Webster. n.d. Dictionary by Merriam-Webster: America's most-trusted online dictionary. Retrieved September 20, 2020 from https://www.merriam-webster.com/
[5] CS Unplugged. n.d. Kidbots. Retrieved November 9, 2020 from https://csunplugged.org/en/at-home/kidbots/
[6] Alfred V. Aho. 2012. Computation and computational thinking. Computer Journal 55, 7 (2012), 832–835.
[7] Aristotle and Sir Ernest Barker. 1958. The Politics of Aristotle. Oxford Press, New York, NY.
[8] David Barr, John Harrison, and Leslie Conery. 2011. Computational thinking: A digital age skill for everyone. Learning and Leading with Technology 38, 6 (2011), 20.
[9] Michael Berry and Michael Kölling. 2013. The design and implementation of a notional machine for teaching introductory programming. In Proceedings of the 8th Workshop in Primary and Secondary Computing Education. 25–28.
[10] Sylvia Berryman. 2003. Ancient automata and mechanical explanation. Phronesis 48, 4 (Jan. 2003), 344–369.
[11] Corrado Böhm and Giuseppe Jacopini. 1966. Flow diagrams, Turing machines and languages with only two formation rules. Communications of the ACM 9, 5 (1966), 366–371.
[12] Benedict Du Boulay. 1986. Some difficulties of learning to program. Journal of Educational Computing Research 2, 1 (1986), 57–73.
[13] Matt Bower and Katrina Falkner. 2015. Computational thinking, the notional machine, pre-service teachers, and research opportunities. In Proceedings of the 17th Australasian Computing Education Conference (ACE '15). 37–46.
[14] Devin Cook. n.d. Flowgorithm Programming Language. Retrieved January 8, 2021 from http://www.flowgorithm.org/
[15] Jack B. Copeland. 2020. The Church-Turing thesis. In The Stanford Encyclopedia of Philosophy (Summer 2020 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University, Stanford, CA.
[16] Isabella Corradini, Michael Lodi, and Enrico Nardelli. 2017. Computational thinking in Italian schools: Quantitative data and teachers' sentiment analysis after two years of "Programma il Futuro." In Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '17). ACM, New York, NY, 224–229.
[17] Isabella Corradini, Michael Lodi, and Enrico Nardelli. 2017. Conceptions and misconceptions about computational thinking among Italian primary school teachers. In Proceedings of the 2017 ACM Conference on International Computing Education Research (ICER '17). ACM, New York, NY, 136–144.
[18] Paul Curzon, Tim Bell, Jane Waite, and Mark Dorling. 2019. Computational thinking. In The Cambridge Handbook of Computing Education Research, Sally A. Fincher and Anthony V. Robins (Eds.). Cambridge University Press, 513–546.
[19] Peter J. Denning. 2017. Remaining trouble spots with computational thinking. Communications of the ACM 60, 6 (May 2017), 33–39.
[20] Peter J. Denning and Matti Tedre. 2019. Computational Thinking. MIT Press, Cambridge, MA.
[21] Paul E. Dickson, Neil C. C. Brown, and Brett A. Becker. 2020. Engage against the machine: Rise of the notional machines as effective pedagogical devices. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '20). ACM, New York, NY, 159–165.
[22] Edsger W. Dijkstra. n.d. On the Necessity of Correctness Proofs. Retrieved October 26, 2023 from https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD360.pdf
[23] Benedict Du Boulay, Tim O'Shea, and John Monk. 1999. The black box inside the glass box: Presenting computing concepts to novices. International Journal of Human-Computer Studies 51, 2 (1999), 265–277.
[24] Rodrigo Duran, Juha Sorva, and Otto Seppälä. 2021. Rules of program behavior. ACM Transactions on Computing Education 21, 4 (2021), Article 33, 37 pages.
[25] Sally Fincher, Johan Jeuring, Craig S. Miller, Peter Donaldson, Benedict du Boulay, Matthias Hauswirth, Arto Hellas, Felienne Hermans, Colleen Lewis, Andreas Mühling, Janice L. Pearce, and Andrew Petersen. 2020. Notional machines in computing education: The education of attention. In Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education (ITiCSE-WGR '20). ACM, New York, NY, 21–50.
[26] Philip J. Guo. 2013. Online Python Tutor: Embeddable web-based program visualization for CS education. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (SIGCSE '13). ACM, New York, NY, 579–584.
[27] Mark Guzdial, Shriram Krishnamurthi, Juha Sorva, and Jan Vahrenhold. 2019. Notional machines and programming language semantics in education (Dagstuhl Seminar 19281). Dagstuhl Reports 9, 7 (2019), 1–23.
[28] D. Harel, P. Norvig, J. Rood, and T. To. 1979. A universal flowcharter. In Proceedings of the 2nd Computers in Aerospace Conference. 218–224.
[29] Felienne Hermans and Efthimia Aivaloglou. 2017. To Scratch or not to Scratch? A controlled experiment comparing plugged first and unplugged first programming lessons. In Proceedings of the 12th Workshop on Primary and Secondary Computing Education (WiPSCE '17). ACM, New York, NY, 49–56.
[30] C. Hollings, U. Martin, and A. C. Rice. 2018. Ada Lovelace: The Making of a Computer Scientist. Bodleian Library.
[31] Andri Ioannidou, Vicki Bennett, Alexander Repenning, Kyu Han Koh, and Ashok Basawapatna. 2011. Computational thinking patterns. In Proceedings of the 2011 American Educational Research Association Annual Meeting. https://www.learntechlib.org/p/108975
[32] Philip N. Johnson-Laird. 1983. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press, Cambridge, MA.
[33] Shriram Krishnamurthi and Kathi Fisler. 2019. Programming paradigms and beyond. In The Cambridge Handbook of Computing Education Research, Sally A. Fincher and Anthony V. Robins (Eds.). Cambridge University Press, 377–413.
[34] Richard Ladner. n.d. Computer Science Unplugged (AccessComputing News Feb 2009). Retrieved October 26, 2023 from https://www.washington.edu/doit/book/export/html/5288
[35] Irene Lee. 2016. Reclaiming the roots of CT. CSTA Voice: The Voice of K–12 Computer Science Education and Its Educators 12, 1 (2016), 3–4.
[36] Leo Liberti and Fabrizio Marinelli. 2014. Mathematical programming: Turing completeness and applications to software analysis. Journal of Combinatorial Optimization 28, 1 (2014), 82–104.
[37] Patricia Morreale and David A. Joiner. 2011. Reaching future computer scientists. Communications of the ACM 54, 4 (2011), 121–124.
[38] Bhagya Munasinghe. 2023. Programming Unplugged: Insights from Theoretical Models and Teacher Experiences. Ph.D. Dissertation. UC Research Repository.
[39] Bhagya Munasinghe, Tim Bell, and Anthony Robins. 2021. Teachers' understanding of technical terms in a computational thinking curriculum. In Proceedings of the Australasian Computing Education Conference (ACE '21). ACM, New York, NY, 106–114.
[40] Bhagya Munasinghe, Tim Bell, and Anthony Robins. 2023. Unplugged activities as a catalyst when teaching introductory programming. Journal of Pedagogical Research 7, 2 (2023), 56–71.
[41] Harley R. Myler. 1998. Fundamentals of Engineering Programming with C and FORTRAN. Cambridge University Press.
[42] Greg L. Nelson, Benjamin Xie, and Amy J. Ko. 2017. Comprehension first: Evaluating a novel pedagogy and tutoring system for program tracing in CS1. In Proceedings of the 2017 ACM Conference on International Computing Education Research (ICER '17). ACM, New York, NY, 2–11.
[43] Alessandro Pluchino, Cesare Garofalo, Giuseppe Inturri, Andrea Rapisarda, and Matteo Ignaccolo. 2014. Agent-based simulation of pedestrian behaviour in closed spaces: A museum case study. Journal of Artificial Societies and Social Simulation 17, 1 (2014), 16.
[44] Terry Pratchett, Ian Stewart, and Jack Cohen. 1999. The Science of Discworld. Ebury Press, London, UK.
[45] Jake A. Qualls and Linda B. Sherrell. 2010. Why computational thinking should be integrated into the curriculum. Journal of Computing Sciences in Colleges 25, 5 (2010), 66–71.
[46] Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman, and Yasmin Kafai. 2009. Scratch: Programming for all. Communications of the ACM 52, 11 (Nov. 2009), 60–67.
[47] Anthony Robins, Janet Rountree, and Nathan Rountree. 2003. Learning and teaching programming: A review and discussion. Computer Science Education 13, 2 (2003), 137–172.
[48] Cynthia Selby and John Woollard. 2013. Computational Thinking: The Developing Definition. Technical Report. University of Southampton.
[49] Sue Sentance and Andrew Csizmadia. 2017. Computing in the curriculum: Challenges and strategies from a teacher's perspective. Education and Information Technologies 22, 2 (2017), 142–158.
[50] Sue Sentance and Jane Waite. 2017. PRIMM: Exploring pedagogical approaches for teaching text-based programming in school. In Proceedings of the 12th Workshop on Primary and Secondary Computing Education (WiPSCE '17). ACM, New York, NY, 113–114.
[51] Clifford A. Shaffer, Matthew L. Cooper, Alexander Joel D. Alon, Monika Akbar, Michael Stewart, Sean Ponce, and Stephen H. Edwards. 2010. Algorithm visualization: The state of the field. ACM Transactions on Computing Education 10, 3 (Aug. 2010), Article 9, 22 pages.
[52] Valerie J. Shute, Chen Sun, and Jodi Asbell-Clarke. 2017. Demystifying computational thinking. Educational Research Review 22 (2017), 142–158.
[53] Juha Sorva. 2012. Visual Program Simulation in Introductory Programming Education. Ph.D. Dissertation. Aalto University.
[54] Juha Sorva. 2013. Notional machines and introductory programming education. ACM Transactions on Computing Education 13, 2 (July 2013), Article 8, 31 pages.
[55] Juha Sorva, Ville Karavirta, and Lauri Malmi. 2013. A review of generic program visualization systems for introductory programming education. ACM Transactions on Computing Education 13, 4 (Nov. 2013), Article 15, 64 pages.
[56] Jeannette M. Wing. 2006. Computational thinking. Communications of the ACM 49, 3 (March 2006), 33–35.
[57] Jeannette M. Wing. 2010. Computational Thinking: What and Why? Retrieved January 8, 2021 from http://www.cs.cmu.edu/CompThink/papers/TheLinkWing.pdf
[58] Tetsuo Yokoyama, Holger B. Axelsen, and Robert Glück. 2016. Fundamentals of reversible flowchart languages. Theoretical Computer Science 611 (2016), 87–115.
