
Triviality arguments against functionalism

Published in Philosophical Studies.

Abstract

“Triviality arguments” against functionalism in the philosophy of mind hold that the claim that some complex physical system exhibits a given functional organization is either trivial or has much less content than is usually supposed. I survey several earlier arguments of this kind, and present a new one that overcomes some limitations in the earlier arguments. Resisting triviality arguments is possible, but requires functionalists to revise popular views about the “autonomy” of functional description.


Figs. 1–5 (images not reproduced).


Notes

  1. According to Cleland (2002), Hinckfuss' argument was presented in a 1978 discussion of computation at the Australasian Association of Philosophy. Lycan (personal communication) says the discussion took place during the presentation of an early version of Lycan (1981) at that conference; the published paper includes a presentation of the Hinckfuss argument. Lycan treats the argument as something different from a triviality argument in my sense, however: Lycan says the bucket of water might, by chance, come to realize a human's functional organization over some interval.

  2. Although it is difficult to say exactly what computationalism about the mind is committed to, it is intended to be a stronger claim than functionalism (Smith 2002; Piccinini 2004). Computationalism is supposed to involve a claim about particular characteristics of the functionally characterized operations that comprise cognition.

  3. Lycan (1981) treats this as part of the answer to Hinckfuss' pail.

  4. Technically, this is a “Mealy machine,” not a “Moore machine,” since the outputs are associated with transitions rather than with states. Some early discussions of functionalism focused on Turing machines. I take Turing machines themselves to be an unpromising model for the mind, though important for in-principle discussions of the mechanization of intelligence. The CSA framework, discussed below, can be used to represent Turing machines, as Chalmers (1996) notes.
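
The Mealy/Moore distinction mentioned in this note can be illustrated with a minimal sketch (my own illustration, not from the paper; the states, input symbols, and output symbols are hypothetical placeholders). In a Mealy machine each output is attached to a transition, so the output emitted depends on both the current state and the current input:

```python
# A minimal Mealy-style finite-state machine (illustrative sketch).
# Transition table: (state, input) -> (next_state, output).
# Outputs hang off transitions, not states -- the mark of a Mealy machine.
DELTA = {
    ("S1", "a"): ("S2", "x"),
    ("S1", "b"): ("S1", "y"),
    ("S2", "a"): ("S1", "y"),
    ("S2", "b"): ("S2", "x"),
}

def run(inputs, state="S1"):
    """Feed a sequence of inputs; collect one output per transition taken."""
    outputs = []
    for symbol in inputs:
        state, out = DELTA[(state, symbol)]
        outputs.append(out)
    return state, outputs
```

A Moore machine would instead read each output off the state arrived at, so two transitions into the same state could not differ in output, as they can here.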

  5. If a functionalist does not see the functional roles relevant to philosophy of mind as involving specific kinds of inputs and outputs, perhaps because of cases of humans with unusual interfaces with the world, then the mapping approach can be used on its own. This makes Hinckfuss-type arguments more threatening. This issue will be discussed in Sect. 4.

  6. The uniqueness claim here is intended to apply to intrinsic properties of the system, to avoid it collapsing into triviality. I assume an account of intrinsicness along the lines of Langton and Lewis (1998).

  7. As a referee pointed out, this has the consequence that a system might have the dispositions to transition (given suitable input) from S1 to S2 at one time-step and (also given suitable input) from S2 to S3 at that same time-step, without being disposed to transition from S1 to S2 and then to S3, given those inputs in series over multiple time-steps. But this result is appropriate, as it may well be that one consequence of receiving either input at the first time-step is to disable the system with respect to further transitions.
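
The possibility described in this note can be made concrete with a toy sketch (my own illustration; the class name, states, and disabling mechanism are hypothetical). The system below is disposed to make each single transition from a fresh start, yet not the two transitions in series, because processing any input disables it:

```python
# Illustrative sketch: each one-step transition disposition holds,
# but the serial disposition (S1 -> S2 -> S3) does not, because
# receiving any input disables the system for further transitions.
class OneShotSystem:
    TRANSITIONS = {("S1", "i"): "S2", ("S2", "i"): "S3"}

    def __init__(self, state):
        self.state = state
        self.disabled = False

    def step(self, inp):
        if self.disabled:
            return  # an earlier input disabled the system
        self.state = self.TRANSITIONS.get((self.state, inp), self.state)
        self.disabled = True  # side effect of receiving any input
```

Started fresh in S1 it reaches S2, and started fresh in S2 it reaches S3; but fed two inputs in series from S1 it stops at S2.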

  8. We should probably also stipulate, as Susanna Rinard pointed out, that when a transducer layer is changed, the general kind of interface it has with the control system is preserved. Some transducer layers may interface lethally with some control systems.

  9. I am indebted to an anonymous referee for suggesting a simplified summary of the argument, which I have adapted here.

  10. I am indebted to Nick Shea for comments substantially improving this part of the argument.

  11. A member of an audience at a conference at Aarhus, 2005, suggested this example.

  12. Chalmers (personal communication) has argued that the combinatorial requirement is stronger than I acknowledge here. Instead, each of the physical states mapped to C11 has to produce the right behavior when combined with each of the physical states mapped to C21 and also C22. This is a possible interpretation of the conditionals linking the coarse-grained physical states, but it is too strong an interpretation for functionalist purposes. Here the discussion at the end of Sect. 2 is again relevant. In the case of a system that ages or undergoes other kinds of physical development, this stronger combinatorial requirement would require that the system behave appropriately when one part of it is in a physical state characteristic of early stages of life, and the other parts of the system are in physical states characteristic of late stages of life. This surely is not required for realization of a CSA. Often a system, when it is older, will be realizing a different CSA altogether, of course, but it is surely possible to realize the same CSA while the physical parts of the system develop through time.

  13. Here again I include cases where the theory is folk-theoretic and cases where it is scientific.

References

  • Block, N. (1978). Troubles with functionalism. In C. W. Savage (Ed.), Perception and cognition: Issues in the foundations of psychology (pp. 261–325). Minneapolis: University of Minnesota Press.

  • Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, 5–43.

  • Block, N., & Fodor, J. A. (1972). What psychological states are not. Philosophical Review, 81, 159–181.

  • Braddon-Mitchell, D., & Jackson, F. (1996). The philosophy of mind and cognition. Oxford: Blackwell.

  • Chalmers, D. (1996). Does a rock implement every finite-state automaton? Synthese, 108, 309–333.

  • Churchland, P. S. (1989). Neurophilosophy. Cambridge, MA: MIT Press.

  • Cleland, C. (2002). On effective procedures. Minds and Machines, 12, 159–179.

  • Copeland, J. (1996). What is computation? Synthese, 108, 335–359.

  • Crane, T. (1995). The mechanical mind: A philosophical introduction to minds, machines and mental representation. London: Routledge.

  • Fodor, J. A. (1974). Special sciences (or the disunity of science as a working hypothesis). Synthese, 28, 97–115.

  • Fodor, J. A. (1981). Representations. Cambridge, MA: MIT Press.

  • Langton, R., & Lewis, D. (1998). Defining ‘intrinsic’. Philosophy and Phenomenological Research, 58, 333–345.

  • Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50, 249–258.

  • Lewis, D. (1994). Reduction of mind. In S. Guttenplan (Ed.), A companion to the philosophy of mind (pp. 413–431). Oxford: Blackwell.

  • Lycan, W. (1981). Form, function, and feel. Journal of Philosophy, 78, 24–50.

  • Machamer, P., Craver, C., & Darden, L. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.

  • Piccinini, G. (2004). Functionalism, computationalism, and mental states. Studies in the History and Philosophy of Science, 35, 811–833.

  • Putnam, H. (1960). Minds and machines. Reprinted in H. Putnam, Mind, language, and reality. Philosophical papers (Vol. 2, pp. 362–385). Cambridge: Cambridge University Press.

  • Putnam, H. (1988). Reality and representation. Cambridge, MA: MIT Press.

  • Searle, J. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64, 21–37.

  • Smith, B. (2002). The foundations of computing. In M. Scheutz (Ed.), Computationalism: New directions (pp. 23–58). Cambridge, MA: MIT Press.

  • Stich, S. (1983). From folk psychology to cognitive science: The case against belief. Cambridge, MA: MIT Press.

Acknowledgments

I am grateful to David Chalmers, Alan Hájek, Peter Koellner, William Lycan, Susanna Rinard, Nick Shea, and a referee for the journal for very helpful comments on earlier drafts. The paper also benefited from audience comments during presentations at the Australian National University and Oxford University.

Author information

Correspondence to Peter Godfrey-Smith.

Cite this article

Godfrey-Smith, P. Triviality arguments against functionalism. Philos Stud 145, 273–295 (2009). https://doi.org/10.1007/s11098-008-9231-3
