Abstract
Many optimistic responses have been proposed to address the threat of responsibility gaps created by artificial systems. This paper identifies a question that arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advances such that AI or robot agents become fitting participants in responsibility exchanges, then responsibility itself might be engineered. If we have good reason to think such technological advances are likely, then we should take steps to address the potential for engineering responsibility.
Notes
See Nyholm (2020) for extensive discussion of the many ethical issues current robot technology raises.
Shoemaker’s pluralistic approach relies on distinguishing particular sentiments which correspond with each type of responsibility. I’m not convinced by that aspect of the view, so in what follows I give what I take to be a broader and less controversial description of each type.
There is a natural connection between answerability and legal responsibility (Coeckelbergh, 2020); however, to reiterate, this paper is concerned solely with moral responsibility.
See McKenna (2012) for a “conversational” approach to moral responsibility which is based on the natural structure of these kinds of attitudinal exchange.
The contrast is sometimes expressed in terms of a “being responsible” vs. “holding responsible” distinction which can be seen in the following characteristic biconditional: “S is morally responsible (for action x) if and only if it would be appropriate to hold S morally responsible (for action x)” (Wallace, 1994, 91). Response-independent theorists give explanatory priority to the left side—the “being” side—of the biconditional, while response-dependent theorists give explanatory priority to the right side—the “holding” side—of the biconditional.
In emphasizing the reactive attitudes, my argument thus focuses on accountability; however, the response-dependent approach to responsibility encompasses attributability and answerability as well. For example, a discussion of response-dependent attributability might focus on responses such as admiration or disdain.
For an extensive argument in favor of reading Strawson’s view as response-dependent, see Shoemaker (2022).
For readers skeptical of the notion of robots responding with resentment, again consider Star Wars: most viewers easily recognize when R2-D2’s bleeps and bloops are indignant.
A referee correctly points out that Coeckelbergh’s and Danaher’s points need not entail moral agency but possibly only concern whether robots or artificial systems should be considered moral patients. As I discuss in the next section, the response-dependent approach might support a more radical shift which turns away from these kinds of moral status claims.
Compare Joel Krueger and Lucy Osler’s (2019) discussion of external means of influencing our affective lives.
Note that Strawson’s view has appealed to many as a response to the problem of determinism. Frischmann and Selinger intentionally elide nuances within the free will debate (2018, 301–3); thus, though my purpose here is not to confront directly their arguments, the present discussion suggests that a Strawsonian response to their idea of “engineered determinism” might be compelling.
See McGeer (2014) for an interpretation of Strawson’s approach to responsibility in terms of a two-tiered indirect consequentialism.
The “moral” in “moral ecology” derives from Vargas’s agency cultivation model, which includes an element of aiming toward moral improvement; however, the idea need not be moralized. We might instead think in terms of an “interpersonal” or “social” ecology.
Interestingly, a relational ethics might undermine the (indirect) consequentialist aspects of Vargas’s and McGeer’s approaches to our responsibility practices. A closer examination of the relationship between relational ethics and indirect consequentialism would seem warranted because the latter arguably can avoid the two problems Coeckelbergh uses to motivate the former.
This suggests an addendum to John-Stewart Gordon and David Gunkel’s (2022) recent discussion of technological advances and moral status. Consistent with Coeckelbergh, Gordon and Gunkel argue that innovations in intelligent and autonomous machines will allow us to reconsider our moral theories and concepts. Although Gordon and Gunkel’s discussion is in terms of technology prompting changes in theories, I think the idea would be put better in terms of seeking a reflective equilibrium between our intuitions about (and attitudes toward) the innovations, on one hand, and our moral (and moral responsibility) theories and concepts, on the other hand. I take this point to be consistent with Gordon and Gunkel’s view.
A referee aptly wonders why the focus should be on engineers. Should not all members of the moral community be concerned to improve our moral responsibility practices? In response, I suggest that the broader moral question cannot be addressed until further considerations such as those in footnote 14 are pursued.
See Frigo et al. (2021) for discussion of a similar program at the Karlsruhe Institute of Technology. For an alternative model which does not prioritize character-based ethics but does seek to embed ethics throughout a related curriculum, see the discussion of Embedded EthiCS at Harvard University in Grosz et al. (2018).
On the role phronesis can play within philosophy and technology more broadly, see Vallor (2016).
References
Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy, 33, 1–19
Björnsson, G., & Hess, K. (2017). Corporate crocodile tears? On the reactive attitudes of corporate agents. Philosophy and Phenomenological Research, 94, 273–298. https://doi.org/10.1111/phpr.12260
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12, 235–241. https://doi.org/10.1007/s10676-010-9221-y
Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy and Technology, 27, 61–77. https://doi.org/10.1007/s13347-013-0133-8
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26, 2051–2068. https://doi.org/10.1007/s11948-019-00146-8
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18, 299–309. https://doi.org/10.1007/s10676-016-9403-3
Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3, 5–24. https://doi.org/10.5325/jpoststud.3.1.0005
Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26, 2023–2049. https://doi.org/10.1007/s11948-019-00119-x
de Jong, R. (2020). The retribution–gap and responsibility–loci related to robots and automated technologies: A reply to Nyholm. Science and Engineering Ethics, 26, 727–735. https://doi.org/10.1007/s11948-019-00120-4
Frigo, G., Marthaler, F., Albers, A., Ott, S., & Hillerbrand, R. (2021). Training responsible engineers: Phronesis and the role of virtues in teaching engineering ethics. Australasian Journal of Engineering Education. https://doi.org/10.1080/22054952.2021.1889086
Frischmann, B., & Selinger, E. (2018). Re-engineering humanity. Cambridge: Cambridge University Press
Gordon, J. S., & Gunkel, D. J. (2022). Moral status and intelligent robots. Southern Journal of Philosophy, 60, 88–117. https://doi.org/10.1111/sjp.12450
Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., & Waldo, J. (2018). Embedded EthiCS: Integrating ethics broadly across computer science education. Communications of the ACM, 62, 54–61
Köhler, S. (2020). Instrumental robots. Science and Engineering Ethics, 26, 3121–3141. https://doi.org/10.1007/s11948-020-00259-5
Kraaijeveld, S. (2020). Debunking (the) retribution (gap). Science and Engineering Ethics, 26, 1315–1328. https://doi.org/10.1007/s11948-019-00148-6
Krueger, J., & Osler, L. (2019). Engineering affect: Emotion regulation, the internet, and the techno-social niche. Philosophical Topics, 47, 205–231. https://doi.org/10.5840/philtopics201947223
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183. https://doi.org/10.1007/s10676-004-3422-1
McGeer, V. (2014). Peter Strawson’s consequentialism. In D. Shoemaker, & N. A. Tognazzini (Eds.), Oxford studies in agency and responsibility, vol. 2 (pp. 64–92). Oxford: Oxford University Press
McKenna, M. (2012). Conversation and responsibility. Oxford: Oxford University Press
Nyholm, S. (2018). Attributing agency to automated systems: On human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24, 1201–1219. https://doi.org/10.1007/s11948-017-9943-x
Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. London: Rowman & Littlefield
Pierrakos, O., Prentice, M., Silverglate, C., Lamb, M., Demaske, A., & Smout, R. (2019). Reimagining engineering ethics: From ethics education to character education. 2019 IEEE Frontiers in Education Conference Proceedings, 1–9
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20, 5–14. https://doi.org/10.1007/s10676-017-9430-8
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy and Technology, 34, 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
Shoemaker, D. (2015). Responsibility from the margins. Oxford: Oxford University Press
Shoemaker, D. (2017). Response-dependent responsibility; or, a funny thing happened on the way to blame. Philosophical Review, 126, 481–527. https://doi.org/10.1215/00318108-4173422
Shoemaker, D. (2019). Blameworthy but unblameable: A paradox of corporate responsibility. Georgetown Journal of Law and Public Policy, 17, 897–917
Shoemaker, D. (2022). Response-dependent theories of responsibility. In D. Nelkin & D. Pereboom (eds.) Oxford handbook of moral responsibility (pp. 304-320). Oxford: Oxford University Press.
Smith, A. (2015). Responsibility as answerability. Inquiry: An Interdisciplinary Journal of Philosophy, 58, 99–126. https://doi.org/10.1080/0020174X.2015.986851
Smith, N., & Vickers, D. (2021). Statistically responsible artificial intelligences. Ethics and Information Technology, 23, 483–493. https://doi.org/10.1007/s10676-021-09591-1
Strawson, P. F. (2008). Freedom and resentment. In Freedom and resentment and other essays (pp. 1–28). New York: Routledge
Tigard, D. (2021). There is no techno-responsibility gap. Philosophy and Technology, 34, 589–607. https://doi.org/10.1007/s13347-020-00414-7
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. New York: Oxford University Press
Vargas, M. (2013). Building better beings. Oxford: Oxford University Press
Vargas, M. (2018). The social constitution of agency and responsibility: Oppression, politics, and moral ecology. In K. Hutchison, C. Mackenzie, & M. Oshana (Eds.), Social dimensions of moral responsibility (pp. 110–136). Oxford: Oxford University Press
Wallace, R. J. (1994). Responsibility and the moral sentiments. Cambridge: Cambridge University Press
Watson, G. (2004). Two faces of responsibility. In Agency and answerability (pp. 260–288). Oxford: Oxford University Press
Watson, G. (2014). Peter Strawson on responsibility and sociality. In D. Shoemaker, & N. A. Tognazzini (Eds.), Oxford studies in agency and responsibility, vol. 2 (pp. 15–32). Oxford: Oxford University Press
About this article
Cite this article
Sars, N. Engineering responsibility. Ethics Inf Technol 24, 32 (2022). https://doi.org/10.1007/s10676-022-09660-z