Society-in-the-loop: programming the algorithmic social contract

Original Paper · Ethics and Information Technology

Abstract

Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, ‘SITL = HITL + Social Contract.’
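To make the closing formula concrete, here is a minimal, hypothetical sketch of such a loop. Nothing in it comes from the paper: `Stakeholder`, `negotiate_values`, and `sitl_loop` are invented names, and a simple weighted sum of stakeholder utilities stands in for whatever negotiation mechanism (voting, bargaining, formal social choice) a real SITL system would employ.

```python
# Hypothetical sketch of 'SITL = HITL + Social Contract' (not from the paper).
# A weighted utility sum stands in for a real negotiation mechanism.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stakeholder:
    name: str
    weight: float                    # negotiated influence over the pact
    utility: Callable[[str], float]  # how this stakeholder values an action

def negotiate_values(stakeholders: List[Stakeholder], actions: List[str]) -> str:
    """Social-contract step: pick the action maximising aggregate value."""
    def social_welfare(action: str) -> float:
        return sum(s.weight * s.utility(action) for s in stakeholders)
    return max(actions, key=social_welfare)

def sitl_loop(stakeholders: List[Stakeholder], actions: List[str],
              steps: int = 3) -> None:
    """HITL-style supervisory cycle with society, rather than a single
    operator, in the loop; an audit step would monitor compliance and
    feed into renegotiation of the weights before the next iteration."""
    for t in range(steps):
        choice = negotiate_values(stakeholders, actions)  # machine mediates
        print(f"step {t}: system enacts {choice!r}")

# Toy usage: two stakeholder groups valuing a content-ranking policy.
public = Stakeholder("public", 0.6, lambda a: {"diverse": 1.0, "engaging": 0.3}[a])
platform = Stakeholder("platform", 0.4, lambda a: {"diverse": 0.4, "engaging": 1.0}[a])
sitl_loop([public, platform], ["diverse", "engaging"])  # enacts 'diverse' each step
```

The point of the sketch is only the shape of the loop: the machine mediates, society's negotiated values select the behaviour, and monitoring closes the loop before values are renegotiated.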


References

  • Aldewereld, H., Dignum, V., & Tan, Y.-H. (2014). Design for values in software development. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design. Dordrecht: Springer.

  • Allen, J., Guinn, C. I., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems and Their Applications, 14(5), 14–23.

  • Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120.

  • Arrow, K. J. (2012). Social choice and individual values. New Haven: Yale University Press.

  • Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.

  • Baldassarri, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority promote cooperation in humans. Proceedings of the National Academy of Sciences, 108(27), 11023–11027.

  • Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2017). Fairness in criminal justice risk assessments: The state of the art. arXiv preprint arXiv:1703.09207.

  • Binmore, K. (2005). Natural justice. Oxford: Oxford University Press.

  • Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.

  • Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.

  • Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227.

  • Cakmak, M., Chao, C., & Thomaz, A. L. (2010). Designing interactions for robot active learners. IEEE Transactions on Autonomous Mental Development, 2(2), 108–118.

  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.

  • Callejo, P., Cuevas, R., Cuevas, A., & Kotila, M. (2016). Independent auditing of online display advertising campaigns. In Proceedings of the 15th ACM workshop on hot topics in networks (HotNets) (pp. 120–126).

  • Castelfranchi, C. (2000). Artificial liars: Why computers will (necessarily) deceive us and each other. Ethics and Information Technology, 2(2), 113–119.

  • Chen, Y., Lai, J. K., Parkes, D. C., & Procaccia, A. D. (2013). Truth, justice, and cake cutting. Games and Economic Behavior, 77(1), 284–297.

  • Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1–33.

  • Conitzer, V., Brill, M., & Freeman, R. (2015). Crowdsourcing societal tradeoffs. In Proceedings of the 2015 international conference on autonomous agents and multiagent systems (pp. 1213–1217). International Foundation for Autonomous Agents and Multiagent Systems.

  • Crandall, J. W., & Goodrich, M. A. (2001). Experiments in adjustable autonomy. In 2001 IEEE international conference on systems, man, and cybernetics (Vol. 3, pp. 1624–1629). IEEE.

  • Cuzzillo, T. (2015). Real-world active learning: Applications and strategies for human-in-the-loop machine learning. Technical report, O’Reilly.

  • Delvaux, M. (2016). Motion for a European Parliament resolution: With recommendations to the commission on civil law rules on robotics. Technical report (2015/2103(INL)), European Commission.

  • Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415.

  • Dinakar, K., Chen, J., Lieberman, H., Picard, R., & Filbin, R. (2015). Mixed-initiative real-time topic modeling & visualization for crisis counseling. In Proceedings of the 20th international conference on intelligent user interfaces (pp. 417–426). ACM.

  • Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156.

  • Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16–23.

  • Fukuyama, F. (2011). The origins of political order: From prehuman times to the French Revolution. London: Profile Books.

  • Gates, G., Ewing, J., Russell, K., & Watkins, D. (2015). How Volkswagen’s ‘defeat devices’ worked. New York Times.

  • Gauthier, D. (1986). Morals by agreement. Oxford: Oxford University Press.

  • Gürerk, Ö., Irlenbusch, B., & Rockenbach, B. (2006). The competitive advantage of sanctioning institutions. Science, 312(5770), 108–111.

  • Hamilton, W. D. (1963). The evolution of altruistic behavior. American Naturalist, 97, 354–356.

  • Haviland, W., Prins, H., McBride, B., & Walrath, D. (2013). Cultural anthropology: The human challenge. Boston: Cengage Learning.

  • Helbing, D., & Pournaras, E. (2015). Society: Build digital democracy. Nature, 527, 33–34.

  • Henrich, J. (2004). Cultural group selection, coevolutionary processes and large-scale cooperation. Journal of Economic Behavior & Organization, 53(1), 3–35.

  • Hern, A. (2016). ‘Partnership on Artificial Intelligence’ formed by Google, Facebook, Amazon, IBM, Microsoft and Apple. The Guardian.

  • Hobbes, T. (1651). Leviathan, or, The matter, forme, and power of a common-wealth ecclesiasticall and civill. London: Andrew Crooke.

  • Horvitz, E. (1999). Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 159–166). ACM.

  • IEEE (2016). Ethically aligned design. Technical report, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.

  • Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., Van Riemsdijk, M. B., & Sierhuis, M. (2014). Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1), 43–69.

  • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.

  • Leben, D. (2017). A Rawlsian algorithm for autonomous vehicles. Ethics and Information Technology, 19, 107–115.

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

  • Letham, B., Rudin, C., McCormick, T. H., Madigan, D., et al. (2015). Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. The Annals of Applied Statistics, 9(3), 1350–1371.

  • Levy, S. (2010). The AI revolution is on. Wired.

  • Lippmann, W. (1927). The phantom public. New Brunswick: Transaction Publishers.

  • Littman, M. L. (2015). Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553), 445–451.

  • Liu, B. (2012). Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1), 1–167.

  • Locke, J. (1689). Two treatises of government. Self-published.

  • Markoff, J. (2015). Machines of loving grace. Ecco.

  • MIT (2017). The moral machine. Retrieved January 01, 2017, from http://moralmachine.mit.edu.

  • Moulin, H., Brandt, F., Conitzer, V., Endriss, U., Lang, J., & Procaccia, A. D. (2016). Handbook of computational social choice. Cambridge: Cambridge University Press.

  • National Science and Technology Council Committee on Technology. (2016). Preparing for the future of artificial intelligence. Technical report, Executive Office of the President.

  • Nisan, N., Roughgarden, T., Tardos, E., & Vazirani, V. V. (2007). Algorithmic game theory (Vol. 1). Cambridge: Cambridge University Press.

  • Nowak, M., & Highfield, R. (2011). Supercooperators: Altruism, evolution, and why we need each other to succeed. New York: Simon and Schuster.

  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishing Group.

  • O’Reilly, T. (2016). Open data and algorithmic regulation. In B. Goldstein & L. Dyson (Eds.), Beyond transparency: Open data and the future of civic innovation. San Francisco: Code for America Press.

  • Orseau, L., & Armstrong, S. (2016). Safely interruptible agents. In Uncertainty in artificial intelligence: 32nd conference (UAI).

  • Pariser, E. (2011). The filter bubble: What the internet is hiding from you. London: Penguin.

  • Parkes, D. C., & Wellman, M. P. (2015). Economic reasoning and artificial intelligence. Science, 349(6245), 267–272.

  • Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.

  • Pentland, A. S. (2013). The data-driven society. Scientific American, 309(4), 78–83.

  • Pigou, A. C. (1920). The economics of welfare. London: Palgrave Macmillan.

  • Rawls, J. (1971). A theory of justice. Cambridge: Harvard University Press.

  • Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago: University of Chicago Press.

  • Rousseau, J.-J. (1762). The social contract.

  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252.

  • Scott, B. (2014). Visions of a techno-leviathan: The politics of the Bitcoin blockchain. E-International Relations.

  • Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge: MIT Press.

  • Sheridan, T. B. (2006). Supervisory control. In Handbook of human factors and ergonomics (3rd ed., pp. 1025–1052). Hoboken: Wiley.

  • Sigmund, K., De Silva, H., Traulsen, A., & Hauert, C. (2010). Social learning promotes institutions for governing the commons. Nature, 466(7308), 861–863.

  • Skyrms, B. (2014). Evolution of the social contract. Cambridge: Cambridge University Press.

  • Standing Committee of the One Hundred Year Study of Artificial Intelligence (2016). Artificial intelligence and life in 2030. Technical report, Stanford University.

  • Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3), 10.

  • Tambe, M., Scerri, P., & Pynadath, D. V. (2002). Adjustable autonomy for the real world. Journal of Artificial Intelligence Research, 17(1), 171–228.

  • Thomaz, A. L., & Breazeal, C. (2008). Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6), 716–737.

  • Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46, 35–57.

  • Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Journal on Telecommunications and High Technology Law, 13, 203.

  • Turchin, P. (2015). Ultrasociety: How 10,000 years of war made humans the greatest cooperators on earth. Chaplin: Beresta Books.

  • Valentino-DeVries, J., Singer-Vine, J., & Soltani, A. (2012). Websites vary prices, deals based on users’ information. Wall Street Journal, 10, 60–68.

  • Van de Poel, I. (2013). Translating values into design requirements. In Philosophy and engineering: Reflections on practice, principles and process (pp. 253–266). Dordrecht: Springer.

  • Young, H. P. (2001). Individual strategy and social structure: An evolutionary theory of institutions. Princeton: Princeton University Press.


Acknowledgements

I am grateful for financial support from the Ethics & Governance of Artificial Intelligence Fund, as well as support from the Siegel Family Endowment. I am indebted to Joi Ito, Suelette Dreyfus, Cesar Hidalgo, Alex ‘Sandy’ Pentland, Tenzin Priyadarshi and Mark Staples for conversations and comments that helped shape this article. I am grateful to Brett Scott for allowing me to appropriate the term ‘Techno-Leviathan’, which he originally presented in the context of cryptocurrency (Scott 2014). I thank Deb Roy for introducing me to Walter Lippmann’s ‘The Phantom Public’ and for constantly challenging my thinking. I thank Danny Hillis for pointing to the co-evolution of technology and societal values. I thank James Guszcza for suggesting the term ‘algorithm auditors’ and for other helpful comments.

Author information


Corresponding author

Correspondence to Iyad Rahwan.


About this article


Cite this article

Rahwan, I. Society-in-the-loop: programming the algorithmic social contract. Ethics Inf Technol 20, 5–14 (2018). https://doi.org/10.1007/s10676-017-9430-8

