Abstract
Frontier technologies such as Big Data and Artificial Intelligence (AI) are hailed as tools to improve decision-making by reducing, or even mitigating, human biases. The emergence and rapid adoption of these technologies, particularly for optimizing services and providing key analytics and insights, has been justified by the promise of AI to democratize intelligent software for all. Yet recent studies have brought to light cases where AI has perpetuated existing biases and deepened inequalities, contributing to the further marginalization of specific groups in society. Despite the opportunities that AI offers, it also poses new threats to human freedom, fairness, non-discrimination, privacy, and security, leaving questions about the human rights implications of AI unaddressed. This paper proposes the use of international legal frameworks, in particular the International Bill of Human Rights (including the Universal Declaration of Human Rights), to assess the human rights impacts of AI systems. To ground the discussion, we present a case study assessing the human rights implications of Apprise, a multilingual expert system for screening potential victims of human trafficking and forced labor, piloted in Thailand. Drawing on amplification theory, we highlight that AI systems are not deployed in neutral settings, and that pre-existing inequalities and “unfreedoms” can be aggravated if they are not addressed. We argue for a balanced view of the potential of AI systems, cognizant of both the positive and negative intentions of the users of such technologies.
Notes
Note: the authors of this paper are the developers of Apprise, developed by the United Nations University Institute in Macau in partnership with The Mekong Club.
References
Bostrom, N.: Strategic implications of openness in AI development. Glob. Policy 8(2), 135–148 (2017). https://doi.org/10.1111/1758-5899.12403
National Science and Technology Council: Preparing for the Future of Artificial Intelligence. Committee on Technology, USA (2016)
Goldman Sachs Global Investment Research: The real consequences of artificial intelligence. Fortnightly Thoughts 85, 1–25 (2015)
Access Now: Human Rights in the Age of Artificial Intelligence. (2018). https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf
Raso, F.A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., Kim, L.: Artificial Intelligence & Human Rights: Opportunities & Risks. Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3259344 (2018)
Angwin, J., Larson, J.: Machine Bias. ProPublica, New York (2016)
Fjeld, J., et al.: Principled Artificial Intelligence. Berkman Center for Internet & Society, Cambridge (2019)
Donahoe, E., Metzger, M.: Artificial intelligence and human rights. J. Democr. 30(2), 115–126 (2019)
Walk Free Foundation: Global Slavery Index. (2018)
Code 8.7: Code 8.7 Conference Report. UNU, New York, USA (2019)
Marr, B.: Artificial Intelligence Has a Problem with Bias. Here’s How to Tackle It. Forbes, Jersey City (2019)
Balkin, J.M.: The constitution in the national surveillance state. Minn. Law Rev. 93, 26 (2008)
Boyd, D., Levy, K., Marwick, A.: The networked nature of algorithmic discrimination. In: Gangadharan, S.P., Eubanks, V., Barocas, S. (eds.) Data and Discrimination: Collected Essays, pp. 53–57. New America Foundation, Washington, DC (2014)
Perry, W.L., McInnis, B., Price, C.C., Smith, S., Hollywood, J.S.: Predictive Policing (2013). https://www.rand.org/pubs/research_briefs/RB9735.html. Accessed 24 Jun 2019
Toyama, K.: Technology as amplifier in international development. In: Proceedings of the 2011 iConference, Seattle, Washington, pp. 75–82 (2011). https://doi.org/10.1145/1940761.1940772
Sen, A.: Development as Freedom. Oxford University Press, Oxford (1999)
Sen, A.: Inequality Re-examined. Harvard University Press, Cambridge, MA (1995)
Zheng, Y., Stahl, B.C.: Technology, capabilities and critical perspectives: What can critical theory contribute to Sen’s capability approach? Ethics Inf. Technol. 13(2), 69–80 (2011). https://doi.org/10.1007/s10676-011-9264-8
Deneulin, S.: Necessary thickening: Ricoeur’s ethic of justice as a complement to Sen’s capability approach. In: Deneulin, S., Nebel, M., Sagovsky, N. (eds.) Transforming Unjust Structures: The Capability Approach, pp. 27–45. Springer, Dordrecht (2006)
Latonero, M.: Governing Artificial Intelligence: Upholding Human Rights & Dignity. Data & Society, New York (2018)
Article 19: Governance with teeth: How human rights can strengthen FAT and ethics initiatives on artificial intelligence. Article 19, London, UK (2019)
Canca, C.: AI & Global Governance: Human Rights and AI Ethics - Why Ethics Cannot be Replaced by the UDHR. UNU-CPR (2019)
OHCHR: International Bill of Human Rights
OHCHR: Human Rights and Human Trafficking. Fact Sheet 36, Geneva (2014)
ILO: Convention C029 - Forced Labour Convention, 1930 (No. 29)
Skřivánková, K.: Between Decent Work and Forced Labour: Examining the Continuum of Exploitation. Joseph Rowntree Foundation, York (2010)
Hopper, E.: Invisible chains: Psychological coercion of human trafficking victims. Intercult. Hum. Rights Law Rev. 1, 185–209 (2006)
Simons, H.: Case Study Research in Practice. SAGE Publications, London (2009)
Yin, R.K.: Case Study Research: Design and Methods, 2nd Revised edn. SAGE Publications, Thousand Oaks (1994)
Thinyane, H.: Supporting the identification of victims of human trafficking and forced labor in Thailand. In: Krauss, K., Turpin, M., Naude, F. (eds.) IDIA 2018. CCIS, vol. 933, pp. 61–74. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11235-6_5
Thinyane, H., Bhat, K.: Supporting the critical-agency of victims of human trafficking in Thailand, presented at the ACM CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland (2019)
EJF: Slavery at Sea: The Continued Plight of Trafficked Migrants in Thailand’s Fishing Industry. Environmental Justice Foundation, London, UK (2014)
Kroll, J., et al.: Accountable Algorithms. Univ. Pa. Law Rev. 165(3), 633–705 (2017)
Amnesty International and Access Now: The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems (2018)
Turoff, M., Linstone, H.: The Delphi Method Techniques and Applications. Addison-Wesley, Reading (2002)
ILO: Hard to See, Harder to Count: Survey Guidelines to Estimate Forced Labour of Adults and Children. ILO, Geneva (2012)
United Nations: Universal Declaration of Human Rights. (1948)
Rende Taylor, L., Shih, E.: Worker feedback technologies and combatting modern slavery in global supply chains. J. Br. Acad. 7, 131–165 (2019). https://doi.org/10.5871/jba/007s1.131
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Thinyane, H., Sassetti, F. (2020). Towards a Human Rights-Based Approach to AI: Case Study of Apprise. In: Junio, D., Koopman, C. (eds) Evolving Perspectives on ICTs in Global Souths. IDIA 2020. Communications in Computer and Information Science, vol 1236. Springer, Cham. https://doi.org/10.1007/978-3-030-52014-4_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-52013-7
Online ISBN: 978-3-030-52014-4
eBook Packages: Computer Science, Computer Science (R0)