ABSTRACT

The question of how innovation policy should deal with agency and power distributed between human and artificial intelligence has not yet been addressed conclusively. The systems failure approach often used to motivate and justify innovation policy broadly acknowledges and addresses problems stemming from the emergence and use of AI. Yet it insufficiently addresses three questions that make AI a game changer for innovation policy: (1) how to deal with the ethical issues of using AI, (2) how AI-driven innovation policy can stimulate research processes leading to either exploitation or exploration, and (3) whether and how the deep learning of AI might crowd out human decisions. To resolve these issues, we suggest that innovation policy apply a visible hand in the context of AI: (1) involve clearly legitimized stakeholders in the design of ethical guidelines, rather than outsourcing this important task to expert councils; (2) use policy measures that can distinguish between exploration and exploitation of AI; and (3) employ a coordinated approach that involves stakeholders across several steps, ensuring the implementation of their shared values in AI-driven decision processes. The visible hand of innovation policy can rely neither on policy actions nor on market relationships alone but must rely on their joint use.