The adoption of Artificial Intelligence is steadily increasing, but the underlying algorithms have become so complex that they are no longer transparent. The EU has introduced some modest AI transparency requirements as part of its General Data Protection Regulation (GDPR). However, two years after their introduction, the effectiveness of these rules remains questionable. Our aim is to contribute to the further development of a governance framework for AI. We begin by explaining how the algorithms that give AI its speed and data processing power also obscure its transparency. We review how major guidelines on AI ethics operationalize algorithmic transparency, and then assess whether these principles are adequately covered by the GDPR. We then present the results of semi-structured interviews with a heterogeneous sample of stakeholders in online consumer information (N=75). Our data provide evidence that the current implementation of the EU’s informed consumer paradigm fails to establish a satisfactory level of consumer protection and information online. If even simple technological applications such as cookies remain non-transparent to consumers, the current approaches are entirely incapable of addressing the problem of complex AI applications. We conclude by formulating a policy proposal for how the transparency of AI applications could be improved from the perspective of end users.