Since the birth of AI in 1956, various applications of the technology have left the lab and spread through society. Expert systems have been in widespread use for decades and the first neural networks entered the financial sector some time ago. Thus far, however, the impact has been modest due to the limited scope for utilizing such forms of AI.

That picture is now changing. As AI has gathered momentum, many applications have started to appear throughout society and the economy. As explained in the previous chapter, AI’s acceleration is driven by scientific advances coupled with increasing computational power and data availability. This chapter considers how AI is making its presence felt in society. We begin by identifying a set of indicators that demonstrate the momentum it now has – ranging from publications and patents to investment and employment. We then discuss the various types of AI currently in use, including image recognition, speech recognition and robotics. That analysis reveals just how widely AI applications are now distributed in many countries. We go on to describe how, largely as a result of AI’s entry into society, the technology has become the subject of public debate. Finally, we look at the future of the laboratory. AI may have moved from lab to society, but it remains a technology heavily reliant on fundamental research.

1 Momentum from Lab to Society

1.1 Scientific Activity

AI’s definitive and wide-ranging transition from the research laboratory into everyday settings started gathering momentum in about 2010. That movement was preceded by an upsurge of scientific activity. The World Intellectual Property Organization has released a study showing a considerable increase in the number of AI-related publications over the past 20 years: up an average of 8% annually between 1996 and 2001, rising to 18% between 2002 and 2007.Footnote 1 After 2015 annual growth surged again to 23%, and in 2018 AI-related papers accounted for 2–3% of all published articles worldwideFootnote 2 – almost three times the proportion in the late 1990s.

1.2 Practical Potential

In that same period, the deep learning-based advances in speech and image recognition referred to in the previous chapter opened the door to a wide variety of potential practical applications. We also see a marked rise in the number of AI-related patents granted: the average annual increase was 8% between 2006 and 2011, but 28% in the years 2012–2017.Footnote 3 AI’s share of all new patents jumped in the last two of those years from less than 1.5% to nearly 2.5%.Footnote 4 Half of all AI inventions ever patented date from 2013 to 2018.Footnote 5 In short, the surge in academic activity since the early 2010s has been accompanied by a wave of AI patents.

Looking more closely at the patent grants, we see that growth has been greatest in the domain of machine learning. Some 40% of all AI patents refer to that technology. Within this domain, deep learning has been the fastest-growing discipline with patent grants increasing by 175% between 2013 and 2016.Footnote 6 Zooming in on the fields of application discussed in the next section, image processing or computer vision is the most prominent, accounting for about half of all AI patents in the period.Footnote 7 In other words, a great deal of innovation is taking place in AI. The increasing importance of practical applications is also apparent from the software development data: since 2014 the amount of AI-related open-source software (OSS) has increased at three times the pace of other forms of OSS.Footnote 8

1.3 Rising Investment: AI Is Becoming a Business

The growth in patent grants reflects the business community’s increasing interest in AI. From about 2010 onwards, companies such as Google, IBM and Microsoft began working with neural networks for speech recognition. Google has been using these networks on Android smartphones since 2012. The use of computer vision by big technology companies has been on a similar upward trajectory. In 2014 Google acquired the British company DeepMind, a global leader in AI research with many ‘firsts’ to its name, including the first AI Go victory over a human champion, Lee Sedol.

Enhanced AI language capability has been deployed in Google Translate since 2016,Footnote 9 and in 2017 Intel spent €14 billion to acquire the Israeli company Mobileye, a specialist in driver assistance and autonomous driving systems. Facebook, Amazon, Apple, Microsoft and other hardware and software companies have also been acquiring AI start-ups in recent years to boost their capabilities in this field. Whereas barely ten such acquisitions were registered in 2010, there were more than 240 in 2019.Footnote 10

Major tech corporations have also been recruiting prominent AI scientists. Geoffrey Hinton joined Google, Yann LeCun went to Facebook and Andrew Ng has worked for both Google and the Chinese company Baidu. In their public statements, the executives heading up such companies have explicitly stated their interest in AI. In a 2016 letter to shareholders, Amazon’s Jeff Bezos wrote that machine learning was crucial to improving core operations. The following year Google CEO Sundar Pichai delivered a speech announcing that the firm was moving from a ‘mobile-first world’ to an ‘AI-first world’.Footnote 11 Similarly, Microsoft’s Satya Nadella wrote to company personnel in 2018 setting out organizational changes linked to the reallocation of resources to the cloud (online storage) and AI.Footnote 12 Chinese tech giants such as Baidu, Tencent and Alibaba have also been saying for years – in some cases before their American counterpartsFootnote 13 – that AI is central to their business strategies. For example, the first research centre Alibaba ever opened outside China was an AI-focused facility in Singapore.

Commercial interest is not confined to the ‘big tech’ sector. The business landscape includes a wide range of young companies with AI at the heart of their operations. They include China’s ByteDance and Face++, US firms Airbnb, Shazam and Tesla, Israel’s Waze and the Europe-based Spotify and Booking.com. It is the European Commission’s stated ambition that three out of every four companies should be using AI by 2030.Footnote 14

Global investment in AI start-ups has been increasing steadily for some years. Researchers at Stanford University estimated total private investment in this segment at US$40 billion in 2018, up from $1.3 billion in 2010. During that period, investment increased by an average of nearly 50% a year.Footnote 15 Although quantitative investment estimates vary, depending on the definitions and methodologies used, the upward trend is unmistakable. Like the total amount invested, the number of investments also increased: from 200 in 2011 to 1400 in 2017. Based on those trends, the OECD has concluded that investors are recognizing the potential of AI.Footnote 16

Taking a broader view, Stanford University estimates that total investment in AI businesses was nearly US$70 billion in 2020Footnote 17 – five times as much as in 2015. Between 2015 and 2020, therefore, AI firms around the world received a huge injection of funds. In recent years 60% of all AI investment has gone into machine learning.Footnote 18 For a long time the bulk of that was directed towards the development of autonomous vehicles, in line with the focus on computer vision referred to above.Footnote 19 In 2018 they accounted for 30% of the capital invested in AI start-ups, with the number of businesses testing such vehicles in California increasing sevenfold. In 2020, however, the COVID-19 pandemic brought about a realignment, with the healthcare and pharmaceutical sectors now attracting the lion’s share of investment.Footnote 20

1.4 Economic and Employment Impact

Various consultancy firms have made predictions about the implications of AI’s definitive entry into society. They envisage that, because of its generic nature, the technology will influence almost all business sectors and have considerable economic impact. In 2017, for instance, PwC forecast that AI could be contributing as much as US$15.7 trillion to the world economy by 2030.Footnote 21 The same report identified healthcare, automotive manufacturing, financial services, transport and logistics, ICT, media and retail as the sectors where the impact would be greatest. Deloitte also foresees AI’s commercial importance increasing rapidly and suggests that the window of opportunity for a business to gain a competitive advantage from it is very narrow. Firms need to involve themselves quickly if they do not want to miss the boat.Footnote 22 In a 2018 report McKinsey predicted that 70% of the world’s businesses would make use of AI and that the technology had the potential to boost global gross domestic product (GDP) by 1.2% a year.Footnote 23 More recently, McKinsey analysed AI’s economic potential for a number of countries identified as Europe’s ‘digital leaders’. If they succeed in adopting AI and pursue sound investment strategies, the analysts say, GDP growth could increase by 1.4% a year.Footnote 24

Meanwhile, US researchers have demonstrated that AI’s entry into society can also boost employment. The number of available AI-related positions went up from 0.3% of all US vacancies in 2012 to 0.8% in 2019. Having stood at 0.26% in 2010, the proportion of jobs accounted for by AI-related roles reached 1.32% in 2019.Footnote 25 Moreover, AI has become one of the most popular fields of study for postgraduate researchers in computer science in North America. In 2010 the proportion of PhD graduates in AI taking jobs in industry was about the same as the percentage going into academia. Since then, though, the balance has shifted: in 2019 more than half went on to take industry jobs, while fewer than a quarter followed academic careers.Footnote 26 According to technology expert Tim O’Reilly, ‘data scientist’ is now the most coveted job title in Silicon Valley. The McKinsey Global Institute estimates that, in 2018, the US already had between 140,000 and 190,000 fewer machine learning experts than it needed.Footnote 27

1.5 Governments Are Also Focusing on AI

It is not only through commercial activities and private-sector applications that AI is entering society; a wide variety of public organizations are also contributing towards the transition. Police services use the technology to investigate and fight crime, social security agencies use it for fraud detection and various AI-based control initiatives were launched during the COVID-19 pandemic. Although no global historical overview is available, the European Commission estimates that roughly 230 public-sector AI applications were in use in 2019.Footnote 28 It seems very likely that the actual number was higher; in the Netherlands alone, 74 public-service projects were making use of AI that year.Footnote 29

Further evidence of AI’s societal traction is provided by the growing number of national AI strategies being produced. Once it became clear that AI had reached the point where various practical applications were in the offing and the business community was investing heavily, many governments began developing strategies to reap the associated benefits. First came the Pan-Canadian Artificial Intelligence Strategy in March 2017, in which the Ottawa government announced plans to invest C$125 million in AI. Singapore, Japan and the United Arab Emirates followed suit later that year. China then published the New Generation Artificial Intelligence Development Plan, setting out its ambition to be the absolute global leader in AI by 2030. Soon afterwards strategies were presented by Finland, the US, France, the UK, Germany and other countries. As part of its commitment to ‘a Europe that is ready for the digital age’, the EC also began a number of AI-related programmes accompanied by a European Action Plan for AIFootnote 30 and a data strategy.Footnote 31 Since then, dozens of nations have produced action plans for utilizing AI, including less obvious countries such as Kenya, India and Mexico.Footnote 32 The flow of publications hit a peak in 2019 when twenty national AI strategies appeared; a total of about sixty are now in circulation. There is also one international AI strategy: the EU’s Co-ordinated Plan on Artificial Intelligence (2018).Footnote 33

Following the acceleration of AI development from around 2000 onwards, it is apparent from the increasing number of patent grants, the growing level of private investment, the appearance of new business models, the growth of AI-related employment and the publication of national strategies that we have reached a new chapter in the history of AI: the technology is entering society. Figure 3.1 illustrates this progress using the indicators referred to above. It is therefore pertinent to ask what mechanisms are at work here and what forms AI is taking in society. We address those questions in the next section.

Fig. 3.1
A graph of AI activity against years, from 1980 to 2020, showing AI activity increasing over time.

AI gathers momentum outside the lab

Key Points – Momentum from Lab to Society

  • Since the 2010s, AI’s migration from lab to society has gained momentum. Advances made in the laboratory provide a springboard for practical application of the technology.

  • One reflection of AI’s new practical potential is an increasing number of patent grants. Half of all patented AI inventions were registered between 2013 and 2018.

  • Big tech companies are openly committed to AI, new businesses are springing up with AI at the heart of their operations and private investment in AI is increasing substantially throughout the world.

  • Because of its generic nature, AI is expected to have a major economic impact. Demand for AI experts is growing in the jobs market, while more and more PhD graduates in the subject are finding employment in the commercial sector.

  • Governments are also turning their attention to this theme: more than sixty countries have now developed national AI strategies.

2 The Practical Application of AI

AI has thus made the transition from the lab to society. As a result, we nowadays encounter all kinds of applications of the technology in our everyday lives: chatbots, smart cameras, translation apps, recommendation systems, risk analyses, driving systems and so on. In practice, AI takes many different forms which may be divided into several broad groups based on the type of task performed. Within the discipline, various classification systems are used. For the purpose of this overview, we distinguish five types of AI: applications for predictive analysis (machine learning), for image processing (computer vision), for language (natural language processing) and speech (speech recognition) and for the performance of physical tasks (robotics). All of these are already visible around us. Figure 3.2 provides an overview of the five types, which are considered individually below.

Fig. 3.2
An illustration depicting the five types of AI: machine learning, computer vision, natural language processing, speech recognition and robotics.

Five types of AI in practical use

2.1 Machine Learning

The most common type of AI is machine learning. That can be slightly confusing, because the same term is also used for the form of technology currently dominant within AI. In this case, however, ‘machine learning’ refers to a particular type of application for predictive or advanced analytics, which is used to identify patterns in datasets as a basis for making predictions. Although machine learning technology can be used in other types of AI as well, this form is characterized by prediction being the primary task. Applications of this kind could thus also be referred to as ‘predictive systems’.
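The core idea – learning a pattern from historical data and using it to estimate an unseen value – can be illustrated with a deliberately minimal sketch. The dataset and the straight-line model below are invented for illustration; real predictive systems use far richer data and models.

```python
# Minimal sketch of a predictive system: fit a straight line to
# historical observations, then use the fitted pattern to predict
# an unseen value. The data points are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Historical observations (e.g. month number vs. units sold)
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # estimate for the next, unobserved month
print(round(prediction, 2))
```

However simple, this captures the defining feature of the application type: the primary task is prediction, not perception or action.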

The ability to use data to make better-informed estimates about the future has huge potential value in many different contexts. The organization of energy supplies is a good example. Google’s DeepMind has developed an AI system that uses weather forecasts and turbine data to predict the inflow of energy from wind farms 36 hours in advance.Footnote 34 Optimum use can then be made of wind power, despite the variability of the elements.

Because risk forecasting has always played an important role in financial services, machine learning is now widely used in that sector. Examples include AI-based credit rating, where a person’s creditworthiness is predicted based on their credit history and personal data.

Machine learning is also used for fraud prevention. For instance, Mastercard uses a system called Decision Intelligence to detect abnormal, potentially fraudulent activity by analysing transaction patterns.Footnote 35 There are also AI applications for customers, including systems that predict financial trends to help inform investment decisions.Footnote 36 Like banks and insurers, local authorities and police forces are looking into machine learning in the fight against fraud and other forms of crime. The UK’s Department for Work and Pensions uses AI to assess benefit claims and estimate the probability of fraud, for example.Footnote 37 In the Netherlands, the Ministry of Social Affairs and Employment and the country’s local authorities introduced System Risk Indication (SyRI) to tackle benefit fraud. This approach proved controversial, however, and was ultimately deemed unlawful by the courts. Meanwhile, some Dutch local authorities use AI to predict which of their residents are liable to fall into debt and may therefore require assistance.

AI is also deployed in police work, in the form of prediction systems. Many examples of ‘predictive policing’ can be found in the US, where AI is used, for example, to assess the risk of reoffending. Forces in other countries are investigating the scope for using machine learning to provide intelligence. For instance, the Dutch police have a Crime Anticipation System (CAS) that detects patterns of criminality and predicts where and when robberies are most likely to occur. Based on this output, surveillance and preventive activities can be tailored to the anticipated risk. Almost all of the 168 police districts in the Netherlands are currently using a version of CAS.Footnote 38 Furthermore, the Dutch police are experimenting with machine learning to predict which cold cases have the highest chance of a breakthrough and are therefore worthy of further investigation.Footnote 39

AI’s accurate predictive capabilities can be valuable in other sectors as well. Some supermarket chains have announced plans to experiment with dynamic pricing as a tool for minimizing waste and maximizing income. They could also use machine learning for product-range optimization or automated discounting. This would involve an algorithm analysing data on product shelf-life, outlet location, weather conditions and historical sales patterns to make predictions.Footnote 40
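To make the idea of automated discounting concrete, the sketch below stands in for the trained model the text describes with a simple hand-written rule: if the stock is unlikely to sell before its shelf life runs out, mark it down in proportion to the expected shortfall. All product figures, and the rule itself, are hypothetical.

```python
# Hypothetical sketch of automated discounting: a hand-written
# scoring rule stands in for the trained prediction model described
# in the text. All product data is invented.

def discount(days_left, avg_daily_sales, stock):
    """Return a discount fraction aimed at selling stock in time."""
    expected_sales = avg_daily_sales * days_left
    if expected_sales >= stock:
        return 0.0                        # likely to sell out anyway
    shortfall = 1 - expected_sales / stock
    return round(min(0.5, shortfall), 2)  # cap the markdown at 50%

print(discount(days_left=5, avg_daily_sales=20, stock=80))  # no markdown needed
print(discount(days_left=2, avg_daily_sales=10, stock=80))  # heavy markdown
```

In a real system the expected-sales figure would itself be a machine learning prediction, fed by the shelf-life, location, weather and sales-history data mentioned above.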

In the media industry, machine learning helps tailor products and services to consumers’ wishes. The most familiar examples are platform services like Netflix, YouTube and Spotify, which use AI to make relevant recommendations based on users’ previous choices. Predictive technologies of this kind are known as ‘recommender systems’. Machine learning-aided personalization has become an important pillar of e-commerce as well. Online retailers like Amazon, Alibaba and Zalando use AI to compile user profiles and adapt their marketing accordingly.
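The mechanism behind a simple recommender system can be sketched in a few lines: find the user whose past choices most resemble yours, then suggest something they rated highly that you have not yet seen. The users, items and ratings below are invented, and real recommender systems are vastly more sophisticated.

```python
import math

# Minimal sketch of a neighbourhood-based recommender system.
# Ratings matrix (users x items); 0 means 'not yet rated'.
ratings = {
    "ana":  {"film_a": 5, "film_b": 4, "film_c": 0},
    "ben":  {"film_a": 4, "film_b": 5, "film_c": 5},
    "carl": {"film_a": 1, "film_b": 0, "film_c": 2},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(u[i] * v[i] for i in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Suggest the unrated item best liked by the most similar user."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    unrated = [i for i, r in ratings[user].items() if r == 0]
    return max(unrated, key=lambda i: ratings[nearest][i])

print(recommend("ana"))  # ben's tastes resemble ana's; he rates film_c highly
```

The same 'users like you also chose…' logic underlies the user profiling and personalized marketing described above, only at far greater scale and with many more signals than explicit ratings.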

Similarly, advertising can be aligned with the interests and sensibilities of individual users. Known as microtargeting, this technique lends itself not only to commercial applications but also to political ends. Political microtargeting made waves around the world when it became known that the company Cambridge Analytica had used Facebook data to disseminate personalised advertising during the US presidential election and the UK’s Brexit referendum in 2016 (see Box 3.1). However, microtargeting is a widespread phenomenon that has been occurring in many countries for quite some time.Footnote 41 Investigative journalists have found that almost all political parties in the Netherlands engage in bespoke online messaging.Footnote 42

Box 3.1: Cambridge Analytica and Microtargeting

Microtargeting is directing particular messages at particular people. AI is used for ‘psychographic profiling’, so that the content shown to an individual is tailored to their personal profile, thus (supposedly) maximizing its effectiveness. The best-known examples of the dark side of this technique involve the data-mining firm Cambridge Analytica, which was closely associated with the 2016 Trump and pro-Brexit campaigns. Machine learning was applied to huge volumes of data on people’s online behaviour to build an understanding of public thinking, thus enabling targeted messaging on Facebook and other platforms to influence the way individuals voted. Cambridge Analytica has since ceased trading, but predictive systems of a new type are now being developed: ‘multi-agent artificial intelligence’ (MAAI). It is claimed that these can predict behaviour even more accurately, opening the way for more precise influence by putting targeting strategies to the test in simulated communities.Footnote 43

2.2 Computer Vision

Our second main type of AI relates to image recognition, also known as computer vision. This is about automating the observation, analysis and interpretation of visual information. That may be in the form of photographs, videos or live input from the physical world. Its development has been accelerated by the increasing availability of digital imagery. Social media and smartphones have facilitated a veritable explosion of images, some publicly available, which can be used to train computer vision algorithms. Indeed, we now communicate increasingly through images – ‘If there isn’t a pic, it didn’t happen!’ Since Instagram was launched, users have uploaded about 50 billion photos, while 350 million photos a day are posted on Facebook and 500 hours of video material are added to YouTube every minute.Footnote 44

One of the best-known applications of computer vision is facial recognition. Moving beyond the mere detection of a face in an image by a computer, this entails the computer actually identifying whose face it is. Camera input is analysed and features such as chin proportions, eye separation and cheek roundness are measured with millimetre-level accuracy. The computer then translates this data into a code representing the unique characteristics of a face, enabling it to be recognized when next encountered.
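The matching step described above can be illustrated with a toy example: each face is reduced to a numeric feature vector (a ‘faceprint’), and two images are deemed to show the same person when their vectors lie close enough together. The vectors and the threshold below are invented purely for illustration.

```python
import math

# Illustrative sketch of the matching step in facial recognition:
# each face is reduced to a numeric feature vector, and two faces
# match when their vectors are sufficiently close. All numbers here
# are invented.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(face1, face2, threshold=0.6):
    return distance(face1, face2) < threshold

enrolled = [0.21, 0.54, 0.33, 0.80]  # stored 'faceprint'
probe_a  = [0.20, 0.55, 0.31, 0.82]  # new image of the same person
probe_b  = [0.70, 0.10, 0.90, 0.25]  # new image of someone else

print(same_person(enrolled, probe_a))  # True
print(same_person(enrolled, probe_b))  # False
```

In practice the feature vectors are produced by deep neural networks with hundreds of dimensions, but the comparison principle is the same.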

Facial recognition software is built into some smartphones, enabling users to unlock their phones simply by looking into the camera – in other words, to use their face like a password. Various apps use the technique in a similar way so that, for example, a PIN is not needed to authorize a payment.

China leads the world in the state use of facial recognition. The technology is widely deployed by the police there and for the surveillance of urban public spaces.Footnote 45 Many US government organizations, including the police, investigative agencies and border forces, use facial recognition as well.Footnote 46 Although the technology is currently controversial in Europe, most countries there are experimenting with it for use in airports, stadiums, schools and casinos as well as law enforcement.Footnote 47 According to AlgorithmWatch, a research and lobby organization concerned with algorithms and AI, facial recognition is used by police forces in at least eleven European nations.Footnote 48 However, the EU plans to introduce strict controls on its deployment in public places; the recently proposed Artificial Intelligence Act would prohibit such use except where strong grounds exist in its favour.Footnote 49

Facial recognition is by no means the only application of computer vision. It is also crucial for self-driving vehicles. Autonomous and semi-autonomous cars currently under development by Tesla, BMW, Volvo, Audi and Uber are equipped with multiple cameras that scan the surrounding space and recognize objects, road markings, traffic signs and traffic lights. Other applications of computer vision are intended primarily for monitoring of the physical environment. Examples include the detailed inspection of roads, bridges and machines with a view to facilitating prompt maintenance and the automated detection of vehicles and objects. In Amsterdam, for instance, cameras read the number plates of vehicles entering the city’s low emissions zone and the details of any not entitled to be there are sent to the agency responsible for issuing and collecting traffic fines. During the COVID-19 pandemic, computer vision has been utilized in various countries to scan public spaces for people who might not be respecting the rules on social distancing.Footnote 50

As well as lending itself to applications in public space, computer vision has great potential for the agricultural and livestock sectors and the food industry. It can be used to monitor and harvest crops, for example, and also play an important role in so-called ‘precision agriculture’.Footnote 51 Computer vision is suitable for animal-welfare applications, too, with cameras used to monitor behaviour.Footnote 52 Dutch start-up OneThird has developed a scanner that uses image recognition to accurately estimate the remaining shelf life of fruit and vegetables.Footnote 53 Such information facilitates better decision-making and thus helps minimize waste. When a consignment of tomatoes, say, arrives at a distribution centre, the decision might be taken to send them for immediate processing because it is possible to see that they will be unsaleable by the time they reach the shops.

Although progress is being made in this field, the applications of computer vision are still often limited to specific tasks in specific domains. In clinical medicine it has proven relatively successful in the form of ‘image-based diagnostics’Footnote 54: images are scanned for particular irregularities that could indicate a disorder, helping radiologists, dermatologists and pathologists to detect and diagnose illness.Footnote 55 Such successes have been aided by healthcare being a data-rich sector, and much of that data being visual, so there is ample material to train the algorithms.

Computer vision also has the potential to improve the quality of medical imagery and help surgeons perform operations. The US Food and Drug Administration (FDA, the agency responsible for regulating medical devices) has recently approved ten diagnostic tools based on the technology for use in hospitals.Footnote 56 Computer vision-enabled apps have also been developed so that people can check themselves for health issues, in most cases skin conditions; one is the Dutch SkinVision utility, another Google’s recently unveiled Derm Assist. Users scan their own skin and are then given advice on any follow-up that may be appropriate.Footnote 57 Although the medical world has been quite critical of such apps,Footnote 58 the examples we give do illustrate how computer vision can be utilized in practice.

2.3 Natural Language Processing

Our third general type of AI application automates the reading, analysis and generation of human language. The ‘holy grail’ of natural language processing is algorithms that can understand human language well enough to perform tasks requiring the interpretation of text. Language processing algorithms dissect sentences in various ways; for example, by distinguishing letters and words, labelling text elements and reading both left-to-right and right-to-left. This enables inferences to be made regarding the meaning of the text. Like computer vision, natural language processing has undergone a period of accelerated development in recent years, driven by advances in deep learning. Supported by sophisticated learning technology, the models can now be trained to understand human language more quickly and easily.

Because language is central to the way we communicate and how we gather, record and transfer knowledge, the potential applications of sophisticated natural language processing are enormous. In another parallel with computer vision, though, current systems are limited to specific tasks that require relatively little actual understanding of text input. Examples include tools that auto-correct, auto-complete or check text as it is typed, as well as automated translation systems like Google Translate.Footnote 59 Spam filters and search engines also make use of natural language processing. Google’s search algorithm, for instance, applies two techniques when processing each query. First it links the words entered to relevant words in documents. The algorithm then ranks the various documents containing the words in question on the basis of assumed quality and relevance, as determined in large part by how many other pages link to each document – a process known as ‘page ranking’. This application of natural language processing has revolutionized the way we find information online. But it does not involve any true understanding of human language.
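The two-step process can be sketched with a toy search engine: first select the documents containing the query words, then order them by an importance score, here using inbound-link counts as a stand-in, PageRank-style signal. The documents, links and scores below are all invented.

```python
# Toy sketch of a two-step search: (1) keep documents containing
# every query word, (2) rank them by an importance score. Here the
# score is a crude inbound-link count (a PageRank-style signal).
# Documents and link counts are invented for illustration.

documents = {
    "doc1": "neural networks for image recognition",
    "doc2": "history of image recognition research",
    "doc3": "a recipe for tomato soup",
}

inbound_links = {"doc1": 12, "doc2": 3, "doc3": 7}

def search(query):
    words = set(query.lower().split())
    # Step 1: word matching.
    hits = [d for d, text in documents.items()
            if words <= set(text.split())]
    # Step 2: rank the hits by importance.
    return sorted(hits, key=lambda d: inbound_links[d], reverse=True)

print(search("image recognition"))  # ['doc1', 'doc2']
```

Note that nothing here ‘understands’ the query: the ranking rests entirely on word overlap and a popularity signal, exactly the limitation the text points out.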

Another example is ‘messenger bots’, the automated chat systems that many organizations use for website-based customer support. Here AI helps provide customers with prompt, efficient assistance. In such applications, language processing is actually combined with expert systems: the algorithm analyses a question and, using a decision tree, selects the most appropriate reply or follow-up question. The Dutch police use such a chatbot to help people report internet fraud online; it checks that the report is complete, makes a preliminary appraisal of the case and advises the victim as to their best course of follow-up action.
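The combination of language analysis and a decision tree can be made concrete with a small sketch. The questions and replies below are invented and much simpler than the Dutch police chatbot described above; the point is only to show how a tree of branches supplies the next reply or follow-up question.

```python
# Minimal sketch of a decision-tree chatbot: each answer selects a
# branch, and a branch supplies either a follow-up question or a
# final reply. All questions and replies are invented.

tree = {
    "question": "Did you pay the seller?",
    "yes": {"question": "Did the goods arrive?",
            "yes": {"reply": "This may not be fraud; contact the seller first."},
            "no":  {"reply": "Please file a fraud report with your payment details."}},
    "no":  {"reply": "No payment was made, so there is no financial loss to report."},
}

def chatbot(answers):
    """Walk the tree using a list of 'yes'/'no' answers."""
    node = tree
    for answer in answers:
        if "reply" in node:
            break
        node = node[answer]
    return node["reply"]

print(chatbot(["yes", "no"]))
```

In a deployed system the user's free-text input would first be analysed by a language processing step and mapped onto one of the branches, rather than arriving as a clean 'yes' or 'no'.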

2.4 Speech Recognition

Speech recognition is the AI domain concerned with the detection, analysis and interpretation of spoken human language. It involves the use of algorithms to distinguish words and sentences in spoken language and convert them to text – speech-to-text translation. One field in which this kind of application is being tested is healthcare, where AI systems transcribe discussions between doctors and patients.Footnote 60 A natural language processing tool then analyses the result, identifies important clinical information and produces a summary of the consultation – the aim being to reduce doctors’ administrative workload and thus ultimately yield better consultation reports.Footnote 61 The same technology can also work in reverse, converting text into speech. This happens when, for example, a device reads an e-book out loud, or a speech computer acts as a voice for someone who cannot speak or has difficulty doing so (such as a patient with motor neurone disease).Footnote 62

Voice-controlled smart assistants like Apple’s Siri, Google Assistant, Microsoft’s Cortana and Amazon’s Alexa combine the two technologies described above to enable spoken communication between human and computer. After responding to ‘wake words’ such as ‘Siri’ or ‘Alexa’, the tools are able to perform all sorts of tasks: searching the internet, compiling to-do lists, playing music, making restaurant reservations and so on. All the user has to do is give a clear spoken command. Speech recognition technology converts their speech into text, then natural language processing interprets the written information and determines what action is required.
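The pipeline just described – wake word, speech-to-text, then interpretation of the command – can be sketched as below, with the speech-to-text step replaced by an already-transcribed string. The wake word and the table of intents are invented for illustration.

```python
# Sketch of a voice-assistant pipeline: wake-word check, then
# matching the transcribed command to an intent. The speech-to-text
# step is assumed to have happened already; wake word and intents
# are invented.

WAKE_WORD = "hey demo"

intents = {
    "play music": "Starting your playlist.",
    "to-do list": "Adding that to your to-do list.",
    "weather":    "Here is today's forecast.",
}

def handle(transcript):
    """Check the wake word, then match the command to an intent."""
    text = transcript.lower()
    if not text.startswith(WAKE_WORD):
        return None  # the assistant stays asleep
    command = text[len(WAKE_WORD):].strip()
    for phrase, response in intents.items():
        if phrase in command:
            return response
    return "Sorry, I did not understand that."

print(handle("Hey demo, play music by my favourite band"))
```

Real assistants replace the keyword table with trained language models, but the division of labour – speech recognition first, language interpretation second – is the same one the text describes.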

Unlike people, who could speak and listen before they invented writing, computers find written language easier to process than the spoken word. Speech recognition is considerably more difficult because of the variability of spoken language and the noise in audio streams; picking out the words, identifying them and converting them into a type of text the computer can process is extremely challenging. Nor is interpretation of the speech signals themselves straightforward. When we speak, the sounds we make are not separated into distinct words. What a computer hears is very like what a person hears when listening to a language they are totally unfamiliar with: a continuous stream of sound, with individual words very hard to distinguish. Yet telling them apart is essential if we are ever to translate those words into a language we understand.

The problem posed by speech recognition thus differs fundamentally from the interpretation of written language or images. Unlike computer vision and natural language processing, speech recognition involves the processing of a single input variable – sound waves – that changes dynamically over time. The great challenge is distinguishing words and sentences within this input, so that they can be translated into a language the algorithm is able to process.
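Because the input is one continuous signal rather than discrete symbols, virtually every recognizer begins by cutting the waveform into short overlapping frames before any classification takes place. The following minimal sketch (with synthetic numbers standing in for audio samples) illustrates that first step; the function name and parameters are illustrative, not taken from any particular library.

```python
# First step of most speech-recognition pipelines: slice the continuous,
# time-varying signal into short overlapping frames, since the sound
# stream carries no built-in word boundaries. Samples here are synthetic.

def frame_signal(samples, frame_len, hop):
    """Split a 1-D signal into overlapping frames of frame_len samples,
    advancing by hop samples each time; an incomplete tail is dropped."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

signal = list(range(10))               # stand-in for audio samples
frames = frame_signal(signal, frame_len=4, hop=2)
print(frames)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Everything downstream – identifying phonemes, then words – operates on such frames rather than on the raw stream.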

A further challenge is that some of the meaning of speech is conveyed by changes in volume, cadence and tone – the characteristics of spoken language. Effective interpretation therefore depends on more than simply distinguishing words from one another. The phonetic aspects need to be detected and interpreted as well, in order to determine the meaning of what is being said. Another stumbling block is homophones: words that sound the same but mean different things, such as ‘hour’ and ‘our’ or ‘air’ and ‘heir’. Their interpretation depends on the context: both the narrow context of the sentence and the wider context of the situation, the speaker and so on.
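Context-based disambiguation of homophones can be caricatured as follows. The phonetic key and the hand-built lexicon are invented for illustration; real recognizers resolve homophones with statistical language models rather than word lists.

```python
# Toy context-based homophone disambiguation: pick the spelling whose
# typical context words overlap most with the surrounding sentence.
# The key "auwr" and the lexicon are hypothetical; real systems learn
# these associations from large language models.

HOMOPHONES = {
    "auwr": {"hour": {"minute", "clock", "late", "one"},
             "our":  {"we", "us", "house", "team"}},
}

def disambiguate(sound, context_words):
    candidates = HOMOPHONES[sound]
    # Score each candidate spelling by context-word overlap; keep the best.
    return max(candidates,
               key=lambda word: len(candidates[word] & set(context_words)))

print(disambiguate("auwr", ["we", "love", "team"]))   # -> "our"
print(disambiguate("auwr", ["one", "clock"]))         # -> "hour"
```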

As in other domains, advances in machine learning have led to progress in the field of speech recognition since it has become possible to process much greater volumes of speech data to train the algorithms. Relatively successful practical applications of speech-to-text and text-to-speech conversion are now viable, provided that the speech is clear in both auditory and content terms. However, much spoken communication is unclear in one or both of these respects. Consequently, speech recognition technology has not yet reached the stage where it can be used reliably on a wide scale and for a range of purposes. To a large extent this is attributable to the limitations of natural language processing and AI’s ability to truly understand language. Although it has made the transition from lab to society, AI is still far from being a mature technology – a point we return to in the final section of this chapter.

2.5 Robotics

In this report the term ‘robotics’ is applied to the type of AI used in combination with robots. Robotics brings together all types of AI: the ability to reason and learn, to see and hear, to communicate and to understand. However, it differs from other AI disciplines in that it additionally involves physical processing: the ability to manipulate objects.

A robot needs to be able to move and undertake physical actions to perform tasks. Those may be so-called ‘dull, dirty, dangerous and dear’ jobs or activities in which robots can outperform people. Examples include space exploration, the clean-up operations following the nuclear accident at Fukushima and defusing bombs.Footnote 63 However, robotics is also important in the context of innovations in healthcare, retailing, manufacturing, livestock husbandry, agriculture and horticulture. Autonomous vehicles can also be regarded as a form of robotics. Robots thus come in countless shapes and sizes, making a precise definition of the word very difficult. Joseph Engelberger, a pioneer in the field of industrial robotics, addressed that challenge with a variation on the classic one-liner often used of familiar but undefinable things: “I can’t define a robot, but I know one when I see one.”

In classic robotics, expert systems play an important role. They are particularly suitable for standardized tasks in situations where a choice needs to be made from a number of predefined courses of action. For that reason, robots are currently used mainly in controlled manufacturing and port environments. Deploying them in highly dynamic and often chaotic everyday human settings, such as on the roads, involves far more complex challenges. Coping with the variety and spontaneity of such situations requires a degree of understanding of how the world works; the robot needs to be able to observe its surroundings, assess situations, predict plausible future scenarios and decide, in a dynamic setting, which of all the possible courses of action is most appropriate for the circumstances.Footnote 64 A system flexible enough to operate in the world outside the lab must therefore be underpinned by such understanding.
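The observe-assess-predict-decide cycle described above can be reduced to a schematic loop. This is a deliberately minimal sketch: the world model, the perception function and the action set are all toy stand-ins for the learned models a real robot would use.

```python
# Schematic sense-predict-decide loop, as described in the text.
# Everything here is a toy stand-in: real robots use perception models,
# learned world models and planners in place of these simple functions.

def observe(world):
    # Stand-in for perception: read one feature of the environment.
    return world["obstacle_ahead"]

def predict(obstacle_ahead, action):
    # Toy world model: driving forward into an obstacle means collision.
    return "collision" if (obstacle_ahead and action == "forward") else "ok"

def decide(world, actions=("forward", "turn_left", "wait")):
    obstacle = observe(world)
    # Choose the first action whose predicted outcome is acceptable.
    for action in actions:
        if predict(obstacle, action) == "ok":
            return action
    return "stop"

print(decide({"obstacle_ahead": True}))    # -> "turn_left"
print(decide({"obstacle_ahead": False}))   # -> "forward"
```

The gap the text describes lies precisely in `observe` and `predict`: in open, chaotic human environments these steps require an understanding of the world that current AI does not yet possess.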

Because a robot of this kind would have to cope with the open-ended nature of our world, in which the possibilities are endless, it is crucial that it incorporate a wide range of capabilities. At present, though, the limitations of the other forms of AI effectively restrict the practical potential of this kind of robotics. For example, robots currently find it difficult to pick up a dark handkerchief from a dark table of their own accord, because computer vision is not yet sufficiently sensitive to light. Progress in the other branches of AI and advances in machine learning are therefore vital to the further development of robotics. Although the hardware and human control aspects are already quite impressive, everyday tasks require extremely refined motor control, planning and perceptual capability. While a commercial glasshouse may seem an orderly environment, for instance, it is still extremely difficult for a robot to pick a tomato without squashing it.

Three big players in the field of robotics are Boston Dynamics, which specializes in the simulation of human movement by robots (‘humanoids’), DJI, a specialist manufacturer of drones for consumer use, and Amazon Robotics, with a focus on automated logistics. Amazon develops and deploys robots capable of efficiently navigating large warehouses and thus optimizing sorting processes. To do that the machines have to make allowances for and co-operate with one another, which they manage very successfully in the facilities where they are deployed. Amazon’s sorting robots perform specific tasks and operate in environments that are predictable and surveyable – for robots, at least.

By contrast, Boston Dynamics is aiming to develop robots that are far more flexible both physically and ‘mentally’, enabling them to be used for a variety of purposes. Previously owned by Alphabet (Google) but sold to Japanese technology giant SoftBank in 2017, Boston Dynamics is well known from its impressive video footage of two and four-legged robots such as Atlas and BigDog. They can stand and move around in ways that closely resemble the locomotion of people and animals. However, the company has yet to develop any commercial products. Chinese technology firm DJI, by contrast, already operates commercially; in fact, it is the global market leader in unmanned aircraft (drones) for aerial photography and video applications.

As the real-world examples above illustrate, AI is making its presence felt in society through robotics, speech recognition, natural language processing, computer vision and machine learning. Indeed, its practical applications are now so numerous and varied that it is impossible to compile a comprehensive overview. Nevertheless, the appendix to this report lists examples of AI applications in various sectors of the economy, primarily to provide an impression of their huge breadth and diversity. The simple observation that AI is today utilized in many different ways and contexts emphasizes the extent to which it is now becoming established within society. That process is not inconsequential. In the next section we consider the societal dynamics set in motion by AI’s transition from the lab to society.

Key Points – The Practical Application of AI

  • Having made the transition from lab to society, AI now has a variety of practical applications. We distinguish five general types of AI in everyday use.

  • Machine learning: AI for predictive analysis. One familiar example is the recommender systems that personalize internet content suggestions.

  • Computer vision: AI for the observation and analysis of visual information, such as recognizing faces or road signs.

  • Natural language processing: AI for the interpretation of everyday human language. Chatbots use this technology, for example.

  • Speech recognition: AI for spoken language processing. Voice-controlled assistants, such as Apple’s Siri and Amazon’s Alexa, use this type of AI.

  • Robotics: the combination of various AI capabilities with physical functionality. Examples include robots that transport goods inside warehouses.

3 AI as a Phenomenon in Society

AI’s transformation from something researchers investigate into something used in everyday life has clear repercussions for society as a whole. Moving out of the lab inevitably implies moving into the public arena. The world AI has now entered is one of divergent interests and forces, and its arrival there has triggered investment, experimentation, discussion and alarm. Visions and strategies to utilize AI to maximum effect have been published, but also open letters and reports calling for its regulation. In short, the appearance of this new technology is making waves within society. As well as evolving technologically, AI is developing as a societal phenomenon (see Fig. 3.3). Although that process is still very much in progress, we can already discern a number of trends. To aid understanding of the current situation, in this section we consider society’s various responses to the arrival of AI and the shifting emphasis in them.

Fig. 3.3
A timeline from 1985 to 2025 on which different shapes represent national AI strategies, ethical guidelines, the rise of AI labs, interdisciplinary institutes, movies and books, and scientific breakthroughs.

AI’s development as a societal phenomenon, per selected indicators

3.1 Interest in AI as a Revolutionary Technology

The scientific blossoming of artificial intelligence after the most recent ‘AI winter’ focused attention on the technology’s countless potential applications. That led to the appearance of several iconic books on its future, which often present the latest advances as the beginning of a new era. Visionary Ray Kurzweil speculates on an imminent ‘singularity’ in which human and computer intelligence merge to form a single superintelligent entity.Footnote 65 Scientists Erik Brynjolfsson and Andrew McAfee place AI at the heart of ‘the second machine age’, in which machines relieve humans not only of physical tasks but also cognitive ones.Footnote 66 Philosopher Nick Bostrom views such developments as a serious threat, however: as AI becomes cleverer and faster than us, it becomes hard for humanity to maintain control over it.Footnote 67 Another philosopher, Luciano Floridi, refers to a ‘fourth revolution’ in which digital technologies like AI fundamentally change our world view and our understanding of ourselves.Footnote 68 Klaus Schwab, financial backer and chair of the World Economic Forum, talks of a ‘fourth industrial revolution’ when the application of smart technology transforms the way we work and live, just as the steam engine, electricity and digitalization did previously.Footnote 69

A future in which human life is closely intertwined with AI is also a popular theme for the film industry. In parallel with the recent ‘AI spring’, movies such as Her (2013), Ex Machina (2014) and Transcendence (2014) depict a future where AI reaches a critical threshold of intellectual ability. As such, these films serve as a form of ‘scenario thinking’: they portray imagined situations in which humans have emotional relationships with AI and can even fall in love with it (Her), in which AI can pass the ultimate Turing test and become so like us that it is no longer possible to distinguish between human and machine (Ex Machina), or in which AI becomes a dangerous, barely controllable source of power (Transcendence). Although the idea of the superintelligent computer has long been a source of inspiration for screenwriters, these recent productions have made the specific term ‘AI’ familiar to the general public.

3.2 Applied Research and the Run on Talent

Besides compelling screen depictions, what has mainly stimulated public interest in new AI technologies is their practical potential. That has also made AI economically attractive for the business community and governments. We have already described how private investment in AI has increased considerably all around the world. In addition, businesses and governments have teamed up with research institutes to set up special ‘AI labs’ where links are forged between fundamental science and practical requirements. In many parts of the world laboratories of this kind have been established to cater for particular sectors of the economy, ranging from agriculture and mobility to retail and manufacturing, healthcare and education to public administration. Others are addressing the societal aspects of AI; these are often known as ELSI (ethical, legal and social implications) or ELSA (ethical, legal and societal aspects) labs.

The first applied research facility in the Netherlands devoted to AI began life in 2015. That was the QUVA Deep Vision Lab, a joint initiative by the University of Amsterdam and Qualcomm dedicated to translating computer vision research into industrial applications. Similar projects proliferated in the years that followed, with the Innovation Center for Artificial Intelligence (ICAI) founded in 2018 by the University of Amsterdam and VU Amsterdam playing an important co-ordinating and supporting role. The Netherlands now has twenty ICAI labs, where companies including Bosch, TomTom, KPN, ING, Ahold-Delhaize and DSM, as well as hospitals, the national police and government bodies, collaborate with universities and research centres to develop innovative AI solutions.

With businesses also exploring AI’s potential in many different fields, and developing applications for them, an enormous demand has arisen for talent in this domain. That in turn has sparked debate in various countries as to how best to nurture and retain people with the necessary skills.Footnote 70 At the beginning of this chapter we pointed out that most AI-related PhD graduates in the US are now choosing careers in industry. Other countries are experiencing a similar ‘brain drain’ from academia to the business community, but in many cases with their trained specialists moving abroad to boot.Footnote 71 In Europe, prominent scientists from more than twenty countries have written an open letter on the subject to sound the alarm and call on policymakers to invest in the European research climate.Footnote 72

3.3 AI Action Plans

The focus on the potential of AI is also reflected in a proliferation of national and international AI strategies. Most such documents deal primarily with the economic opportunities, often within those sectors already important for the countries in question.Footnote 73 The OECD observes that the goal of national strategies is usually to boost national productivity and competitiveness by harnessing AI.Footnote 74 They are therefore concerned primarily with the development and utilization of AI through mechanisms like research funding, enhanced support infrastructures and encouraging business interest. For the same reason the development and retention of talent is an important feature of many strategies.Footnote 75

Although the main thrust of an AI strategy is typically the definition of an innovation agenda, many additionally address societal and ethical aspects. However, the passages devoted to these points are often subordinate to the economic plans and usually less substantive and action-oriented. In Europe the rationale for discrepancies of that kind tends to be that, in order to align AI with our values, we need to be in the technological vanguard.

The Dutch think tank DenkWerk produced a report entitled AI in Nederland (AI in the Netherlands) in 2018. This stressed the urgent need for the country to commit seriously to artificial intelligence, arguing that it was being left behind in terms of investment in the private sector and other forms of government support. DenkWerk pointed to the ‘enormous societal potential’ of AI and called on the government to formulate a national agenda for its development and application. The report urged immediate action, saying, “This is not a matter that should first be considered for two years”.Footnote 76 That same year DenkWerk helped to initiate work on a national AI agenda. AiNed, a coalition of corporate, academic and government partners, then published a report substantiating the earlier call for urgent action. That argued that AI should be made a national priority to protect and enhance the nation’s prosperity and international status. With a view to accelerating the development of AI and differentiating the Netherlands on the global stage, AiNed formulated several objectives as the basis for a national strategy.

The strategy was eventually published in autumn 2019. Following the release of the AiNed report, a task force was formed. Led by employers’ confederation VNO-NCW, this also involved the Ministry of Economic Affairs and Climate Policy, which assumed responsibility for realizing the objectives formulated. The first major practical step was to create a Dutch AI Coalition, a platform for collaboration between businesses, government bodies, non-governmental organizations and research institutes to catalyse AI development. The coalition quickly announced its intention to promote the formation of AI labs and to work with the government to develop an AI strategy. Another outcome is the Strategic Action Plan for AI (SAPAI), presented by the Ministry of Economic Affairs and Climate Policy with the support of the ministries of Justice and Security, the Interior and Kingdom Relations, Social Affairs and Employment and Education, Culture and Science.

In this plan the government sets out a pathway for the period ahead and describes the first practical initiatives to accelerate AI development and raise the Netherlands’ profile in this field. The SAPAI defines a three-track policy. Track 1 relates to utilization of the societal and economic opportunities offered by AI, track 2 to creation of a conducive ecosystem and track 3 to safeguards. Inclusion of the third track reflects how the debate has shifted, with attention focusing not only on the economic opportunities afforded by AI but also increasingly on the impact of its applications. The SAPAI was presented together with documents devoted to ‘AI, civic values and human rights’Footnote 77 and ‘Safeguards against risks associated with government data analyses’.Footnote 78

The private investment, the formation of AI labs and the launch of national AI strategies are indicative of the increased interest in AI outside the scientific community. Much of that focuses on the potential of AI. There is growing awareness that it has now reached a certain level of maturity and so the time has come for the appropriate actors to realize its potential. AI is on the agenda, particularly the economic agenda. Stories about new applications and the doors they can open appear regularly in the media. As a result, the general public has become aware of the technology. Many may not understand quite what AI is, but they know of its existence.

3.4 Interest in the Practical Effects of AI

As AI has become the focus of increased attention, questions about its impact have arisen. The technology’s introduction to the real world has led people to consider its implications for everyday life. In some recent books the emphasis has shifted from the revolutionary nature of AI to the possible consequences of its use in real life. Various authors have highlighted potential problems associated with its transition from the lab to the mainstream. Moral, societal, political, legal and economic issues have all been raised. AI’s effect on society and its core values has become a matter of public debate.

In her book Weapons of Math Destruction (2017), Cathy O’Neil warns of the harmful effects that the careless and short-sighted use of algorithms can have on people’s lives.Footnote 79 Meredith Broussard has a similar message, coining the term ‘technochauvinism’ to describe how mankind can be insidiously degraded by the idea that technology is capable of meeting every human need.Footnote 80 Shoshana Zuboff has also expressed concerns, but about the actors controlling the technology rather than the technology itself. She warns of ‘surveillance capitalism’, an economic philosophy based on excessive data gathering and use of predictive algorithms, allowing big tech companies to exercise unprecedented influence over our behaviour. In The Algorithmic Society (2020), various authors highlight the association between data, algorithms and power and describe how that association can distort the relationship between citizen and state. In his most recent work even Stuart Russell, author of the definitive AI handbook, expresses concern about AI’s effect on the real world. A system that works well in a technical sense may nonetheless have undesirable effects, Russell writes. He argues that this makes it important to keep AI permanently under control: “What’s worse than a society-destroying AI? A society-destroying AI that won’t switch off.”Footnote 81

Living with AI has become an important theme. In 2016 the World Economic Forum was devoted to designing a world of smart technologies such as AI. In the same year G7 IT ministers agreed with the OECD that international talks should be held regarding the development of AI and its economic and societal implications. The OECD has since been increasingly active in this field, organizing conferences and encouraging international policy discussions. In 2019 the organization presented its principles for AI, identifying the technology’s effect on people and society as an important theme and setting out a framework for responsible further development. All OECD member states, the G20 and ten other countries have endorsed the principles, which have thus become the first intergovernmental guidelines on AI.

3.5 Social Organizations Become Involved

Attention is shifting from AI’s economic impact to its societal impact. At the international level, UNESCO has also started taking an interest. In 2018 an entire edition of the organization’s magazine, The UNESCO Courier, was devoted to the opportunities and threats society associates with AI. In her contribution, Director-General Audrey Azoulay stressed the importance of an ethical debate on AI. Unsurprisingly, she saw a role here for UNESCO: “It is our responsibility … to enter this new era with our eyes wide open.”Footnote 82 Her organization is seeking to discharge that responsibility by working on a global ethical standard for AI, to serve as a basis for the development of national policy.Footnote 83 The European Commission has meanwhile established the High-Level Expert Group on AI (AI HLEG), a body charged with providing European governments with an ethical framework for the technology. Following the publication of the European strategy for the development of AI, responsible development in which the effects of AI are taken into account is now also on the agenda. The AI HLEG’s guidelines and recommendations for ‘trustworthy AI’ are intended to promote awareness among policymakers of the ethical and societal aspects of AI and to provide a framework for managing them.Footnote 84

Concepts like ‘human-centric’ AI and ethical, humane and responsible AI are being mentioned with increasing frequency, indicating growing interest in the relationship between AI applications and human society and values. That trend is also reflected in the proliferation of publications about AI and ethics.Footnote 85 Research institutes devoted specifically to the societal implications of AI have been established too. Even in Silicon Valley, the crucible of AI’s technological development, the Stanford Institute for Human-Centered AI has been set up specifically to investigate the technology’s human and societal impact.

The AI Now Institute, also in the US, is perhaps the most prominent example of a research centre concerned with the societal effects of AI. Its annual reports serve as important catalysts of worldwide debate. Since the first appeared in 2016, AI Now’s messaging has become increasingly clear: having initially called for research into the effects of AI, the institute has moved on to arguing that certain applications should be prohibited, sometimes at least provisionally, and to setting out specific requirements for the responsible use of the technology.

The changing tone of these recommendations illustrates how the public debate on AI has developed in recent years: from promoting awareness of its effects to a substantive discourse about how certain values can be impacted and protected. This trend is apparent in the Netherlands too. The Rathenau Institute – the Netherlands Organization for Technology Assessment – began by advancing the cause of public debate regarding the impact of digital technologies such as AI, but more recently has become an active contributor to the discussion on how that impact should be managed. As experience and research have made the practical effects of new technologies clearer, so firmer ideas have emerged as to how undesirable effects should be countered. The debate many commentators were advocating a few years ago is actually happening today and has become increasingly substantive.

3.6 Sectoral Interest in AI

Growing recognition of AI’s practical potential has attracted attention from research centres and consultancies concerned primarily with non-technological fields. Having previously thought of AI as part of the general issue of digitalization, organizations active in such domains as education, healthcare, security, infrastructure and the law have in recent years turned their attention to the specific question of AI’s implications for their disciplines.

Since 2018, various sectoral bodies in the Netherlands have published studies and advisory reports addressing the significance of AI for their particular domains. The Advisory Council on International Affairs (Adviesraad Internationale Vraagstukken, AIV) and the Advisory Committee Public International Law (Commissie van Advies inzake Volkenrechtelijke Vraagstukken, CAVV) have reported on the military applications of AI;Footnote 86 the Netherlands Environmental Assessment Agency (Planbureau voor de Leefomgeving, PBL) has considered what smart algorithms could mean for mobility;Footnote 87 the Netherlands Centre for Ethics and Health (Centrum voor Ethiek en Gezondheid, CEG)Footnote 88 and the Council for Public Health and Society (Raad voor Volksgezondheid en Samenleving, RVS)Footnote 89 have explored the implications of AI in the healthcare sector; and Dialogic was commissioned by the Ministry of Education, Culture and Science to investigate AI’s impact on education.Footnote 90

These explorations and recommendations add texture to the AI debate: the different contexts in which AI can be applied demonstrate the breadth and diversity of its prospective impact. Meanwhile, growing experience with the deployment of AI is revealing the difficulties and risks involved in the step towards practical application.

3.7 The Dark Side of AI

Paralleling the growing sector-specific interest in AI, more attention has been paid to the dark side of the technology. Examples of algorithmic discrimination, accidents involving self-driving cars and dehumanization associated with excessive reliance on technology have prompted people to reflect on how they want AI integrated into society. In Europe, the EU’s Agency for Fundamental Rights (FRA) is investigating potential implications in its field. The role of AI in the development of autonomous weapons, the use of facial recognition by local authorities and police forces and the status of ‘big tech’ have all become topics of public concern. The potentially harmful side of AI is starting to dominate debate.

Moreover, the risks posed by AI and its dual-use potential, combined with the speed at which the technology is currently evolving, are adding a degree of urgency to the debate: if its increasing use is not to have undesirable consequences, not only do we need a clear picture of the risks but we also have to respond accordingly. Various US states and cities have now prohibited the use of facial recognition by the police and in public places.Footnote 91 The European Commission’s draft Artificial Intelligence Act seeks to do the same, except in special circumstances such as where compelling security considerations exist. Both the US and Europe have for some time been looking at possible ways to curb the burgeoning power of big tech corporations through competition law.

Campaign groups and lobby organizations are taking up AI-related causes as well. They include, firstly, groups dedicated to addressing problems of a particular kind – privacy or digitalization issues, for example – that have developed an interest in potential abuses associated with the use of AI. In Europe, EDRi (European Digital Rights) and other groups dedicated to protecting rights and freedoms in the digital environment are now concerning themselves with AI. Secondly, we are now seeing groups dedicated specifically to AI-related issues. They include AlgorithmWatch in Germany, which systematically surveys and critically evaluates the international use of algorithmic systems. Another development is that some major human-rights organizations, such as Amnesty International, Hivos, Human Rights Watch and UNICEF, have also started to take an interest.Footnote 92 In short, there is a growing movement within civil society concerned with the negative effects of AI.

3.8 On the Policy Agenda

Within government too, AI-related issues are commanding greater attention, partly as one aspect of the wider debate about digitalization and privacy recognized by the research institutes and advisory councils. In the early 2010s discussion of digitalization was dominated by questions relating to big data and privacy. At that time the EU was working on the General Data Protection Regulation (GDPR) with a view to providing a legal framework to enhance the protection of personal data, particularly in the digital domain.

The focus on privacy gave rise to interest in transparency as well, another principle prominent in the GDPR, and both figured in the political debate regarding AI from the outset. Alongside the more general debate regarding digitalization, the discourse around big data is gradually transforming into a discussion about how that data is processed using ever more intelligent algorithms. In its advisory report Big Data in a Free and Secure Society, the WRR had already highlighted the crucial role that algorithms play in big-data processes.Footnote 93

Since 2018, however, the public debate regarding AI has broadened discernibly. At the European level that was the year in which the AI HLEG was set up. It went on to publish a set of Ethical Guidelines for Trustworthy AI (2019), which introduced such concepts as unfair bias, accountability and welfare to the discussion. Its effect has been to raise the profile of issues of discrimination and human control. The latter became an important principle in the European White Paper on AI (2020) and the subsequent draft Artificial Intelligence Act (2021), which is considered in more detail in Chap. 7.

To begin with, however, it is the immediate challenges associated with AI applications that command most attention. In response, efforts are being made to find practical means to address those challenges. One idea that is regularly floated is the establishment of an ‘AI authority’ or ‘algorithm watchdog’ to supervise the use of artificial intelligence. Other efforts to manage its direct effects include the development of standards for AI applications by organizations such as CEN-CENELEC at the European level and the ISO at the global level, as well as the ongoing legislative initiatives.

Meanwhile, a second broadening of the public debate is now discernible. There is interest not only in AI’s implications for public values in particular contexts, but also increasingly in its impact on society as a whole. The work of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) illustrates this trend; it takes a broad interest in AI’s relationship with human rights, democracy and the rule of law. The number of governments commissioning research into its societal impact and convening advisory committees to provide wide-ranging policy-support information is further evidence of a growing recognition that AI has potential implications for all aspects of society and therefore requires structural attention.

3.9 Ethics

As awareness of the effects of AI has grown, ethics has become an important feature of the debate in recent years. Governments and private-sector actors have developed ethical codes and guidelines on the responsible use of AI, and university technology programmes have been adding ethics modules to their curricula.Footnote 94 In both technical and social studies, increasing interest in the wider relationship between AI and society has become evident.Footnote 95 Alongside their technology-based AI professorships, several Dutch universities have recently created chairs covering its societal and community aspects.

More systemic study of the implications of AI is coinciding with the emergence of a degree of ‘ethics fatigue’. Although in practice many things covered by that term have little to do with ethics, there is growing dissatisfaction with the plethora of codes and guidelines that AI is expected to comply with. These often fail to reflect the complexity of the field in practice and provide an inadequate framework to prevent abuses and undesirable developments. It seems that more structural safeguards are required to ensure that AI is aligned with our common values, and that is shifting attention beyond its actual applications to the broader dynamics of its integration into society.

3.10 Interest in the Societal Integration of AI

We are currently at a stage where there is widespread interest in AI as a multi-functional technology with great economic potential. It has also become clear that its use will have a transformative effect on established practices and could lead to undesirable situations. Until recently most attention focused on the short term and much of it on specific values. However, the scope of the debate has now broadened to encompass AI’s effects in a variety of domains and its impact on a wider range of values. Whereas the debate initially related mainly to matters of privacy, transparency and human control, there is now also interest in how AI affects other values, such as sustainability (see Box 3.2).

Box 3.2: AI and Sustainability

As AI enters everyday life, there is increasing interest in its impact on society. Issues concerning privacy, equal treatment, autonomy and security have become the focus of growing debate. Another pertinent subject – with a societal and political profile that has so far been quite low, but now attracting more and more attention from researchers – is the effect on sustainability.

There is an optimistic school of thought that AI can make a substantial contribution to enhancing sustainability. The UN’s annual AI for Good conference is devoted to topics like ecological objectives in relation to AI. Furthermore, we are now seeing numerous initiatives through which AI is indeed adding substantively to sustainability. The best known are projects to make more efficient use of energy and improve wind and solar energy forecasting. But AI is also being used for smart farming. Amsterdam-based start-up Connecterra uses Google algorithms in livestock husbandry, for example.Footnote 96 There have also been interesting initiatives in nature conservation. One is eBird’s use of machine learning algorithms in ornithology and utilization of the output data for bird protection. Another example is the use of AI by Global Fishing Watch for population monitoring. Finally, the EU’s Destination Earth initiative (DestinE), for which the use of AI is also envisaged, should not be overlooked.Footnote 97

On the other hand, there is a growing body of evidence that AI can have negative impacts on sustainability. The CO2 footprint of the global computing infrastructure is already greater than that of the aviation industry at its peak. Training a single natural language processing model can be associated with emissions equivalent to 125 return flights between New York and Beijing.Footnote 98 Furthermore, AI is being used to maximize fossil energy production and to promote non-sustainable consumption.

Peter Dauvergne has claimed that for every example of AI having a positive impact on sustainability, there are multiple cases of negative impacts. He attributes that to the wider political economy and the power structures associated with the technology. As long as the landscape is characterized by a commercial logic focused on exploitation, AI will not have a positive net effect on sustainability.Footnote 99 Achieving its potential in this area, he argues, will require changes to power structures, to the actors using AI and to the purposes for which the technology is used.

The Dutch government’s request to the WRR for advice on AI, which led to this report, was signed by nine different ministers, indicating that AI’s impact is relevant to all areas of policy and has the potential to affect all their core values. Interest in the effect of AI in particular contexts has effectively coalesced into an interest in its impact on society as a whole.

Since AI first appeared on the public agenda as a revolutionary technology, its effects and especially its risks have gradually become more important topics of debate. Now that it is being put to practical use in more and more spheres, and is set to find even wider applications in the future, interest in its impact is becoming more structural: as we develop AI, how can we safeguard the things that we value as a society, our civic values? To answer this question, we must look beyond AI’s immediate effects and consider the longer term. Safeguarding civic values depends not only on the robustness of our technical systems, but also on the structure of society itself.

At this point in the development of AI we face the challenge of determining what is needed to achieve the technology’s structural integration within society. Before considering the various aspects of that issue, it is important first to consider the role the lab will continue to play in the process.

Key Points – AI as a Phenomenon in Society

  • AI’s transition from lab to society has generated a societal dynamic.

  • At first that dynamic was characterized by interest in AI as a revolutionary technology. Initially, the primary focus was the economic opportunities.

  • As more practical experience has been gained, the potential negative consequences of using AI have become clearer. As a result, interest in the opportunities is increasingly accompanied by consideration of the risks, and a public and political debate has arisen.

  • The AI debate initially focused on specific values such as privacy, non-discrimination and transparency and on application of the technology in particular contexts. However, AI’s wide range of potential uses has broadened the debate to cover its impact on society as a whole and all the associated civic values.

4 The Future of the Lab

We have seen how AI has moved out of the lab and become a feature of society in many different respects, in the form of numerous applications and wide-ranging public debate. What implications do such developments have for the future of the lab? We should not expect that now AI has established itself within society, the lab’s significance or dynamism will decline – that would be a mistake. The transition from lab to society does not imply that AI is a perfected technology requiring no further development, or that from now on attention should focus solely on its applications.Footnote 100 Rather, the breadth of the lab-to-society transition implies that a wider range of issues related to AI’s societal integration will warrant attention, as we outline in the next chapter.

Despite all the activity in the application sphere, lab development remains vital for at least two reasons. The first is that, despite all the advances made in recent years and the innovations they have enabled, AI still has significant limitations. The current methods offer no answer to a variety of questions. People are already talking about the limits to the capabilities of deep learning, for example. While it is impossible to say whether this will lead to a third ‘AI winter’, it is clear that the technology still has a long way to go, and that significant progress will require further fundamental research. The second reason for the lab’s continuing importance relates to the particular nature of AI. It is a form of technology in whose application the lab must remain involved. Strictly speaking, then, AI has not left the lab but extended beyond it.

4.1 The Need for Fundamental Research

Various experts have made the point that access to more and better data is key to overcoming many of the current limitations of machine learning. Interesting developments are taking place in this field, such as generative adversarial networks (GANs), in which two algorithms are pitted against each other so that each improves the other. One algorithm generates something new, such as an image of a bird; in response, the other indicates whether or not it recognizes it as a bird. If not, the first algorithm continues refining the image until the second one is ‘convinced’.
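The adversarial back-and-forth described above can be sketched in miniature. The following toy example is purely illustrative: the ‘generator’ and ‘discriminator’ are each reduced to a two-parameter model producing and judging single numbers rather than images, with hand-derived gradient steps standing in for real network training.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 'Real' data: numbers clustered around 10 (a stand-in for real images).
def sample_real():
    return random.gauss(10.0, 0.5)

w, b = 1.0, 0.0   # generator: fake = w*z + b, from random noise z
a, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(a*x + c) = P(x is real)

lr = 0.02
for step in range(2000):
    z = random.gauss(0.0, 1.0)
    real, fake = sample_real(), w * z + b

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: refine the fake until the discriminator is 'convinced'.
    d_fake = sigmoid(a * fake + c)
    grad = (1 - d_fake) * a          # descent direction that raises D(fake)
    w += lr * grad * z
    b += lr * grad

samples = [w * random.gauss(0.0, 1.0) + b for _ in range(200)]
print(sum(samples) / len(samples))   # should have drifted towards the real mean (~10)
```

The same dynamic, scaled up to deep networks and image data, is what allows GANs to conjure convincing samples without ever being shown an explicit description of what a ‘real’ example looks like.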

Ray Kurzweil believes that such simulation methods can resolve many of the problems associated with data shortages. For example, rather than self-driving cars having to learn in real traffic, with all the attendant dangers, they could travel millions of kilometres in simulated worlds without putting anyone at risk.Footnote 101 Similarly, defence robots could be trained within a simulation so that they are more advanced prior to deployment in the real world. Another promising approach is federated learning, in which the data used to train a machine learning algorithm is never loaded onto a central server; instead, local models are trained where the data resides and only their parameter updates are shared and combined, without the underlying data ever being pooled. This approach is particularly suited to privacy-sensitive data such as hospital records.
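The federated set-up can be illustrated with a deliberately simplified sketch: three hypothetical ‘hospitals’ each fit the same one-parameter model on their own data, and only the fitted parameter, never the data, is sent to a central server for averaging.

```python
import random

random.seed(1)

# Three sites, each holding privacy-sensitive data locally. The shared model
# is y = w * x; only the locally updated parameter w ever leaves a site.
true_w = 3.0
local_data = [
    [(x, true_w * x + random.gauss(0.0, 0.1))
     for x in (random.uniform(0.0, 1.0) for _ in range(50))]
    for _ in range(3)
]

w_global = 0.0
for round_ in range(30):
    local_ws = []
    for data in local_data:
        w = w_global                       # start from the shared model
        for x, y in data:                  # local gradient steps on local data
            w -= 0.1 * 2 * x * (w * x - y)
        local_ws.append(w)
    w_global = sum(local_ws) / len(local_ws)   # server averages parameters only

print(round(w_global, 2))   # close to the underlying coefficient 3.0
```

The central server never sees a single patient record, yet the averaged model approximates what training on the pooled data would have produced.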

Despite such developments, scientists believe that innovation remains necessary, partly because machine learning appears to have inherent limitations. For example, progress is needed in the field of computer vision if it is ultimately to be used for autonomous vehicles or security applications. Now its algorithms are relatively easy to fool, as demonstrated by experiments showing that tiny traffic signs too small for the human eye to see were treated like the real thing by self-driving cars. Current algorithms look for patterns, so if a minute sign closely matches the pattern sought then it will be interpreted with confidence as a real one. Its abnormal dimensions are not noticed by the existing algorithms.

This makes such algorithms vulnerable to adversarial attacks – with potentially disastrous consequences in the case of a self-driving car. A recent study demonstrated that changing a single pixel in an image can confuse an AI algorithm. The military use of AI is another field where vulnerabilities like these can have serious repercussions; it has been shown, for instance, that an image classifier can be fooled into identifying a machine gun as a helicopter.Footnote 102 The same attack strategy could be deployed for other purposes too. Google uses an algorithm to classify videos for the protection of intellectual property rights and so on. Researchers at the University of Washington showed that this could be tricked by inserting random images into a video for fractions of a second.Footnote 103 In an incident in the US, a police officer who was being filmed started playing music, presumably in the belief that YouTube’s algorithms would prevent the video being shared on intellectual property rights grounds.Footnote 104
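The pattern-matching vulnerability described above can be made concrete with a toy version of a gradient-based (‘fast gradient sign’) attack. The five-number ‘image’ and the fixed weights below are invented for illustration; real attacks work the same way, only on high-dimensional inputs.

```python
# A linear 'classifier': a positive score means the input is accepted as real.
w = [0.8, -0.5, 0.3, 0.9, -0.2]    # fixed, already-trained weights (illustrative)
x = [0.6, 0.4, 0.2, 0.3, 0.5]      # an input the model classifies correctly

def score(weights, features):
    return sum(wi * fi for wi, fi in zip(weights, features))

def sign(v):
    return 1.0 if v > 0 else -1.0

# Nudge every feature by a small eps in the direction that lowers the score.
# For a linear model the gradient with respect to the input is simply w.
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(score(w, x) > 0, score(w, x_adv) > 0)   # True False
```

Although no single feature changes by more than 0.2, the classification flips, mirroring how imperceptible pixel-level changes can mislead an image classifier.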

4.2 Superficial and Inefficient

Numerous other shortcomings with machine learning illustrate that a great deal of lab work is still required, as AI pioneers have themselves acknowledged. Yoshua Bengio has argued that deep neural networks can learn superficial statistical regularities from datasets, but not higher abstract concepts. They therefore lack the type of understanding needed for certain tasks and forms of communication. Geoffrey Hinton and Demis Hassabis, founder of DeepMind, have both stated that general artificial intelligence is currently nowhere near being a workable reality.Footnote 105

Pioneer Hinton is critical of current methods and has highlighted various shortcomings.Footnote 106 One is inefficiency. Machine learning is more like human learning than earlier technologies were. For example, images are recognized by identifying patterns rather than by following fixed rules. In that respect the machine and human learning processes are alike. Nevertheless, major differences also exist, and humans are still able to learn far more efficiently. A small child only needs to see a few apples to acquire the ability to recognize apples in the future. By contrast, machine learning algorithms need to be shown thousands of images of apples before they are trained to identify the fruit. Furthermore, while the volume of data available globally to train algorithms is increasing, the situation varies from domain to domain; many still have a shortage. Another problem is that, for certain applications, high error rates during the training phase entail serious dangers.

4.3 Common Sense

A related problem is AI’s lack of common sense. The earlier example of the tiny road signs perfectly illustrates this. Current algorithms are designed to be capable of processing all possible images and therefore cannot distinguish between plausible and implausible contexts. Although they recognize patterns, they are not good at ascribing significance to them. CAPTCHAs – completely automated public Turing tests to tell computers and humans apart – are a good example. They are the tests you come across on the internet that ask you to prove you are not a robot by, for example, selecting all the photos in a group that include trees. Passing requires common sense. Even if a picture includes only a small part of a tree, a human can usually see what it is by drawing conclusions from the surrounding objects, such as bushes.

For an algorithm, however, the limited number of data points available as a basis for recognition is usually problematic. CAPTCHAs therefore reveal the current limitations of machine learning in situations that call for common sense. Machine learning algorithms have no access to the collective knowledge acquired elsewhere by other programs. They therefore have trouble answering questions that humans can answer without hesitation, like “Who is taller, Prince William or his young son Prince George?” or “If you stick a pin into a carrot, will you make a hole in the pin or the carrot?”Footnote 107

Humans answer such questions by drawing on a large pool of implicit knowledge. When we speak, we do not provide all the relevant information because we assume that the listener will make deductions based on the context. If someone instructs a taxi driver to take them to the airport as quickly as possible, the driver knows that they are not expected to drive without regard for the rules of the road or the safety of other road users. An algorithm, however, lacks that kind of implicit background knowledge. In other words, the language is underspecified, and no fact exists in isolation.Footnote 108 Stuart Russell cites the example of progress in the field of physics. By analysing data from telescopes, an algorithm can develop new knowledge. However, progress depends on more than merely studying additional data. The formulation of hypotheses and the selection of factors for inclusion from the universal data pool rely on prior knowledge of physics, which does not exist in a form an algorithm can process.Footnote 109

4.4 Lack of Transparency

Another shortcoming is that current machine learning algorithms lack transparency, which often makes it extremely difficult to ascertain how they come to a given conclusion. In many cases the decision-making process can be uncovered, but that does not necessarily imply that it is explainable: knowledge is not the same as understanding. A decision to classify something based on a pixel-level detail is unfathomable for humans, for example. In many cases this is not a great problem. If the algorithm’s decision relates to something like a security risk, a mortgage application or a medical diagnosis, however, opacity has serious implications. In such contexts, explainability is therefore a requirement.

Some experts believe that the complexity of the algorithms needed for applications of this kind does not present an insurmountable problem. Hassabis, for example, takes the view that we are currently building the systems and that the construction phase will be followed by a process of reverse engineering aimed at understanding how they actually work. He therefore believes that, within a decade, most systems will no longer be black boxes.Footnote 110 Yann LeCun suggests that it is as if we are still in the process of inventing the internal combustion engine but are already worrying about brakes and seatbelts. Such problems can be addressed at a later stage, he argues.Footnote 111

Other experts see poor explainability as inherent to the technology, making a different approach necessary. According to Judea Pearl, human knowledge is expanded not by a blind process but by building and testing models of reality. Current machine learning approaches are limited because they focus on correlation, not causality.Footnote 112 He draws an analogy with the difference between Babylonian and Greek astronomy. While the Babylonians were able to make very accurate predictions, better than the later Greeks, the process they used was unreproducible – a black box – and the mechanisms underpinning the predictions were not understood. The Greek approach was based on understanding those mechanisms, and the emphasis on causality proved central to the subsequent development of science. Pearl thus regards the current non-model-based approach to machine learning as inadequate.Footnote 113

4.5 Old and New Approaches

With a view to addressing shortcomings of this kind, alternative approaches are now being developed. Several build on ‘good old-fashioned AI’, the symbolic technology with which the discipline began. Such rule-based systems are used, for example, where the amount of available data is limited. Siemens has a system built on that principle to control gas turbine processes in its factories. Without predefined rules the turbines would have to run for a century to train an algorithm to do the job effectively by means of machine learning.Footnote 114 It is also difficult to apply machine learning in situations where that would imply using large volumes of privacy-sensitive data and generating results with an opaque basis. In such cases, top-down logical systems may offer a solution. Another, related suggestion is to use rule-based systems in combination with machine learning to predict outcomes and thus deduce what rules are being followed, so making the results more transparent. People like Yann LeCun and Nick Bostrom believe that the future lies in adding structure and modelling to existing machine learning techniques.Footnote 115
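A rule-based system of the kind referred to above can be sketched as a minimal forward-chaining engine. The turbine-style rules below are invented for illustration and are not a representation of Siemens’ actual system.

```python
# Domain knowledge written down explicitly as (conditions, action) rules.
rules = [
    ({"temperature_high", "vibration_high"}, "shut_down_turbine"),
    ({"temperature_high"}, "reduce_load"),
    ({"vibration_high"}, "schedule_inspection"),
]

def infer(facts):
    """Fire every rule whose conditions are all present in the known facts."""
    return [action for conditions, action in rules if conditions <= facts]

print(infer({"temperature_high", "vibration_high"}))   # all three rules fire
```

Unlike a trained model, every decision here is fully traceable to an explicit rule, which is why such systems remain attractive where data is scarce or opacity is unacceptable.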

In a variation on the hybrid approach, efforts are being made to code common sense into algorithms. For example, DARPA has a Machine Common Sense programme. This is creating models that distinguish between various categories, such as objects, locations and actors, as happens in human cognition. There are also related approaches that involve building certain principles into algorithms so that they do not have to learn everything from scratch; these work like the inductive biases that influence learning in children. At an early age children learn the basic physics of objects, how they move through space and, for example, that they cannot pass through each other. Principles of that kind guide and accelerate the learning process so that a child does not have to see thousands of examples of something before it can recognize the item in question. One approach that works that way is the graph network, in which objects are represented by circles and relationships by lines.Footnote 116 Geoffrey Hinton is working on ‘thought vectors’ to better capture the meaning of language,Footnote 117 while the Allen Institute for AI’s Project Mosaic is endeavouring to program common sense into computers.

Another set of approaches has close links to neuroscience. Just as neural networks were inspired by the workings of the brain, so there are now initiatives to create neuromorphic chips. If successful, future computers could be fitted with chips modelled on the workings of neurons. The European Union’s Human Brain Project is aimed at building a brain made up of computers,Footnote 118 as is the BRAIN Initiative in the US.Footnote 119

Besides the two familiar approaches of symbolic AI and artificial neural networks, Margaret Boden distinguishes another three: evolutionary programming, cellular automata and dynamical systems.Footnote 120 In his quest to find the ultimate algorithm, Pedro Domingos has likewise identified three further techniques that can contribute to the search alongside the symbolic and neural approaches. Genetic programming is an approach used in the design of electronics and the optimization of factories.Footnote 121 There are also Bayesian methods, such as naive Bayes classifiers and hidden Markov models, which are used for spam filters, speech recognition systems, cleaning up data series and so on.Footnote 122 Finally, there are analogy-based systems, such as the nearest-neighbour algorithm and support vector machines. The analogy-based approach has been used for modelling the solar system and atoms, and to produce music in the style of particular composers.Footnote 123
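Of the Bayesian methods mentioned above, the naive Bayes spam filter is the easiest to sketch. The six training messages below are invented toy data; a real filter would be trained on a large corpus.

```python
import math
from collections import Counter

# Toy training data: (message, label). Purely illustrative.
train = [
    ("win money now", "spam"), ("free money offer", "spam"),
    ("win free prize", "spam"), ("meeting agenda today", "ham"),
    ("project status report", "ham"), ("lunch meeting today", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def classify(text):
    # Pick the class maximizing log P(class) + sum of log P(word | class),
    # with add-one (Laplace) smoothing so an unseen word never zeroes out a class.
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("free money"), classify("status meeting"))   # spam ham
```

The ‘naive’ assumption, that words occur independently given the class, is plainly false for natural language, yet the method works well enough that it has powered spam filters for decades.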

Although machine learning has taken off in recent years and has rapidly been adopted in a wide variety of domains, it has its limitations and alternatives are being investigated. Each of these has its own strengths and weaknesses, meaning that the most suitable approach differs from one application to another. It may be that in the future the emphasis will be placed on selecting the right approach for each application, with no particular one regarded as universally preferable. Many experts see hybrid approaches as the future, arguing that human intelligence works in a similar way. For example, the unconscious recognition of familiar patterns is attributable to neural networks whereas unfamiliar situations are addressed using conscious reasoning, which is more akin to symbolic AI. From all this we can safely conclude that AI is far from perfected and so fundamental research is going to remain very important.

4.6 The Lab Belongs with AI

As indicated earlier, the second reason why the lab will continue to play an important role relates to the nature of AI itself and of digital technology more generally. Whereas the traditional pattern is for something to be invented in a lab, then developed at a factory into a finished product for sale to the customer, digital products are characterized by a different dynamic. It is normal for their developers to remain involved in their application. Consider, for example, the difference between traditional television and a streaming service like Netflix. In the former a broadcaster airs a programme and then receives feedback from viewers that can be used to create new, improved output. With a streaming service the user remains connected to the provider’s platform, and it learns from their behaviour in real time, enabling immediate adaptation. A digital product can therefore be regarded not as a finished product but as a semi-finished one. Streaming platforms, smart thermostats, healthcare apps and all other digital products are continuously adapted and improved in use.

This is reflected in the structure of the technology industry. A ‘lean start-up’ will quickly develop a ‘minimum viable product’ (MVP). In many cases this does not work very well and is marred by numerous ‘bugs’, but it is at least usable. Once in use it can learn and improve. Because of this, ‘the lab’ in a sense remains present within the product and continues to play a major role in its further refinement. This is why collaborative initiatives like the AI labs referred to earlier, where scientists (‘the lab’) are in contact with businesses and/or government agencies, are common in the industry. We could even go so far as to say that with AI’s transition from lab to society, the lab itself has entered mainstream society. Another way of looking at it is that society has been absorbed by the lab to become a ‘living lab’. Facebook, for example, develops new services by continually running experiments involving the platform’s users. It should be acknowledged, however, that this practice has sometimes proven highly controversial, as with Facebook’s experiments aimed at influencing users’ emotions.Footnote 124

The particular dynamics of digital product development give rise to a range of issues, which we consider in later chapters. One is the ‘technical debt’ problem, whereby it can be difficult to rectify a shortcoming in an MVP.Footnote 125 Furthermore, the development process is liable to entail a variety of risks. Because rollouts are not initially developed to end-product status, users may be exposed to something with undesirable or even harmful effects. By the time those are detected, the damage has already been done.

The semi-finished nature of AI products also presents challenges for regulators. For example, vehicles are normally tested by the responsible authorities before they can be used on public roads. This approach is workable in a world where the vehicles are end products, but not when they are semi-finished and liable to change once in use – as with a Tesla that receives a software update. The possibility of a continuous testing regime is therefore being investigated. In the healthcare sector too, the functionality and safety of devices has traditionally been tested prior to licensing on the assumption that their basic functions will not subsequently change. An AI product is constantly evolving, however, and changes can be implemented remotely. Such ‘lab dynamics’ therefore require a more dynamic approach to testing. We return to this topic in Part 2. For now, it is important to recognize how the lab remains involved with, and inseparable from, an AI product after its practical rollout.

The lab is thus slated to continue to play a major role in the future of AI. The limitations of the current approaches are such that further fundamental research is required, while the very nature of AI implies that the lab will always be associated with the technology’s practical application. In the interests of further technical advances and AI’s successful integration within society, it is therefore important to keep sight of the lab’s role, to involve it in practical implementation and to ensure that it has adequate resources and talent.

Key Points – The Future of the Lab

  • Although AI is now making the transition to society, the lab remains as relevant as ever. There are two main reasons for this.

  • First, current AI methodologies have a variety of shortcomings. They are superficial, inefficient, lacking common sense and opaque. Fundamental research therefore remains very important to address drawbacks of this kind.

  • Second, as with digital technology in general, lab research continues even after an AI product enters practical use. So in fact the lab itself enters society together with the product.