1 Introduction

Artificial intelligence (AI) is one of the hottest, perhaps the most popular, topics in contemporary social scientific technology research (Yigitcanlar and Cugurullo 2020). As Paschen et al. (2020) describe, AI is broadly a set of digitalised (computational) agents that can observe and combine information and act upon that information successfully. In practice, AI is a collective term that includes data capture, analysis, and sharing technologies such as the Internet of Things (IoT), neural networks, machine learning, and robotics (Zhang et al. 2019; Yigitcanlar et al. 2020a; Alter 2021; Barns 2021).

The first steps in computer system integration and industrial robotics emerged in the 1960s, with the goal of substituting human labour on production lines (primarily in the automobile industry). Since then, industrial robotics and assembly lines have produced numerous high-end consumer products ranging from electric vehicles to smart home appliances and other applications. Today, governmental agencies and offices are also boosting their operations with AI applications (Kaplan and Haenlein 2020). However, there is an undeniable knowledge gap between public perceptions of AI and the implications it has for public life (Cui and Wu 2021; Selwyn and Gallo Cordoba 2021).

A recurring observation in current AI research (and utilisation) is the highly limited attention to public attitudes (perceptions) and the limited role of citizens in technology studies in general (Gao et al. 2020; Regona et al. 2022). That the research is limited can be verified by searching the ScienceDirect and Scopus databases. A simple Scopus search returned only 126 papers (as of 29 September 2021) with the joint keywords “AI and citizens” in the field of social sciences. Alternative phrases such as “AI and perceptions” or “AI and public opinion” returned even fewer matches. The retrieved topics confirm that the majority of current social scientific AI research concerns applications associated with public administration or private business development and customer intelligence. The field is also new: 87% (n = 110) of the available papers (in the social science category) were published after 2015. This time bias is probably due to the extensiveness and coverage of Scopus, which limits the availability of older research papers, but it also confirms the topicality of AI as a rising trend in social scientific research.

One of the challenges for the public in forming a clear opinion on AI technologies is that these technologies are a largely invisible element of daily life, mostly driven by proprietary algorithms. AI technologies are difficult for most people to understand and hence hard for the public to form a clear and accurate opinion on their opportunities and constraints. There is also a major problem in AI research that fails to enlighten the public. To elaborate further, Johnson and Verdicchio (2017, p. 575) see the root of this limited public understanding of AI as “a confusion about the notion of ‘autonomy’ that induces people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system”. According to Crawford and Calo (2016), not engaging the public in AI research is a blind spot. Hence, AI companies and researchers need to work with the public to advance public understanding and develop shared standards.

There is strong evidence that AI is rapidly and fundamentally becoming embedded in modern life. Today, AI applications are used in almost all sectors at increasing levels, including, but not limited to, the retail, manufacturing, construction, transport, marketing, banking, finance, medicine, mining, engineering, and energy sectors, and government services. The main factor motivating these sectors in large-scale AI adoption is the creation of efficiencies: reducing human error, removing physical risks from humans, offering 24 × 7 availability, helping with repetitive jobs, providing digital assistance, making faster decisions, and so on (Kabalisa and Altmann 2021).

Common AI applications reinforce the pervasiveness of the technology, yet they remain invisible to the public. For instance, numerous digital services and applications (including traditional and social media platforms) such as Amazon, Netflix, Facebook, Instagram, and Twitter plan and execute marketing actions based on big data on their clients’ behaviour—and in most cases, the users are not aware of the AI algorithms constantly monitoring their behaviours.

The current analytical algorithms are self-learning, thus exhibiting fundamental principles of AI (Yigitcanlar et al. 2020b). Even though they may be far from the human-like forms of intelligence depicted in science fiction, they possess several characteristics common in work aiming to extract the essential from the bulk (e.g. in data processing). Customer (as service user or citizen) profiling tends to direct behaviour towards the desired outcome motivated by the algorithm design.

On the one hand, the current condition of citizen (or end-user) perception and knowledge regarding elements of AI is topical. These issues include, for example, general awareness of data privacy, data management protocols and security, IoT integrations, and interchangeable data sharing (Sutherland 2008). On the other hand, a challenge for the analysis of citizen perceptions is familiarity with AI technology and its implications.

Survey methods have a long and extensive research tradition in the social sciences (Evan and Miller 1969). The golden years of survey research fell in the latter part of the twentieth century; today, conducting rigorous and extensive surveys is both expensive and difficult. There are several reasons for this, including an increased sense of privacy, data management requirements, difficulties of address acquisition, and respondents’ lack of time to answer extensive questionnaires. Keeping this in mind, the paper is based on rigorously collected (extensive) survey data asking questions about perceptions of highly up-to-date AI. Our approach provides novel evidence that is reliable, repeatable, and rigorous.

The structure of the paper is as follows. Following this introduction, we review the essential literature analysing the topic of citizen perceptions of technology, including AI. The literature review stresses the importance of a diverse understanding of AI, as it cannot be properly discussed without data analytics, data warehousing, IoT, and the other fundamental technologies enabling the information economy and society. After the literature background, we move to the empirics and present new evidence from Australia. The paper concludes with a discussion of the findings and future research directions.

2 Literature background

Earlier research has identified the need for multidisciplinary collaboration to assess the role and importance (impacts) of AI in society (e.g. Boyd and Wilson 2017; Theodorou and Dignum 2020; Dahlin 2021; De Neufville and Baum 2021). The basic argument follows the line of thought that AI concerns mainly technological development driven by the engineering domains of science, whereas the social sciences focus on social impacts and uneven developments caused by technological progression. These include, to name a few, issues of human rights (Chatterjee and Sreenivasulu 2022), public health (Masys et al. 2021), general issues of citizen well-being and the future of technologies (Cortes et al. 2021), responsible and sustainable futures (Yigitcanlar et al. 2021a), and ethics (Russel et al. 2015). These topics are at the heart of the survey prepared for the study at hand, and we detail these questions in Appendix A (Parts B, C, D, F).

Social scientific research has established that technological progression is inherently linked to social conditions and acceptance. One of the main societal impacts of AI derives from continuously improving data processing capability and efficiency (Kitchin 2014). AI contributes directly to evolving decision support systems targeted at improving efficiency and worst-case outcomes (e.g. on the human–machine decision problematic, see Yigitcanlar et al. 2020c; March 2021).

Open and closed innovation mindsets are identifiable in AI systems, as grassroots-level (bottom-up, diversified) development tends to favour the former and business-driven (top-down, kernel/silo) development the latter (Lewowski and Madeyski 2022; Stahl 2022). From the end-user (citizen) perspective, the form of provision is perhaps not the most interesting aspect, but it entails the traditional logics of dichotomies. Thus, AI presence can be achieved with open or closed systems that are produced either publicly or privately, and they can be non-profit or profit-seeking ventures.

AI algorithms evolve constantly. Their main use concerning citizens (or customers) is in data integration and association. This raises questions of data integration (i.e. database combining). In the case of the public sector (Wirtz et al. 2018; Reis et al. 2019), the combination of different administrative databases enables a highly accurate estimation platform for behaviour probabilities. In several countries, registry and data legislation forbids automated public registry data integration, such as cross-referencing health records, income/taxation records, social welfare records, or criminal, legal, and court records, to name a few examples. These concerns contribute to trust and privacy issues towards governance and the public sector in general.

Machine learning and autonomous decision-making are fundamental goals of AI. The ultimate aim is a justified (data-based) decision providing the best solution for each problem. AI should provide efficiency, thus delivering better service at lesser cost both to the customer and to the service provider. As such, AI-based public service development requires collaboration. This entails critical implications for the philosophy of provision, as numerous public sector services are “in-house” productions or are tailored ad hoc to meet the specific requirements of that public domain. Generally, public services are considered such that they should be independent of a single service provider (i.e. a private company with an exclusive contract) that might create liability risks in disagreements (Agarwal 2018). This topical issue concerns the risk, trust, and reliability of AI (Appendix A, Parts E, G).

AI implementations face numerous open questions and drawbacks. Cortes et al. (2021) point out a concern regarding “low-quality” data that could have negative impacts on governance quality. Addressing data quality issues and AI algorithms’ capability to cope with bad-quality data will be one of the most important steps in creating a (technological) entity capable of decision-making that has impacts on citizens. Another significant issue concerning city governments (or other regional or territorial legal entities) is whether they implement the same data management structures horizontally (convergence) or use whatever platform they see fit for their needs (divergence).

The relationship between technology and society dates to the beginning of human civilisation (Yigitcanlar 2016). The survival of humankind depended and still depends on technology, whether mechanical in nature before the digital revolution or digital since then (Allam 2019). While the hazards of the misuse of technology, such as nuclear and AI technology in warfare or the big-brother AI practices of autocratic governments, are acknowledged, Latour (1990) referred to technology as a vehicle that “makes society durable”. This is true in many ways, for example in green/renewable technologies as a contributor to sustainable development and climate action efforts (Mendes 2022). Along with this, technology adoption by the public and societal change have been an important research area during the last several decades (Koul and Eydgahi 2018).

Today’s societies produce enormous amounts of data (Ghani et al. 2019). The ever-increasing data enables AI to hold the promise of better services and new business opportunities. It has a significant role for both the private and public sectors, as both create new services to meet the evolving demands of the digitalised society (Appendix A, Part H) (Makridakis 2017; Wang et al. 2021). In daily life, survey respondents’ most likely contact with AI comes from urban technologies (Batty 2018; Cugurullo 2020; Yigitcanlar et al. 2021b). Perhaps the most visible field is traffic management and modelling. There are strong motivators for AI use here—traffic volumes and flows are easy to model due to real-time traffic data. These are practical examples of making everyday life easier (e.g. less time-consuming urban transport), thus creating a positive demand for AI services (Yigitcanlar et al. 2022a). We assume that this has an impact on the survey perception descriptive statistics as well as the analytic models.

In the case of emergencies and disasters, AI-supported control systems provide more flexible and modular response systems, for example to cope with congestion or accident emergencies. Real-time (agile) services are not the only potential application fields for AI (Abduljabbar et al. 2019). For example, police, fire departments, and search and rescue units are traditional public (emergency) services. AI applications in these departments require highly developed legislation regarding the division of responsibilities between (possibly numerous) system providers, system management, and individuals. Coping with emergencies and disasters entails an essential aspect of future AI (Appendix A, Part I).

Based on the above considerations, we formulate our empirical task to concern the complex issue of the social construction of AI. Inkinen et al. (2018) conducted a reference study on e-service adoption that highlighted the importance of education and age as the main explanatory variables determining attitudes and beliefs in e-service use and adoption. Our empirics apply them as well (Appendix A, Part A). In the content questions, we emphasise attitudes towards and views of services as well as the public view of trust and privacy towards AI. The data provide a timely awareness check on the concerns and benefits perceived by Australian respondents in the largest cities, covering private and public services and their acceptance drivers.

3 Methodology

We conducted an online survey to capture public perceptions of AI. The target areas were the largest capital cities of Australia, namely Sydney, Melbourne, and Brisbane. Individuals living within 50 km crow-fly distance from the central business districts (CBDs) of these three cities were targeted. The only criterion to qualify for participation in the survey was being 18 years old or over. A professional survey panel recruiting company was hired to recruit participants. The panel company sent the online survey link to 2193 individuals via email at the beginning of May 2020, and 605 valid responses were received within the same month. This figure of 605 is well over the minimum sample size requirement of 385, calculated using the Australian Bureau of Statistics sample size calculator—with a 95% confidence level, a 0.05 confidence interval, and a population size of 5 million. The tool is accessible online at https://www.abs.gov.au/websitedbs/d3310114.nsf/home/sample+size+calculator.
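For readers wishing to reproduce the minimum sample size, a minimal Python sketch follows. The internal formula of the ABS calculator is not documented here, so we assume the standard Cochran formula with finite-population correction and the most conservative proportion of 0.5; under these assumptions it reproduces the reported figure of 385.

```python
import math

def min_sample_size(population: int, z: float = 1.96,
                    margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with finite-population correction.

    z is the critical value for the confidence level (1.96 for 95%),
    margin is the confidence interval half-width, and p = 0.5 is the
    most conservative response proportion (an assumption; the paper
    does not state the proportion used).
    """
    top = population * z**2 * p * (1 - p)
    bottom = margin**2 * (population - 1) + z**2 * p * (1 - p)
    return math.ceil(top / bottom)

# 95% confidence level, 0.05 confidence interval, population of 5 million
print(min_sample_size(5_000_000))  # -> 385
```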

Participants were provided with a definition of AI to avoid bias due to misinterpretation. A total of 605 valid responses were received from the survey participants, resulting in a response rate of 27.6%. The largest shares of participants were female (50.2%), aged 35–44 years (20.8%), held a bachelor’s degree (31.9%), were employed (61.5%), mostly in the retail trade industry (9.1%), worked as professionals (17.7%), and had a gross weekly income of $1,000–1,999 (29.3%). We also provide salient demographic characteristics of the three study areas to show the representativeness of the survey participants with respect to the case study areas (Appendix B).

Despite our best efforts to achieve an ideal representation of the population characteristics of the three case cities in the survey, some representation differences occurred between the study participant characteristics and the actual resident characteristics of the three case cities—e.g. in our survey, 16.5% of the participants were 65 years old or over, whereas this figure for Australia is 22.9%. This mismatch is not an uncommon challenge in survey studies, and thus, along with the others, this limitation is mentioned in the conclusion section of the paper. Additionally, we note that Covid-19 may have influenced the perceptions of the surveyed populations—circumstances may have made them interact more with technology—as the pandemic and lockdowns have accelerated the pace of digital transformation in our societies (Nagel 2020).

To measure public perceptions of AI in terms of perceived benefits, risks, and trust, we developed constructs following the lead of the academic literature on AI, public perceptions, and government decision automation (e.g. Fraszczyk and Mulley 2017; Araujo et al. 2020; Peng 2020; Stai et al. 2020; Cui and Wu 2021; Dennis et al. 2021; Kassens-Noor et al. 2021; Selwyn and Gallo Cordoba 2021). All items were measured on a 7-point Likert scale ranging from “disagree” to “agree” for perceived benefits and risks, and from “distrust” to “trust” for the trust construct.

To categorise the identified attributes under fundamental dimensions and facilitate the interpretation of the data, we conducted a factor analysis using SPSS. Complementary to the factor analysis, we also conducted a descriptive analysis, in which we calculated the level of agreement on the investigated attributes. To calculate the level of agreement, the “agree” categories were combined into a percentage of agreement, and the “disagree” categories were combined into a percentage of disagreement. The percentage of the mid-point was reported as “neutral”. We ranked the investigated attributes by the percentage of agreement and disagreement.
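As an illustration of this descriptive step, the sketch below collapses a 7-point item into the three reported shares. It assumes, as is conventional but not stated explicitly above, that categories 5–7 count as agreement, 4 as neutral, and 1–3 as disagreement; the file and column names are hypothetical.

```python
import pandas as pd

def agreement_breakdown(item: pd.Series) -> dict:
    """Collapse a 7-point Likert item (coded 1-7) into agree/neutral/disagree shares.

    Assumes 5-7 = agree, 4 = neutral, 1-3 = disagree (a conventional split;
    the exact coding is an assumption, not taken from the study).
    """
    valid = item.dropna()
    n = len(valid)
    return {
        "agree %": round(100 * (valid >= 5).sum() / n, 1),
        "neutral %": round(100 * (valid == 4).sum() / n, 1),
        "disagree %": round(100 * (valid <= 3).sum() / n, 1),
    }

# Hypothetical usage with a survey data frame:
# df = pd.read_csv("survey.csv")
# print(agreement_breakdown(df["risk_monitoring_without_permission"]))
```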

4 Results

We conducted the factor analysis to reveal the latent variables causing the covariation between the observed variables. To achieve a clean factor structure, we suppressed coefficients under 0.5. Table 1 shows that items loaded highest on their respective factors and that cross-loading was not an issue. We calculated Cronbach’s alpha, which confirmed high reliability, above 0.9 for each factor; values above 0.7 indicate strong reliability. Further, we conducted a Kaiser–Meyer–Olkin (KMO) test, which revealed a value of 0.926. The KMO test indicates whether the included variables are adequate for factor analysis; values above 0.6 are considered acceptable.

Table 1 Results of the descriptive and factor analyses (combined dataset for Sydney, Melbourne, and Brisbane)
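The analysis was run in SPSS; for transparency, an equivalent open-source workflow can be sketched in Python with the factor_analyzer package. The rotation method, the number of factors, and the item lists below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of scale total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage; df holds the Likert items as columns.
# kmo_per_item, kmo_overall = calculate_kmo(df)         # adequacy; > 0.6 acceptable
# fa = FactorAnalyzer(n_factors=4, rotation="varimax")  # settings assumed for illustration
# fa.fit(df)
# loadings = pd.DataFrame(fa.loadings_, index=df.columns)
# loadings[loadings.abs() < 0.5] = np.nan               # suppress coefficients under 0.5
# print(cronbach_alpha(df[items_in_factor]))            # reliability per factor (hypothetical item list)
```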

A descriptive analysis disclosed that most of the respondents were concerned about the risks caused by AI technology. The respondents are mainly concerned about “AI machines being used to monitor your activity without your permission or knowledge” (ranked 1, agree), and about “AI machines being used to invade your privacy” (ranked 2, agree). This underlines the increasing concerns of consumers and the related permission requests to access personal information (Degirmenci 2020). Respondents are least concerned about “AI machines becoming more intelligent than humans” (ranked 10, agree), and about “AI machines turning against and trying to destroy humanity” (ranked 11, agree). This could be due to personal experiences already gained with early AI technologies such as smart home appliances; more advanced AI technology that could become a threat to humanity might be too distant a scenario.

We collected qualitative responses from our participants; one participant indicated: “AI is already here but at very early stages”. Other participants emphasised the security and privacy risks: “Extremely concerned about personal data security”, and “high priority is security and privacy concerns”. One of our participants pointed out that the effectiveness of AI technologies depends on access to personal information; this suggests that privacy issues accompanying the diffusion of AI technologies are an important topic for increasing the effectiveness of AI: “It would seem that the effectiveness of AI will depend on the public’s willingness to hand over their dataset”.

Although the respondents are concerned about the risks caused by AI technology, most respondents nevertheless trust AI in their lifestyle. Our respondents mainly trust AI in the workplace (ranked 1, agree) and at home (ranked 2, agree). Trust was lowest towards companies developing and commercialising AI (ranked 5, agree) and towards government agencies using AI (ranked 6, agree). One explanation we found in our qualitative data was that stronger regulations might help to increase the public perception of trust in AI. One participant argued that “it is a subject that needs regulation before implementing instead of chasing technology after it is introduced”. Another participant also emphasised regulatory aspects: “In some situations AI can be very beneficial, but I think there needs to be some quite strong regulation with it to ensure that society does not become too dependent upon it or that it replaces human relations and employment”.

In terms of AI benefits, the factor analysis revealed two factors: (a) AI benefits for urban services and (b) AI benefits for disaster management. Regarding AI benefits for urban services, most of our participants found it beneficial that “AI can help local governments monitor and respond to problems associated with urban infrastructure” (ranked 1, agree), and that “AI can help with efficient delivery of urban services” as well as “AI can help local governments monitor and respond to the environmental and climate crises” (both ranked 2 at the same percentage level, agree). Perceived as least beneficial were the statements that “AI can be used to reduce public sector costs with savings used to reduce rates and other taxes” and “AI can help improve objectivity in the delivery of urban services” (both ranked 7, agree).

In terms of AI benefits for disaster management, most of our participants agreed that “AI can help in emergency services in disaster-related information gathering” (ranked 1, agree), and “AI can help in predicting disasters, and providing early warnings”. The level of agreement was least for the statement that “AI can help in social media analytics to obtain public perception on disasters” (ranked 9, agree), and “AI can be used in gaming applications to increase community disaster awareness” (ranked 10, agree). These findings are in line with the findings of a recent study on AI and disaster management (Kankanamge et al. 2021).

One participant explained the role of the public interest: “If public interest and ideas are taken to work in the best interest for the community that would work best”, and another participant emphasised the importance of control for AI to truly unfold benefits: “Introduction of AI must be controlled by responsible elected representatives. All AI activities must be monitored by an independent agency appointed by the people”. In terms of AI benefits for environmental improvements, one of our participants indicated the following: “I think having AI focus on the environment would be very smart. But to make sure they are pro-environment as opposed to pro-economy/development at any cost”. This statement reveals an important aspect of conflicting priorities regarding AI benefits: benefits for whom, and for which purpose. While AI technology can provide pro-economic advantages, these might conflict with pro-environmental priorities, which is one promising field of emerging and future research (Truby 2020; Vinuesa et al. 2020).

To compare perception differences between Sydney, Melbourne, and Brisbane, we split the data and conducted further descriptive and factor analyses for the three cities (Appendix C, D, E). Participants from all three cities agree mostly that “AI machines being used to monitor your activity without your permission/knowledge” poses the highest risk (ranked 1, agree). While participants from Sydney and Brisbane agree that “AI machines being used to invade your privacy” is another high risk (ranked 2, agree), Melbourne participants agree that “AI machines being hacked and stealing/losing large amounts of your private data” comes second as a risk (ranked 2, agree). In both cases, we can see that participants from all three cities are mostly concerned with risks related to privacy issues.

In terms of AI trust, there is a notable difference between Sydney and Brisbane on the one hand, and Melbourne on the other hand. While Sydney and Brisbane participants trust mostly AI in the workplace (ranked 1, agree), Melbourne participants trust mostly companies developing and commercialising AI (ranked 1, agree). This is surprising because companies developing and commercialising AI are trusted the least by participants from Sydney (ranked 6, agree), and they are also less trusted by participants from Brisbane (ranked 4, agree).

Regarding AI benefits for urban services, participants from all three cities agree that it is most important that “AI can help local governments monitor and respond to problems associated with urban infrastructure” (ranked 1, agree). However, Brisbane participants perceive it equally important that “AI can be used to monitor urban areas and ensure safety and security of all residents” (ranked 1, agree). This is less important for Sydney participants (ranked 5, agree) and for Melbourne participants as well (ranked 4, agree).

Finally, a comparison revealed further insights for AI benefits for disaster management. While emergency services in disaster-related information gathering is the most important AI benefit for disaster management for participants from Melbourne and Brisbane (ranked 1, agree), Sydney participants perceive that it is more important that “AI can help in disaster response and emergency services in rescue operations” and that “AI can help in determining disaster damages and risky constructions and locations” (both ranked 1, agree). For Melbourne participants, it comes second that “AI can help in increasing effectiveness and efficiency of planning and preparedness for disasters” (ranked 2, agree), and for Brisbane participants, the next most important AI benefit for disaster management is that “AI can help in predicting disasters, and providing early warnings” (ranked 2, agree).

As a next step, we converted the scales from the factor analysis into composite variables. To examine the factors that drive public perceptions of AI, we conducted multiple regression analyses with AI risks, AI trust, AI benefits for urban services, and AI benefits for disaster management as dependent variables, and gender, age, education, employment, income, AI knowledge, and AI experience as independent variables (potential drivers) (see Table 2).

Table 2 Path coefficients and significance values of drivers impacting AI perceptions (combined dataset for Sydney, Melbourne, and Brisbane)
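A minimal sketch of this step, using statsmodels, follows. It assumes the composites are item means and that the reported path coefficients are standardised betas obtained by z-scoring the variables; all column names are hypothetical placeholders mirroring Table 2, not the variable names used in the study.

```python
import pandas as pd
import statsmodels.api as sm

def standardized_betas(df: pd.DataFrame, dv: str, ivs: list[str]):
    """OLS on z-scored variables so the coefficients read as path-style betas."""
    cols = [dv] + ivs
    z = (df[cols] - df[cols].mean()) / df[cols].std()  # z-score DV and IVs
    model = sm.OLS(z[dv], sm.add_constant(z[ivs])).fit()
    return model.params, model.pvalues

# Hypothetical usage:
# df["ai_risks"] = df[risk_items].mean(axis=1)  # composite DV (mean of factor items, assumed)
# betas, pvals = standardized_betas(
#     df, dv="ai_risks",
#     ivs=["gender", "age", "education", "employment",
#          "income", "ai_knowledge", "ai_experience"])
```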

The analysis found that for AI risks, AI knowledge is the most significant driver (β = 0.129, p < 0.01), while gender (β = − 0.105, p < 0.05) and age (β = 0.115, p < 0.05) are also significant. From this, we can conclude that people with more knowledge about AI perceive AI to be riskier, while female and older individuals perceive greater AI risks. Regarding AI trust, age is the most significant driver (β = − 0.228, p < 0.001), while gender (β = 0.084, p < 0.05) and AI experience (β = 0.115, p < 0.05) are also significant. We can conclude that AI trust decreases with increasing age, while male and more experienced individuals place more trust in AI.

AI experience is also an important factor for AI benefits for urban services (β = 0.254, p < 0.001) and for AI benefits for disaster management (β = − 0.136, p < 0.01), being the most significant driver for both dependent variables. While there are no other significant drivers for AI benefits for disaster management, age is also significant for AI benefits for urban services (β = 0.112, p < 0.05). It is surprising that AI experience has a negative effect on AI benefits for disaster management, but this might be because disaster management is a specific context of AI benefits, and general AI experience does not imply that individuals have experience with AI for disaster management. In terms of AI benefits for urban services, we can conclude that with increasing AI experience, the perception of AI benefits for urban services increases as well. Further, our results show that older individuals perceive higher AI benefits for urban services.

Again, we split the data and conducted further multiple regression analyses for Sydney, Melbourne, and Brisbane separately (Appendix F, G, H). Most notable differences include that while gender (β = − 0.259, p < 0.001) and AI knowledge (β = 0.191, p < 0.05) are significant drivers for AI risks for Brisbane participants, these factors have no significant impact for Sydney and Melbourne participants. We can observe further differences in perceptions of AI trust. While gender has the strongest impact on AI trust for Sydney participants (β = 0.204, p < 0.01), age is the most important factor for Melbourne (β = − 0.219, p < 0.05) and Brisbane participants (β = − 0.236, p < 0.01). Further, income plays a significant role for AI trust for Melbourne participants (β = 0.205, p < 0.05). In terms of AI benefits for urban services, age (β = 0.281, p < 0.01) and AI knowledge (β = 0.271, p < 0.01) are the most significant drivers for Sydney participants, while for Melbourne participants they are AI experience (β = 0.339, p < 0.001) and age (β = 0.212, p < 0.05), and for Brisbane participants AI experience (β = 0.226, p < 0.01), employment (β = 0.227, p < 0.05), and age (β = − 0.088, p < 0.05). Finally, the results show differences regarding AI benefits for disaster management. While AI experience is a significant driver for Melbourne participants (β = − 0.184, p < 0.05), no significant drivers could be identified for Sydney and Brisbane participants.

In sum, while public perceptions vary depending on the local context, the analysis disclosed the key drivers behind the public perception as gender, age, AI knowledge, and experience.

5 Discussion and conclusion

The literature and media regularly report on the exponential development of AI capabilities and bring our attention to the disruption it is causing, and will continue to cause, to the way businesses are run, governments operate, local services are delivered, society functions, and the like (Kile 2013; Makridakis 2017). Given the increasing AI disruption on many fronts of public life, government authorities need to pay increasing attention to public opinions and sentiments towards AI (Cath et al. 2018; Gao et al. 2020; Lee et al. 2020). Furthermore, as stated by Zhai et al. (2020, 140), “public perceptions and concerns about AI are important because the success of any emergent technology depends in large part on public acceptance”.

Our study findings in the case of Australian cities contribute to the currently limited knowledge pool on public perceptions of AI and shed light on the main drivers behind those perceptions. The empirical analysis of the Australian cases revealed the following insights.

First, the public is concerned about AI monitoring their activity without their permission and invading their privacy. Several studies have already reported the privacy issue as one of the main hurdles in technology adoption, including AI (Mazurek and Małagocka 2019; Radhakrishnan and Chattopadhyay 2020). While these concerns are valid, particularly where AI practices in some authoritarian regimes are concerned (Stark 2021), they also provide an opportunity to place pressure on public authorities and the private sector to develop more ethical and responsible AI frameworks and applications (Constantinescu et al. 2021; Yigitcanlar et al. 2021a). This finding is in line with other studies that highlighted public anxiety about AI (Vu and Lim 2021).

Moreover, the Australian Government has attempted to address the increasing concerns around data privacy issues, particularly during the last few years, through legislative measures. For example, the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 reinforced the Privacy Act 1988 (https://www.ag.gov.au/rights-and-protections/privacy). Additionally, Australia’s AI Ethics Framework aims to ensure AI is safe, secure, and reliable (https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework). Despite these and other frameworks (e.g. Australia’s AI Roadmap, Australia’s AI Action Plan, Data Governance Framework 2021), changes in public perceptions take time and require trust to be built. Hence, effective practice of the policy is required as much as legislative attempts.

Second, the public is less concerned about AI becoming more intelligent than humans and turning against and destroying humanity. Despite the catastrophic AI futures imagined for humankind by science fiction literature and movies, and by some entrepreneurs (e.g. Elon Musk) and scientists (e.g. Stephen Hawking), most of society seems to have developed the more optimistic view that “humans and AI might evolve together” just fine (Miller 2019).

Third, the public trusts AI in their lifestyle, mainly in the workplace and at home. Despite some privacy concerns raised, the convenience and efficiency AI offers make it a household technology for business and personal use. Studies on smart homes provide further insights into the underlying motivational factors for trust in and adoption of digital technologies, including AI. For example, a study by Li et al. (2021) identified the main motivations to adopt smart home technologies—including AI and IoT—as efficiency management, better services, cost savings and benefits, and enhanced quality of life.

Fourth, the public shows less trust towards companies developing and commercialising AI and government agencies using AI. The lack of trust in AI companies may be linked to the social media company scandals, e.g. Facebook’s intervention in Brexit and the US and Brazil presidential elections, and a genocide incited on Facebook in Myanmar (Isaak et al. 2018). The lack of trust in government AI applications may be due to a lack of fairness in using AI for public governance, a lack of transparency, and unclear responsibility and accountability (Cui and Wu 2021). Recent failures of AI use in government may impose negative impacts on governments and society (Zuiderwijk et al. 2021).

Fifth, the public sees value in what AI currently offers and could potentially bring to urban services and disaster management. Recently conducted studies by Nili et al. (2022), Sanchez et al. (2022), and Yigitcanlar et al. (2022b) revealed insights into public sector experiences of deploying AI and offer a wide spectrum of AI adoption prospects for local governments: creating efficiencies; tackling complexity; managing repetitive tasks, processes, and decisions; automating routine decisions; minimising errors; and improving productivity. These particularly benefit customer services, cybersecurity, policy and decision-making, environmental and development control, service and infrastructure management, urban planning, and performance review.

In terms of disaster management, AI is applied in crisis event detection, understanding public reactions, increasing disaster awareness, assessing vulnerabilities, serious games/gamified applications, disaster-related data mining and knowledge management, big data analytics from the web or social web, situation recognition, eyewitness identification, crisis communication, disaster relief activities (e.g. drone-based disaster relief and human/life-sign detecting drones), mental health chatbots, crowd evacuation, damage recognition and assessment, detecting socioeconomic recovery, and human loss estimation (Kankanamge et al. 2021). It seems that the use of AI in greater-good areas such as disaster management is perceived as less critical/negative than the use of AI in for-profit areas such as product marketing and sales (De Bruyn et al. 2020; Guha et al. 2021).

Sixth, the literature highlights the role of individual factors in AI perceptions. For instance, according to Vu and Lim (2021), “individual factors were strong in shaping public attitude towards AI” (p. 9). Our study identified these individual factors and disclosed that the main drivers behind the public perception of AI include the gender, age, AI knowledge, and AI experience differences of individuals. To elaborate further: people with more AI knowledge perceive AI to be riskier; female and older individuals perceive greater AI risks; AI trust decreases with increasing age; and male and more experienced individuals put more trust in AI. These findings are in line with other studies that investigated AI and society in the context of Australia. For instance, according to a study by Selwyn and Gallo-Cordoba (2021, p. 9), “males were twice as likely to describe themselves as ‘knowing a lot’ about AI rather than ‘never heard of it’ than females… Respondents over 35 years were almost twice as less likely to describe themselves as ‘knowing a lot’ about AI rather than ‘never heard of it’ than those in the 18–24 years age range”. In other words, younger males generally have a better understanding of and more experience with AI; hence, they are more aware of AI risks but at the same time less vulnerable to AI threats, owing to the awareness gained through knowledge and experience.

Next, public perceptions of AI may vary depending on the local context, even within the same country. Our study reported some variations in public perceptions among the three largest Australian cities. The variances could be even bigger in the international context. On that very point, Kelley et al. (2021, p. 630) underlined that while “Australian public opinions regarding AI might be similar to those in US and Canada (i.e., both countries with similar Anglophone colonial cultures) … they are less congruent with contrasting cultures such as South Korea, Nigeria and India”. Similarly, in countries where AI is used by the state more authoritatively, such as China’s public surveillance with facial recognition practice, the public is more sceptical of AI (Cui and Wu 2021; Kostka et al. 2021).

While AI has attracted massive interest from businesses, governments, and societies across the globe in recent years, little is still known about the drivers behind the public perception of this technology (Lozano et al. 2021). This study sheds some light on this understudied area in the case study context of Australian cities. The insights generated from this study inform authorities in developing policies to minimise public concerns and maximise public awareness and education on AI.

Last, when interpreting the findings of the study, the following limitations should be noted: (a) while the sample size is adequate for the survey, larger participant numbers might have surfaced additional perspectives; (b) the study focused on three major Australian cities, and while the statistical representation requirements are met, expanding the study to all geographies of the country might have provided extended insights; (c) there are some representation differences between the study participant characteristics and the actual resident characteristics of the three case cities, which might have some impact on the results; (d) the study findings are only quantitatively assessed; the open-ended questions’ answers are not factored into this paper, as these data will be analysed thematically and reported in another paper (however, the authors have read and checked all qualitative responses to make sure they do not contradict the findings reported in this paper); and (e) there might be unconscious bias in interpreting the study findings.

Finally, this study has only scratched the surface of understanding what the public thinks of AI and how these opinions are formed. Our future research, building further on the study at hand, will conduct empirical studies to better understand the levels of societal AI literacy—that is, “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (Long and Magerko 2020, p. 2)—and recommend policy directions to enhance AI literacy in Australia and overseas.