Designing lawful machine behaviour: Roboticists' legal concerns

Regulatory requirements weighing on roboticists are becoming heavier, and at the same time, the activity of producing robots is theorized as creating new types of legal risks. Roboticists become responsible not only for compliance with a set of regulatory objectives


Introduction
Both artificial intelligence (AI) and robotics can be defined as general purpose technologies that have applications in nearly all fields of human activity,2 suggesting that the kinds of problems that those technologies create could very well blossom in many different industries or activities simultaneously. While the European Union is actively working on a regulatory package aimed at addressing some of these issues,3 those texts have yet to be adopted. Anticipating the importance of current and future regulatory issues, major actors in the tech industry have been calling for more precise regulation.4 Until clear and comprehensive regulatory guidance has been created, producers of AI and robotics alike may be left in a state of uncertainty that drains their ability to develop and deploy their products.5 We have data on policy makers' and scholars' legal concerns in the form of their respective publications, but we lack an understanding of what roboticists consider legally concerning, which this article addresses.
There has been extensive engagement with AI and robotics by legal scholars. Some of the most influential pieces have argued that the legal challenge with robotics is linked to their physicality and their capacity for emergent behaviour,6 while others have argued that it is the salience of the activity performed by the robot that is the source of legal disruption.7 Other publications have explored general, conceptual, and theoretical legal implications of AI,8 or more specific issues such as liability9 and healthcare,10 amongst many other angles of research.11 There have been many publications in which researchers in AI and robotics have stated what they found problematic in general,12 and at more specific ethical13 and legal levels.14 A few publications can also be found from academics in other engineering fields about their legal concerns and understanding.15 Similarly, we know of legal practitioners' concerns,16 and those of policy makers.17 To our knowledge, there is no empirical exploration of how commercial engineers understand the law and what they are concerned with.
This paper highlights differences in the understanding of legal and technological concepts between roboticists and practicing lawyers. Engineers conceptualize legal concerns first and foremost as safety concerns, which are understood and balanced in a nexus that also contains the functionality and economy of the product. Within that nexus, roboticists themselves have diverging understandings of the legal value of industrial standards, CE markings, and contractual disclaimers on the uses of their products. The paper also reveals and explains their lack of trust in AI. The hope is that the paper also helps lawyers, academics, and policy makers understand more precisely the way roboticists work and think, and by doing so facilitates communication between our professions.

Background
The technology behind robots is complex, modular, and varied. This is because a single robot can possess many different pieces of software,18 hardware,19 sensors,20 and motors. Robots may possess AI modules and can be integrated into Internet of Things (IoT) networks. They have been used in industrial contexts for a long time now, most iconically in the automotive industry,21 where they are usually fenced off from human presence. The newest advancement is collaborative robots (cobots), which are meant to interact directly with human workers. Cobots include industrial robots and professional service robots, such as Lifescience-Robotics' ROBERT, a professional service physiotherapy assistant robot.22 Cobots nowadays are developed for many different sectors of industry: in healthcare, like the ROBERT robot; in industrial contexts, like Enabled Robotics' ER-Lite;23 on construction sites, like Kobots' Amigo;24 or in agriculture, like Farmdroid's Produktbladet.25 Next to cobots, we can also see the development of social robots that possess many different functions, see for example SoftBank Robotics' Pepper.26 This suggests that, depending on its development, robotics could become a general-purpose technology,27 meaning that it will find applications in all fields of human activity and sensibly change human life.28 This heightens the necessity of understanding the people who build them. The complexity and modularity of the technology, as well as the breadth of application, make for a very long and complex value chain of actors behind products. The most important of these actors is naturally the producer of a specific robot, as Mobile Industrial Robots (MiR) is for the MiR1000.29 MiR cannot produce every single element of their product, and thus buy hardware, sensors, and other components from their suppliers, which sometimes includes using open source software modules. Some robotics companies specialize in building modules that complement other robots. This is the case, for instance, for Nord Modules, who build structures that fit the MiR robots in order to turn them into pallet carriers or other tools;30 what Nord Modules thus sells is the physical modules and the software that need to be integrated into a final solution in order for the MiR robot to function as a pallet carrier.

18 For example for analytics, control, safety, etc.
19 Hard drive, processors, Wi-Fi connector, GPS, etc.
20 Laser, pressure sensors in a robot arm, etc.
21 Mariane Davids, 'A Brief History of Robots in Manufacturing' (Robotiq, 17 July 2017) <blog.robotiq.com/a-brief-history-of-robots-in-manufacturing> accessed 1 August 2022.
22 <lifescience-robotics.com/meet-robert> accessed 1 October 2020.
23 <enabled-robotics.com/robots> accessed 1 October 2020.
24 <kobots.dk/product> accessed 1 October 2020.
25 <farmdroid.dk/en/product> accessed 1 October 2020.
26 <softbankrobotics.com/emea/en/pepper> accessed 1 October 2020.
27 Erik Brynjolfsson, Daniel Rock and Chad Syverson, 'Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics' in Ajay Agrawal, Joshua Gans and Avi Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda (The University of Chicago Press 2019) 39-41.
28 See generally Agrawal, Gans and Goldfarb (n 2) ch 1. This should not be confused with artificial general intelligence, the degree of machine intelligence that would make it similar in scope and depth to human intelligence: Stuart J Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Global edn, Pearson 2016).
29 <mobile-industrial-robots.com/en/solutions/robots>
Enabled Robotics similarly develops software and physical modules that allow a MiR robot to be fused together with Universal Robots' robotic arm (UR).31 Enabled Robotics thus only sells software and the physical structure that assembles the MiR and UR robots together, but this is still not a finished product as it does not possess any gripper or tool at the end of the robot arm. In fact, those tools are produced by yet other businesses, and assembled or 'integrated' together by the integrator company. Those companies specialize in tailoring solutions to the environment and industry of the prospective clients. They will set up the robot in the factory, program it, and give it all the tools and grippers it needs to do what the client wishes.32 In essence, producers of robots like MiR, UR, and Enabled Robotics try to convince integrators to use their products as solutions for the end-customers. A last noticeable actor in the chain is consultants. Those usually specialize in advising the end-customers or the other actors of the chain, and in helping them improve their products in terms of efficacy or safety.33

Danish roboticists are an interesting group to study, because they are all concentrated in the same area of Denmark, have mostly gone through the same engineering schools, and have the same cultural background. This lets us concentrate more closely on their understanding of the law, without having to differentiate them individually. From a technology perspective, Denmark is a powerhouse for robotics and is likely to have an important role in shaping the future of robotics in Europe. Because Denmark is a rather small market with its 5.8 million population, businesses that start here do so with a mind to export, and this might explain why Danish robotics is so varied in applications and industries.34 The wider Danish public tends to be appreciative of the development of the technology, and the powerful Dansk Metal trade union for industry, telecom, and transportation openly encourages automation, as they see this as a means to secure production in Denmark rather than see it flee to other countries.35 In 2018, Denmark had 240 robots per 10,000 employees in manufacturing, making it the sixth most densely roboticized economy in the world.36 Most of the industry is clustered in Odense, on the island of Fyn, with more than 300 companies and two of the largest cobot producers in the world (Universal Robots and Mobile Industrial Robots).37 The industry is supported by the University of Southern Denmark, which has received substantial private and public funding for developing robotics.38

State of the art
Engineers are an understudied and non-homogeneous group, both at a national and a global level.39 Although engineers are comparable in numbers to scientists, they have not been as much of a focus of sociological and anthropological study,40 which is not necessarily surprising but serves to illustrate the lack of literature on that profession as compared to other scientific professions. They have been understudied from a sociological perspective, and even less so from a socio-legal perspective. What we do know is that engineers need to master many different competences at the same time;41 roboticists are a good example of this, as they need to master mechatronics and complex computing methods.42 Furthermore, engineers of different nationalities are very different: trained differently, and valued differently by their respective societies. One constant amongst engineering professions is that there is a social expectation of engineers to develop and improve the performance of human constructs.43 Robotics could be a game changer for the profession's identity, as now they will need not only to develop and improve performance, but also to improve behaviour by mechanizing what was once human behaviour and activities.44 Engineers are client-driven poly-technicians who implement science into society. Beyond the questions raised above about the professional identity of engineers, roboticists as a category can be justified by the commonality of their enterprise (building robots), if not by their profession or skills. While engineers are poly-technicians at heart, the objective of building a robot tends to require a greater level of polymastery than most previous engineering applications, because contemporary robots require mechatronics, AI, and IoT technologies.45

Nichols describes that they have a responsibility not only […]46 but also to re-evaluate engineering approaches in the light of economic feasibility and of the safety specificities of the projects at hand, which includes foreseeing misuse of their products rather than just the intended use. This suggests that engineers are continually client-driven, adapting to the newest case to solve. It is part of their practices to perform testing and prototyping in order to make many attempts at solving an issue.47 Efficiency, in the end, might be the one staple value of engineering.48

Legal consciousness studies can be used to frame this research. This field consists of studying the law as experienced by people through their lives.49 Legal consciousness tends to study how laypersons in general experience grand law, courts, and the notion of binding contracts in their everyday life.50 In this article, we looked more specifically at professional commonplace legality amongst roboticist engineers.51 We take a bottom-up approach, and try to understand not only the knowledge and awareness of engineers, but also their behaviours that have an origin in law, where they act in reference to law without necessarily being aware of it.52 In a sense, what we do is regulatory consciousness rather than legal consciousness: it has little to do with ideals.53 Our research intends to measure knowledge and ignorance of the law, but it also goes further into understanding what place is given to law within professional practice. Since we do not study the relationship of engineers to the Law in general, but to the law and regulations that tie to their practice, our study does more than examine legal awareness, but less than legal consciousness.54

Legal consciousness studies are used in this article to highlight our findings from an SLS theoretical angle, but it was not our intention to contribute to developing new knowledge in legal consciousness, only to provide foundational data on the relationship of roboticists to law, and to highlight the differences in perception between us and them.
AI has been a hot topic in legal academia for quite some time.55 Legal academic interest in AI and robotics was renewed with the start of conversations on autonomous weapons.56 The two most influential pieces57 discussed the nature of the legal disruption of AI, where Calo attributed the legal difficulties to the intrinsic characteristics of the technology, whereas Balkin attributed them to the interaction between the machine, its environment, and the legal system.58 Since this conversation on the theory of contemporary AI law, there has been a plethora of publications in specialized fields of law and in continuation of that more foundational conversation.59 Similar publications have been made by legal practitioners,60 and by policy makers.61

Our expectations of the data were shaped by the results of our theoretical research in the sociology of engineering, and contrasted with our knowledge of the AI and robolaw academic and policy debates. We expected that safety would be a standalone concern for engineers, and that part of this safety consideration would be linked to concerns about legal risks. We expected that this would in practice translate into thorough testing and assessment practices that would deal with risks and aim to reduce unforeseeability as much as possible (unforeseeability which engineers would be aware of and accept as part of the process), but that those testing practices would be unsuited to machine behaviour. We expected as well that they would be aware of the risks coming from the integration of their product within complex value chains, and would have technical and legal strategies to deal with those consequences. We finally expected that they would be able to clearly differentiate legal regulation from industrial standards, and would understand the legal value of industrial standards.

57 Calo (n 6); and the answer to that article: Balkin (n 7).

Method and data
Our sample comprises qualitative single interviews with ten male chief executive officers (CEOs) or chief technical officers (CTOs). Each interview lasted 45 to 60 minutes. The interviews were performed digitally via Teams or Zoom, so that both interviewer and informant could see each other, and recorded in a video format only we had access to. The data was collected June-August 2020, before the proposed AI regulation was published.62 The files were later transcribed, and all names, place names, and other sensitive information were anonymized. Interviews were conducted in English, sometimes using Danish phrases if an English word was lacking; all original Danish quotes in the interviews were translated to English in the article. The format was structured around an interview guide, which contained 35 questions on various topics. Beyond basic questions on the nature, uses, and origins of their products, we asked our informants about their concerns related to different issues. Those include: safety, legal implications of safety failures, and types of safety failures; relations between members of the value chain behind the product, and recourse procedures between them; their approach to European regulations like the Machinery Directive;63 and their approach to industrial standards, and the value they ascribe to those standards. We also enquired about their uses of various AI methods (inductive and deductive methods), the types of risks AI creates, how they dealt with those types of risks, whether those risks were different at a technical or legal level from those generated by previous technologies, and the types of risks created by the growing complexity and inclusion of various different technologies and their interdependence.

[…]spective in 2019 (Publications Office of the European Union 2020) <https://op.europa.eu/publication/manifestation_identifier/PUB_KJNA30102ENN> accessed 21 August 2020.
62 AI Act (2021/0106 (COD)).
63 Machinery Directive 2006 (2006/42/EC).
When relevant, we also enquired about their understanding of and approach to the ethics of technology, and the legal value they gave to those considerations. Our interview guide was designed to explore the daily professional engineering practices of legality; we wanted to ask engineers about their legal problems and needs, to see where their understanding intersects with ours and where the lawyer's approach intersects with that of engineers. We coupled their understanding of legality to institutional law, and in retrospect this approach might have been a little intimidating to our informants. In that sense, our study runs contrary to what some legal consciousness scholars have been doing by trying to capture a picture of legality untainted by official legal settings and professional lawyers.64

As mentioned, our informants were almost exclusively chief executive officers (CEOs) or chief technical officers (CTOs). This choice was motivated by the fact that persons holding those positions, especially when they are also founders, were the ones most likely to have made design decisions in relation to regulation, industrial standards, or risks of safety failures. Since the robotics community of Odense is exclusively constituted of small to medium sized companies (the biggest one having fewer than 500 employees), CxOs (C-suite officers, which includes CEOs, CTOs, CFOs, COOs, etc.) were also more likely to be part of the pool. All our informants were founders, except for Ragnar from the consultancy firm. The downside of interviewing chief x officers was that their replies and explanations seemed to revert to sales pitches at times, and that was the case even though we had made it clear the data would be anonymized.65

We decided to concentrate our efforts on robotics engineers rather than including pure AI software engineers because robots, and especially collaborative robots, are the types of devices that showcase the strongest AI modules considering the salience of the activity performed by the device (moving around industrial or hospital settings and interacting with human workers). While stronger AI modules can be found in legaltech, fintech, or medical analytics, we wanted to focus on robots with a physical embodiment that may physically deal damage to persons or property. That choice to explore robots because of physical damage has in turn shaped the questions and analysis, and brought a focus on safety failures. Cobot engineers are arguably the ones most representative of how future generations of engineers developing street-walking robots will think and work on their creations. Technology-wise, we focused on collaborative industrial robots, and tried to exclude purely industrial fenced robots. We also admitted professional service robots (robots within logistics, cleaning, health, etc.) and personal service robots (care, exoskeletons for daily purposes, etc.). We excluded purely digital technologies, one potential exception being Torsten's product, a software that analyses sensor data from a robot and manipulates the robot's arm accordingly. We excluded drones (air- or waterborne) and products that are directly piloted, which pose different safety concerns from cobots. Most businesses that qualified technology-wise within Denmark tended to be fairly young and small. We eliminated businesses that were younger than two years, as those were unlikely to have a detailed approach to law, regulation, and industrial standards. Another exceptional case in our pool of informants is Ragnar, who has a managing position within a consultancy that helps robot businesses with their safety concerns. We felt his input was relevant because he is directly in touch with robotics engineers, and with the customers of those robotics companies. Sampling our informants was a difficult task. Our initial attempts to contact robotics clusters and incubators in Denmark, like Robocluster66 or Odense Robotics,67 proved unfruitful, as we were met with apprehension and defensiveness from the managers of those groups. Our potential informants seemed wary when a legal scholar and a practicing lawyer were asking them to participate in a research project. Facing that difficulty, the process was facilitated by a personal contact of ours who works on a transversal robotics safety project.68 That person made introductions for us to a number of his professional contacts within companies that suited the profile we were looking for, and focused his introductions on our academic rather than legal practice backgrounds.

64 cf Ewick and Silbey (n 49) 23.
65 See Section 5.4, Law and language conflicts.
This led to a first round of eleven requests for interviews, from which we obtained three positive answers and three rejections. Two personal contacts of ours accepted to participate. We then contacted the five persons who never replied to the first round of requests made by our contact, from which we obtained two additional positive answers. In a third attempt, we ourselves contacted nine CEOs from a new pool of companies, with three positive answers. None of the integrators we contacted accepted. One female CEO was contacted, but she asked her colleague Sune to do the interview. Ten interviews make for a small sample, and we do not intend to generalize our findings. However, we believe that those ten businesses are representative of the industry in Denmark and can be used for the purposes of this article. This is because this paper is in essence exploratory, with a population never interviewed about their legal concerns before, in an industry that has not been widely studied from a socio-technological and legal angle, whether in Denmark or in Europe more generally.

66 <robocluster.dk/> last visited 27 August 2020.
67 <odenserobotics.dk/> last visited 27 August 2020.
68 <safearoundrobots.com/> last visited 27 August 2020.

Analysis
Our analysis is presented in four parts. The first section will focus on the way our informants perceived the wider integration of law as a component part of safety. In the second section, we will examine the informants' understanding of compliance regimes, where law to them is essentially a set of technical benchmarks that their final products need to demonstrably meet. The third section focuses on exploring the informants' perspective on contracts: what place they give to those legal items, and how they appreciate the value, protections, or limits of those legal tools. Finally, we look at elements of language difference between lawyers and engineers, and study the ways base concepts are understood differently by the two populations.

Law as safety

Our interviews revealed that engineers first and foremost place law as a subset of safety concerns. Safety itself is balanced against superior considerations for the functionality of the product and for the economy of their commercial projects. Law as safety explores these themes through concrete examples like CE marking requirements, risks, testing, and machine behaviour. The main difference between law as safety and law as compliance is that law as safety looks at the technical angle of safety, and explores how engineers fit it within their work and the legal values they ascribe to it. Law as compliance investigates a wider spectrum of compliance requirements that go beyond safety, and how engineers perceive industrial standards amongst other private governance tools in their engineering practices, including when accomplishing safety. There is some level of overlap between Sections 5.1 and 5.3; however, we believe that the differentiation reflects the way the engineers themselves frame questions of safety in comparison to questions of compliance. Law as safety exposes the relationship that engineers construct between the need for a safe product and the legal questions and values associated with safety.
Roboticists place safety in balance with other concerns. Halfdan says 'when you're designing products […] there's many concerns because it [has] to be safe, but it also [has] to be easy to produce'. Within the design process, the importance given to safety is low at first, before becoming more relevant. On that matter, he says 'The first step, [is that] I try to design the product, I don't start reading about safety. I'm just building safety that I know we have to do, all the basic safety. And when I have a product, then I will go to my colleagues [for help]'. He later summarized his thoughts: 'the compromise is often, I could easily make it safe, but no customer will buy it because it's too expensive.' As such, safety is first and foremost a technical concern; it is a legal concern only at a secondary level, as Torsten puts it succinctly: 'safety is what we normally think of when we talk about legal issues'. While they do place it at that level, they did not give further indications of where they placed safety specifically within the regulatory landscape. This could be observed, for instance, when they did not associate the concept of law with safety regulations they need to respect, like the Machinery Directive, or with standards that facilitate the compliance process. A few informants offered examples of specific elements their products needed to comply with for the CE marking, such as the maximum torque applied by a component or the maximum force applied by a sharp object, amongst others that relate to the content of Annex I of the Machinery Directive (2006/42/EC). However, they only mentioned those when asked about CE markings, not when they were questioned on designing safety for their products.69 Our perspective as lawyers is fundamentally different on those issues, where we tend to label safety regulation and their related standards as part of the legal spectrum.
Spurred by scholarship on the changing nature of risks with regard to AI, towards a greater share of unforeseeability,70 we asked our informants what place they gave to unforeseeable risks generated by their systems' AI modules. Magnus only considered unforeseeability as external to his system:

Then there are risks of what will happen if the machine falls, for example. If somebody drives into it with a forklift, but that can happen to anything that [is] standing on legs, there could be a scaffold, there could be a pile of boards that can collapse and fall over you. That would be some of the unpredictable things.
Our informants showed a lot of confidence in the strength of their redundant safety mechanisms, where those would either prevent the occurrence of a safety failure, or mitigate its consequences enough that no damage would be caused. We asked them if they were worried by what some authors have qualified as a changing nature of risks following the growing complexity of technology,71 especially with the introduction of machine learning types of algorithms in their products.72 Bo states on that matter: 'Every component will fail eventually, if you have enough running. You need to take that into account, but our case was very simple because if something failed then [the system would] just stop.' Bo was aware of the possibility of unforeseeable failures and gave an explanation as to why he and his colleagues sometimes refused to recognize it: 'You cannot go and say that you have unforeseeable failures, [because] that is very hard for the standard industry to accept. Because if it is unforeseeable, then you cannot explain it.' Bo's answer could even be used to explain the responses of his fellow CxO informants, who could not recognize unforeseeability out of commercial concern, a possibility we expressed earlier in this article.
Scholars have been wondering about the attributes of machine behaviour,73 and about the agentic nature of machines.74 This is visible, for instance, with Saudi Arabia granting a robot citizenship,75 or with debates on legal personhood and the granting of rights to robots.76 Fitting these debates into the commercial world of cobot engineers, we asked our informants if they ever considered whether their machines were able to act lawfully, to make lawful decisions,77 or whether they were programming them to be able to do so:

Interviewer: 'You have the security system that makes the robot stop, but how do you make sure in the first place that the part of the robot that makes decisions is doing the good decisions, i.e. is doing lawful decisions?'
Bo: 'We don't. We just put them on algorithms to move around.'

Halfdan answered similarly while referring to 'the combination of the […] components' in their products. Asbjørn: 'lawful, you say? Of course we have all of the usual disclaimers in our manual.' This clearly shows a great gap between some debates in academia and the commercial engineer's entrepreneurial projects. In this case, lawyers (scholars, and us authors as well) could be seen as projecting a sci-fi infused human paradigm of legal analysis over the machines, focusing on the decisions and actions of (human) agents and the consequences thereof. On the engineers' side, we could interpret their reactions as indicative of their lack of interest in such questions, or their lack of exposure to more metaphysical debates on the nature of AI. Their products are seen as the sum total of the components they put together, and nothing more. From their perspective, they are only building machines that need to be safe; they are not building machines whose behaviour needs to be safe or lawful.78 To them, there is no fundamental difference from older, mechanistic types of industrial robots, which were fenced because of their inability to be safe when humans are around. However, this question of machine behaviour as a distinct and emergent feature of robotics, which goes beyond the mere sum total safety of the components assembled, could become a relevant trope for the analysis of engineering practices.

73 […] Challenges for the Legal Regulation of Machine Behaviour' (PhD diss, University of Copenhagen 2020).
In this section, we explored the general relationship of our informants to law. We have been able to observe that they tend to consider law as a subset of safety, itself balanced against functionality and economy. This shows in their engineering practices, where they tend to see testing as a variable of adjustment. Engineers and lawyers have different understandings of the types of risks that are generated by newer technologies: the former have strong beliefs in their ability to remedy all safety risks, and do not want to recognize the existence of unforeseeability. Those two populations also have different perspectives on the changing nature of technology and whether AI changes anything at a more conceptual level; it seems lawyers might be a bit too influenced by science fiction, and it could also be that our commercial engineers are not ready to see themselves as building anything more than simple, mechanistic machines.

Law as contracts
In this section, we explore how engineers perceive contractual matters and the contractual relationships between the various actors of the value chain. It studies how safety resurfaces in those relationships through the lens of liabilities and of contractual content aimed at dealing with liabilities. Robots being complex modular products, 79 there are often long, complex value chains behind a final, integrated product. 80 Halfdan notes this intertwining, and his reliance on integrators, when he says 'If your product doesn't work or if you lied to one integrator, [then] everyone else, all the other integrators [will] know immediately [and refuse to buy your product].' However, producers also displace a lot of responsibility onto those same integrators, whom they believe to be responsible for the end product that is integrated into an industrial setting. Asbjørn states: 'We are able to do a CE marking of it, but then specifying as soon as [integrators] make some significant modifications [it] could be [that] they mount a different tool or something on there. Then they need to recertify it. Then they need to do a CE marking again, because if you put a sharp tool on the end of course safety changes a lot compared to if we do not have such a tool on.'
Because legal relations between the various actors of the value chain are becoming more intricate, owing to the growing complexity and tighter coupling of contemporary robotics, 81 we asked our informants how they assessed their relations with their suppliers and clients, and whether they were seeing any evolutions. Bo stated: 'We don't test every part that comes in, because it takes too long and that is up to the supplier. We try to make them legally responsible if it doesn't comply to the requirements there, but that's sometimes hard.' Considering the possible independent evolutions of the component parts he assembles together, Asbjørn is not worried that updates or other modifications from his suppliers would affect the safety of his product: 'for all [of] them, they have safety system[s that] are separate from the main control architectures […] I think as long as they are not changing their safety interfaces, then I think the safety system should be up and working as expected'.
Here, Asbjørn believes that only changes to the safety components of his suppliers' systems would affect his own product's safety, whereas functionality changes would not. This remark follows the stance that their safety design would always prevent damages, and also the stance that only modifications made down the value chain could generate real accidents. He continues: 'I think most of [those changes, ed.] would have more to do with the thing not working. Which of course could also give us a liability, not necessarily related to safety.' Thus, he recognizes some legal risks for his products, but is very confident in their safety, regardless of changes made by his suppliers, over which he has little control.
Most of the producers, including Asbjørn, build products that are designed to be modifiable, simply because they are not finished. Except for a few (Magnus and Sune), all the other producers were selling products that could not operate standalone, but needed some kind of integration with other parts. This could be, for instance, a mobile robot unit that is able to drive around, but that requires some kind of platform on top of it to be able to pick up and carry pallets. From their perspective, they are not responsible for a product once it is modified, even if the product is made to be modified. In practice, this means that they see the integrator as the one bearing the final responsibility for the whole product. We can explain this belief by looking at the previous technologies this population has been working on, namely the fixed, fenced, deterministic industrial robots we are used to seeing in factory settings. Since they do not see an evolution in the nature of the risks created by their technologies, they have not needed to re-evaluate the distribution of responsibility along the value chain.
Those beliefs crystallized into disclaimers of responsibility following modifications, written into their sales contracts. Our practitioner perspective on this matter differs: we see a distribution of responsibility between those actors, and such clauses would probably be struck from a contract by a judge for imbalance and unfairness. We explored this question further by asking our informants whether they would be concerned about recourse actions from actors down the value chain in case of safety failure and damages. Their responses were that the only possible case of safety failure was their product being badly modified by an incompetent other actor, in which case they were not afraid of a recourse action, as their disclaimers would protect them from liability. All the informants had their companies insured against product defect claims regardless.
Bo stated on that matter: 'if they go in and hamper with that one, they are changing the machine so much that it's not our [machine any longer]. What we interfaced and what we allowed them to do could never affect the basic core.' While this attitude could be explained by engineering culture, another potential explanation is that our informants were CxOs falling back into a commercial stance defending the quality of their engineering; however, that stance would not work well as a sales pitch to the integrators, who are the actors selling their products to users. This tension will be further studied in Section 5.4. Halfdan mentions that even though they had identified a crucial safety failure in their product, they did not warn their customers about it. Instead, they tried to fix the error in the software and updated the products they had already sold, hoping no failure would happen. Whether the informant was aware of the legal dubiousness of that decision was unclear. 'Hopefully [no] customer will see that failure […] I don't think any customer [has] tried that failure yet. And we are trying to work backwards and […] fix the software and send it to the customers.' This perspective from Halfdan could be linked to his comments about losing the trust of the integrators. Being transparent about the failure could bring shame to Halfdan and make him lose the trust of some of his customers, so he fixes it and hopes for the best in the meanwhile. From our perspective, this type of practice would at the least constitute bad faith, and could even amount to a form of breach of contract.
In this section, we detailed the informants' understanding of various contractual objects and relations. They tend to displace responsibility down the value chain, and make sure this is so through contractual disclaimers of responsibility. Our practitioner perspective is that such disclaimers do not offer the kind of legal protection the informants believe them to, whether up or down the value chain. This could be because they do not see a change in the nature of the risk brought by collaborative robotics and by artificial intelligence.

Law as compliance
In this section, we explore our informants' perception of regulatory compliance. This includes matters like the Machinery Directive's requirements for testing, for CE marking, etc. Law as compliance shows the extent to which regulatory matters are a legal necessity in their eyes, but one that comes under the form of a checklist, where there is no need to do better than good enough. While it deals extensively with matters of safety, 'law as safety' is different from 'law as compliance' in our informants' eyes, as the first follows a logic of obligation of results, whereas the second is an obligation of means.
While the group was homogeneous in blaming the integrators, their understanding of the legal and technical value of safety standards was not homogeneous. Halfdan recognized that he did not know everything about safety or regulation, and consults other agencies to help deliver on that aspect: 'Then I will go to my colleagues or even, for example, contact [consultancy company A] and have a meeting with them. And then ask "ok, what kind of regulation do I have to observe here?" Because to be honest, I don't know all these rules. But I know a lot of basic rules and that's what I've taken care of the first time, in the beginning.'
For Asbjørn, compliance with industrial standards entails compliance with the law: 'once we comply with those [standards] […] we have done what we need to do'. Asbjørn is also the only informant who gives a probative value to standards compliance in a liability context: 'if you comply with the standards, you also basically shift the responsibility for proving if something goes wrong.' Other informants did attribute to standards a value as proof of compliance, but not one for liabilities or judicial assessment.
What was more surprising was the precision of his language in this declaration: 'That if you follow the standards and something goes wrong, they need to prove that it's because you didn't do a good job. But if you don't follow the standards, then you need to prove that you did a good job. So there is this […] [reverse burden of proof].' He later recognized 'I don't have a strong legal background so I don't know if that holds up'; this backtracking might have been influenced by his knowledge of the legal expertise of one of the two interviewers. Halfdan, on the other hand, did not give standards a legal value. He added later: 'You will have [fewer] mistakes in your programming [when following that standard]. But I don't think there [are], you know, [any] directly legal things in it.' Asbjørn's statements served as an important contrast. Potentially, it is his former position as an academic researcher that explains this exceptional awareness.
While the informants give a lot of value to industrial standards as safety guides, they are also willing to compromise on adhering to them. Halfdan says 'we are really trying hard to follow these rules. But, you know, sometimes you have to compromise', which he then defines as fencing off uses of the system, or adding warnings on the use of his system: 'if you can't solve it, the mechanical or [the] software, then you have to really make a good use [of] a manual to describe how to use the machinery.' One informant, Bo, recognized some legal value in standards, but expressed that they were only one option for showcasing compliance with EU regulation, and that this compliance could be showcased by other means. Surprisingly enough, some informants also stated that their customers were more demanding in terms of safety than the industrial standards and European regulations were.
Halfdan showed his awareness of unforeseeability, while limiting it to misuse by his clients, and expressed his unwillingness to deal with it through testing procedures: 'The hard thing is to test everything. And you can't because you are in a way limited, because our customer always [uses] or thinks in a different way than you are testing. And that's a hard thing.' Testing the product's safety is also the first sacrifice when facing limited resources. Halfdan recognizes the issue when he declares: 'So you have only a limited time to test, because you are working day and night, day and night. And when it's finished, then you are testing maybe for four hours or eight hours. But maybe I should have been testing for three weeks. But you don't have time to test enough. And that's the big issue.'
The informants did not see a legal value in testing their products, whereas a practitioner would see it as protection against product defect claims. Bo said: 'I think that the practical [value of testing and assessing] is that we get a better product. […] But on the legal term we have not really considered that, other than the [industrial] standards.'
In this section, we exposed our informants' perspective on a number of compliance matters, and examined their relationship to private governance and industrial standards. We can observe that they feel satisfied with a 'good enough' approach to compliance, which does not necessarily aim to go beyond what the law prescribes. This is of course understandable when the population has to balance requirements such as compliance, safety, functionality, and running a business. It is only upon customer demand that they go further on safety matters, for instance.

Law and language conflicts
In this section, we try to identify the most visible points of divergence in thinking about legal matters between the informants and practicing lawyers. We use this as a springboard to also expose some of the outstanding elements we identified in the process, for instance when it comes to how the informants perceive AI. We also discuss our informants' language when talking to us during the interview process, and why they could sometimes fall back into a type of marketing speech. Torsten's company develops AI-based software that helps specific types of robots better analyse sensor data and calibrate their movements more precisely to accomplish their tasks. His software essentially plugs onto another robot and takes control over a host of types of movements to accomplish certain actions. When we contacted him, he first wrote to us: 'Whether [my company] is the right company to ask is another matter. We don't sell robots and do [not do] final installations at the customers but we may have input to the legal concerns.' This was an interesting comment, because he was quite attuned to all the concerns and worries that 'real' robot producers had, and also because his software has no existence outside of robotics. It is also interesting to compare with what lawyers have been including under the term 'robot': they tend to include digital entities as well, 82 whereas the notion of robot is purely physical amongst our informants. 82 cf Turner (n 8); Balkin (n 7); Hildebrandt (n 8).
We initially asked the informants if their systems were AI, to which most answered no; we then asked if their systems had machine learning algorithms, to which six of them answered yes. The remaining four informants unanimously expressed their intention to implement it in future products. Halfdan, for instance, explained: '[if] in ten years' time, you want to survive in this branch [then] you are building in some kind of AI'. Asbjørn: 'Not a lot at the moment. I think [AI] are some of the things we slowly want to get into.' We interpret this first as a hesitation to qualify their systems as AI, as the technology remains somewhat intimidating. In their minds, AI is not necessarily fully mature, and therefore qualifying their system as AI could make their product seem immature. It also showed that despite these reservations, they still plan to integrate and develop the technology further, another indication of the future massive deployment of AI and robotics in society.
All of those who do have AI in their products stated that they used AI for perception purposes only: visual, audio, voice recognition, etc. They further specified that, the AI module being perception only, the 'AI' did not make any decisions for their systems, and as such could not fail or create risks, because a more deterministic piece of software was making the decisions governing the system's behaviour. While they qualified machine learning as AI, they would not qualify their products as incorporating AI, because the machine learning modules were restrained to specific aspects of their systems.
Bo, for instance, was very insistent that it was 'perception AI' rather than what he called 'decision AI'. As Asbjørn puts it, '[AI] is very much associated with the perception system and not in the decision making.' Most of them also mentioned that their redundant safety mechanisms would cope with any mislabelling by said perception AI. It is interesting to note that they consider that their AI does not make decisions, yet also state that their safety mechanisms prevent any mislabelling by those 'perception AIs' from creating damages by influencing the decision-making part of the system. We had trouble finding an interpretation that could reconcile those two statements, in part because our perspective is that the 'decision' part of their system is not capable of critically appraising what the perception AI is labelling, which makes the perception input highly deterministic of the decision-making system's output.
This stance of distinguishing 'perception' from 'decision' AI can be explained by their lack of trust in the technology at this stage. Bo: 'What we did [was that] we tried to put AI on as a second layer, so we [wouldn't] rely a hundred percent on the AI. So you can [view] AI as a supplement that makes the product better, but not as one that is needed. So if the AI fails, we still have the basic system running, because we don't trust [it] that much.'
Bo observed further that while he did not trust his own robot that much, he saw that his customers trusted it more than they rationally should.
It is unbelievable how trustful people are [of] robots.When you have the big robots moving around, people just walk out in front of them and [expect] them to stop.Sometimes I'm really amazed [and] say 'oh my god.What if they didn't work?' but they just trust in it.They would never trust that if it was a truck driver.
Bo's comment should be somewhat qualified in light of other declarations he made, describing very ingenious methods his company developed to make the behaviour of his robots somewhat human-like, a feat he was very proud of. He accomplished it with the objective of making his product behave, and take trajectories, in a way that would seem more biological and predictable to human collaborators: he was actively trying to make his customers trust the robot.
This declaration also contrasts incisively with the tendency other informants had to market their products to us. This could be explained by the fact that Bo was no longer employed at the company that produced the robot he was talking about, and therefore felt no pressure to always talk positively about his product. This may help explain why other CxOs tended to fall back into a marketing discourse: they are always on the job, defending their product. Even though we assured them they would not be identifiable through the interviews, maybe some of them had a hard time letting go of the possibility that their product could somehow be identified from our research. Additionally, while they knew they would not be identifiable as individual persons, products, or companies, they could have fallen back into this discourse through a certain sense of community, defending either the prestige of Danish robotics or, even more precisely, the merits of the Odense robotics community.
While Bo states his lack of trust in AI very clearly, he is not the only one who makes this type of comment. As mentioned, there was a generic resistance to qualifying the product as AI, even if it did have some machine learning in it. This lack of trust somewhat conflicts with the unanimous declarations of the informants wanting to implement AI further in their business or products. While they do not put this fear into words very clearly, beyond feeling that the technology is not necessarily mature yet, they are both scared of it and attracted to it. One explanation could be that, at the time of the interviews, the European Commission's draft AI Act had not yet been published.
The lack of a clear regulatory framework on AI could mean that the producers do not feel comfortable engaging with the technology without foresight into whether they would have to scrap their projects because of unresolvable path dependencies making the product illegal. Another explanation, taking root in the 'perception' versus 'decision' AI discourse, could be that the producers sense that AI is agentic and creates a semblance of autonomy, giving systems a certain measure of discretion in finding the best way to solve a specific mission. That they make this difference between perception and decision, while emphasizing that their AI does not make decisions, could be the clearest indication of that lack of trust. They feel uncomfortable delegating control to a part of the machine that they have not written by hand from beginning to end.
In summary, this section explored the conflicts of understanding that can be observed between the informants on one side and lawyers and policy makers on the other. We have observed that lawyers and roboticists do not agree on what a robot is, or on what constitutes perception or decision AI. We have also seen that the informants are reluctant to call their robots AI even when they possess AI modules; whether this is because of their distrust of AI, or a feeling that calling them that way would be taking risks, is not completely clear.

Discussion
Rather than legal concerns, engineers seem mostly preoccupied with safety. If a system is safe, they will not be liable for accidents, which do not occur in the first place. Even their contractual concerns are framed mostly as safety concerns. The simultaneous increase in the safety of machines and in the unpredictability of their failures could challenge this definition of safety, which may also lead to changing the way engineers approach the tasks of design, testing, and manufacturing. It is probable that the safety-functionality-economy nexus at the centre of the producers' concerns will be put in question once an advanced cobot generates a costly, media-covered, unforeseeable failure. If our informants' declarations on compliance with industrial standards are any indication, it is likely that the roboticists will not have done anything objectively wrong in the design of the defective product, but will also probably not have gone much further than 'good enough', state-of-the-art safety. These observations tend to nuance the theory proposed earlier that engineers re-evaluate engineering approaches: 83 they only re-evaluate to a certain extent, and prioritize economy and functionality before safety. Nevertheless, we lawyers are over-worried about safety failures and liability cases, whereas cobot producers are confident that their redundant safety systems make secure products. We are over-worried because of the shape of our training: we only ever hear about this or that tragic accident during our legal studies. To better align our languages, lawyers should try to step away from this tendency. Lawyers see the origin of safety in the legal regulation that requires specific thresholds of safety, whereas the roboticists do not see the necessity or value in mobilizing law. 84
Surprisingly, that refusal to mobilize law is rooted in the identity aspects of their regulatory consciousness: they recognize that they do not do law, yet they also feel very confident that they can figure out their legal issues without needing lawyers, for example when drafting disclaimers in their contracts. 85 From a hegemony perspective, engineers have disdain for law; 86 they are at best bothered by it, which could be characterized as a form of resistance, especially when those same engineers refuse to recognize the shared domain of competence over technology that lawyers and engineers have.
The first thing that lawyers and policy makers can learn from our data is that we could better reach engineers (roboticists or others) by changing the way we formulate legal concerns, so that they can fit into the safety nexus without being over-worried about liability. 87 One of the things we experienced when trying to reach out to engineers on the island of Fyn was their wariness of us as academics and practicing lawyers. It was only by presenting ourselves as researchers from the University that we managed to convince them to lower their guard and talk with us. Lawyers spell problems in their minds, whether when we try to meddle with what engineers can or cannot do with their designs, or when we force them to deal with outstanding legal issues. By tuning our language and narratives to the safety nexus described in the analysis, it is plausible that we could alter their perception of lawyers as troublemakers, and establish ourselves as value-adders. In this perspective, ethical initiatives such as that of the AI High-Level Expert Group on trustworthy AI are valuable, and could gain even further from obtaining some form of legal recognition. If those ethical-legal approaches also clearly integrated the safety-functionality-economy nexus, they would probably become much more attractive to tech producers.
83 Nichols (n 15). 84 Ewick and Silbey (n 49) 18. 85 cf Chua and Engel (n 54) 338. 86 ibid 339. 87 On mapping values and reorienting governance and policy making work, see in this special issue: Geoff Gordon, Bernhard
In sum, the data showed that our expectations were somewhat inaccurate. Safety is not a standalone concern for engineers, although a part of this safety concern does cover legal risks, even if at a much more distant level than we expected. We expected they would place safety at a higher level of concern, but the low degree of importance they attributed to risk assessment and testing practices leads us to think otherwise. In their discourse, there is no fundamental change in the nature of the risks brought about by AI, even though they sense that something about the technology is different and cause for worry. The data also showed that our expectation about value chains was wrong: producers did not accept the legal risks that come with long value chains; they do, however, have legal strategies to deal with those risks, by disclaiming against changes. Finally, the informants did not consider legal regulation and industrial standards as fundamentally different, and they seemed not to attribute much legal value to industrial standards.

Conclusion
This paper showed that while policy makers, legal scholars, and roboticists share a number of legal concerns, the roboticists cognize most of those concerns as safety concerns, incorporated with and subordinate to economic and functionality concerns. This matters because it exposes the difference in understanding of safety between the practicing lawyers on one side and the roboticists on the other. 88 The paper also showed that producers do not discuss the risks generated by AI as changing in nature, but they do work with the effects of that change in practice, even if they do not necessarily describe it this way. One way they do so is by diffusing their responsibilities to the actors below them in the value chain. Taken together, this shows the gap between the importance given to safety failures by our informants and our own perspective as practicing lawyers. Another important finding was identifying the low degree of trust that some of the informants had in AI.
Being able to chart the differences in understanding and language between practicing lawyers and roboticists has multiple layers of value. From a research perspective, this newfound alignment could help frame more relevant research questions when studying technology law, the engineering profession, or the legal profession. For instance, legal research could focus on the contractual and extra-contractual distribution of responsibility along the value chain with an understanding of the specific contractual practices within the industry. From a commercial angle, this piece will also help practicing lawyers and engineers better align their expectations and service offerings. Hopefully, lawyers will become better able to communicate with engineers, and by doing so deal with their problems more effectively. For instance, lawyers should understand that safety is a work in progress and not a binary state; it is not necessarily just a question of whether a safety failure is going to happen, but also a question of balance between the concerns held within the safety nexus. Finally, at a policy level, our research has identified a certain number of pain points that could be relevant for the legislator to solve. For instance, the upcoming AI Act should take notice of the difference the engineers make between perception and decision AI, and make sure that this type of reasoning does not become an argument to opt out of the high-risk compliance system.

Declaration of Competing Interest
Léonard Van Rompaey and Robert Jønsson were both employed by the Danish law firm CO:PLAY, which has commercial interests in the Danish technology industry.
Rieder and Giovanni Sileno, 'On Mapping Values in AI Governance' (2022) 46 Computer Law & Security Review 105712. 88 Lawyers at the same time need to face evolutions in their own profession brought about by AI; see for instance, on the question of automated contracts, also in this special issue: Maria José Schmidt-Kessen, Helen Eenmaa and Maya Mitre, 'Machines That Make and Keep Promises - Lessons for Contract Automation from Algorithmic Trading on Financial Markets' (2022) 46 Computer Law & Security Review 105717.

This in turn triggered wider conversations on AI and robotics law.
David King, 'Putting the Reins on Autonomous Vehicle Liability: Why Horse Accidents Are the Best Common Law Analogy' (2018) 19 North Carolina Journal of Law & Technology 127; Laurène Mazeau, 'Intelligence Artificielle et Responsabilité Civile : Le Cas des Logiciels d'Aide à la Décision en Matière Médicale' [2018] LexisNexis SA 38; Price Nicholson, 'Artificial Intelligence in Health Care: Applications and Legal Implications' (2017) 14 The SciTech Lawyer; Henry Prakken, 'On the Problem of Making Autonomous Vehicles Conform to Traffic Law' (2017) 25 Artificial Intelligence and Law 341; Harry Surden, 'Artificial Intelligence and Law: An Overview' (2019) 35 Georgia State University Law Review 1305. These publications serve to illustrate what lawyers (whether they are scholars, practitioners, or policy makers) consider legally concerning or important when it comes to AI and robotics.
77 On making machines make law-related decisions, see also in the special issue: Wachara Fungwacharakorn and Ken Satoh, 'Toward a Practical Legal Rule Revision in Legal Debugging' (2022) 46 Computer Law & Security Review 105696.