1 Project Framework, Preliminary Work and Motivation for the Workshop

The co-creation workshop described in this chapter is part of the project “Digital Sovereignty in Industry”, which addresses the design of transparent and informed working conditions for skilled workers in industrial work environments who use complex, digital systems. The iit Berlin has been running the project since 2019. Ahead of the workshop, the institute analyzed current trends and innovations for a deeper understanding of digitized industrial workplaces: In 2019, interviews were conducted with experts from academia and industry who engage in the design of user-centered digital workplaces (Pentenrieder et al. 2020). To complement the expert opinions with a practical perspective, the iit conducted workplace studies at companies in the German tool making and mold making industry in 2020. Ethnographic methods (Suchman 1995) helped to explore the use of digital technologies in everyday working routines. During visits to several medium-sized companies (SMEs), workers were accompanied in their everyday work practices. They were interviewed directly in their work environment about their wishes for and concerns about a digital future (Pentenrieder et al. 2021). The user-centered approach revealed, on the one hand, concerns about a lack of transparency and about greater dependency and restrictions on work practices due to increasingly complex IT systems. On the other hand, the employees noted that there are not yet enough best practice examples of complex technologies that enable skilled workers to stay knowledgeable about the workings of complex machines. Clearly, explainability and controllability play an even greater role once systems start using software components containing artificial intelligence (AI).

Based on this information from the practical work environment, the iit Berlin developed a workshop format to inform practitioners about innovative technologies in a user-oriented way and to make the technology discussable between different stakeholders. The practitioners’ interest was that AI components and their value for daily work routines be discussed (a) in the context of particular applications. Only this closeness to industrial use cases allows the evaluation of the new technologies’ specific value for the working environment. Furthermore, (b) new systems must be usable for teams with different levels of knowledge and different technical backgrounds. Based on these needs, the authors developed a participatory online workshop and subsequently tested it. In all phases, co-creation principles and participatory design formats were applied (Prahalad and Ramaswamy 2004; Cech 2021).

2 Theoretical Basis for the Workshop

In line with the preceding assessment of user requirements in the German toolmaking industry, the workshop on December 1, 2021, aimed to test a format for elaborating the explainability and controllability of complex IT systems. The workshop embedded digital methods that were increasingly used during the Covid-19 pandemic (for the workshop concept, see Pentenrieder et al. 2021; Pentenrieder and Hartmann 2022). The event software “WebEx” was combined with a digital whiteboard on the collaboration platform “Miro”. The workshop was open to different user groups and free of charge. This allowed everyone interested in AI systems integrated into industrial work environments to participate in the workshop and develop scenarios for explainability and controllability for different user groups. Representatives from business, politics, and science received explicit invitations to the workshop.

The workshop aimed to analyze workplaces as sociotechnical systems. Therefore, the participants’ ideas and experiences were arranged along a matrix of socio-technical aspects. As a second method, graphic recorders accompanied the discussions and visualized aspects of the working environment. They focused the content of the graphics particularly on situations where new explanations were needed. The participants could see and comment on the visualizations live alongside the discussion (see Fig. 2).

Fig. 1. Interactive matrix based on sociotechnical aspects (own illustration)

The matrix’s structure relates to work in the field of ergonomics, for example by Mumford (2006), concerning a sociotechnical approach based on values of democracy (cf. also the Scandinavian concept of ‘industrial democracy’, Emery and Thorsrud 1982) and on group dynamics concepts (Mumford 2006). The ‘original’ school of thought dates back to the Tavistock Institute of Human Relations (Trist and Bamforth 1951; Mumford 2006; Cherns 1976). The core postulate of this approach is “the joint optimization of the social and technical systems” (Mumford 2006). This implies (at least implicitly) a two-tiered structure of sociotechnical systems, with both social and technical sub-systems. More modern approaches distinguish three sub-systems: Technology, Organization, and People. Organizations as social entities and human beings are here considered separate sub-systems; this approach is also applied in the matrix in Fig. 1. Besides these basic concepts from sociotechnical systems theory, the matrix focuses on transparency, explainability, and controllability. Controllability is split into efficiency and divergence in the sense of psychological control and action regulation theory. In this context, efficiency refers to an environment where specific actions lead with high predictability to specific results. Divergence means that different courses of action leading to different ends are available (see Fig. 1; Oesterreich 1981; Hartmann 2021; Pentenrieder et al. 2022; further references can be found in the literature).
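The 3 × 3 structure described above, sub-systems crossed with transparency, explainability, and controllability, can be sketched as a simple data structure. The following is a minimal, hypothetical illustration in Python; the helper function and the example note are assumptions, not the workshop’s actual tooling:

```python
# Hypothetical sketch of the 3 x 3 analysis matrix: sociotechnical
# sub-systems crossed with the three aspects. Each cell collects
# notes gathered from workshop participants.
SUB_SYSTEMS = ["Technology", "Organization", "People"]
ASPECTS = ["Transparency", "Explainability", "Controllability"]

matrix = {
    (sub, aspect): []  # each of the nine cells starts empty
    for sub in SUB_SYSTEMS
    for aspect in ASPECTS
}

def add_note(sub_system: str, aspect: str, note: str) -> None:
    """File a workshop note into the matching matrix cell."""
    matrix[(sub_system, aspect)].append(note)

# Invented example note for illustration:
add_note("People", "Controllability",
         "Slider to select the level of machine support")
```

Representing the matrix as cells keyed by (sub-system, aspect) pairs keeps the two classification dimensions explicit, which mirrors how participants’ contributions were sorted during the workshop.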

To put these theoretical ideas into practice, the workshop used graphic recording as a method that conveys technical information and social knowledge alike:

Fig. 2. Graphic recording as an interactive method to discuss work environments inclusively (own illustration based on Visual Facilitators)

The approach of artifact analysis, according to Lueger and Froschauer (2018), enables a closer look at the socio-technical system of AI-supported processes in organizations and companies, to understand the concrete meaning for technology, humans, and organizations in detail. The two authors define artifacts as a mediating and coordinating instance (e.g., road signs) between actors as well as a supporting and orienting instance (e.g., medical prostheses) (cf. Lueger and Froschauer 2018, pp. 25f.). AI-supported software systems within a company can be understood as such an artifact. As such, the AI-supported software systems merge with the knowledge of the respective participants using them (Lueger and Froschauer 2018, p. 23; cf. also Bateson 1985, p. 582; Carroll and Campbell 1989). Lueger and Froschauer attribute great potential to artifacts for indirect learning processes, “in that we can save ourselves laborious research, experiments, or experiences because others make their insights available to us” (2018, p. 22, own translation).

Artifact analysis helps to investigate the structural influence of artifacts on individual actions as well as on interpersonal relationships (cf. Lueger and Froschauer 2018, p. 78). It raises questions like:

  • Which actions are delegated to the new AI system – and thus change the actions and competencies of the humans who surround the technical devices?

  • How must the actions of humans adapt while interacting with the systems? What further training should be offered?

  • What kind of support do users of AI systems need? At the beginning of the day, the machine could ask how much support is needed, from little to a lot, at the user’s choice (e.g., selectable through a slide control, see Fig. 6). During the day, the machine could keep asking whether more or less support is needed.

  • With regard to the different groups of people who use the technology within a company, it is also important to consider in which situation the new system will be used: Which user (group) needs which kind of functional interface and appropriate information from the system? Typical application characteristics of the technical system in terms of places, times, social circumstances, events, or processes must be considered as well.

  • In which contexts could the AI system be used? How does it differ from currently used artifacts and techniques? What is the added value of the new technology for the investigated context?

  • To what extent do artifacts structure and influence social settings?

    • Structuring of social spaces and associated calls to action

    • Structuring of action sequences

    • Influencing social relations

    • Influencing communication

  • What forms of conception and history of acceptance are associated with the artifact?

    • Integration into cultural developments

    • Function of the specific artifact

    • Context of the discussion about (similar) artifacts

  • How does the conception of the artifact change…

    • … conditions for acceptance?

    • … temporal processes with regard to social or cultural integration?

  • The importance of integration for social coexistence (Is the new technology automatically the better solution? Whom does it exclude? What interfaces exist between old and new methods?)

  • Stories about previous artifact versions (involving senior employees and drawing on their wealth of experience)
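One idea raised in the questions above, a machine that asks how much assistance the user wants via a slide control and keeps asking back during the day, can be sketched in a few lines. This is a minimal illustration; the class name, value range, and guidance labels are invented for the example:

```python
class SupportLevel:
    """Hypothetical slide control: the user picks how much machine
    support they want, from 0 (none) to 10 (maximum guidance)."""
    MIN, MAX = 0, 10

    def __init__(self, level: int = 5):
        self.level = self._clamp(level)

    def _clamp(self, level: int) -> int:
        # Keep the value inside the slider's range.
        return max(self.MIN, min(self.MAX, level))

    def adjust(self, delta: int) -> int:
        """Called when the machine asks back during the day and the
        user requests more (+) or less (-) support."""
        self.level = self._clamp(self.level + delta)
        return self.level

    def guidance(self) -> str:
        # Map the slider position to an assistance mode.
        if self.level <= 3:
            return "hints only"
        if self.level <= 7:
            return "step-by-step suggestions"
        return "full guided walkthrough"

support = SupportLevel(2)   # user starts the shift with little support
support.adjust(+4)          # mid-shift, the user asks for more help
print(support.guidance())   # -> step-by-step suggestions
```

The clamping and the repeated `adjust` calls capture the idea that the level of support is not fixed at login but renegotiated between user and machine over the working day.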

These questions show that the social dimension of a technical system does not stop at the user interface and, respectively, at the display of a system. The organizational level also needs to play a part in the design of the explainability and controllability of a system. Examples are the availability of social support at the workplace in using the AI system and also the involvement of the works council in integrating new software. In addition to the software implementation, continuous training must be considered, financial resources must be provided, and additional effort needs to be applied in order to enable explainability and controllability. To that end, co-creation workshops are ideally integrated into design methods of intelligent systems and thus create additional value. With this in mind, this paper shows how specific characteristics of intelligent systems can be developed with the suggested workshop format for three exemplary applications.

3 Case Studies, Discussion, and Results

Case study 1 (Fig. 3) was provided by Dortmund University of Technology and features an application from the brewing industry. The project aims to trace the taste quality of beer back to its ingredients and to the parameters of the brewing process. To this end, the software system processes more than 50 parameters for the quality of water, hops, malt, and yeast.

Fig. 3. Predicting the taste of beer with AI (own illustration based on Visual Facilitators)

Case study 2 (Figs. 2 and 4) focused on the various maintenance processes of a bottling machine and was organized by the University of Stuttgart and the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA). Both organizations are developing a new digital system to facilitate the setup processes of bottling machines. A guided error analysis supports workers with different levels of knowledge in their maintenance tasks.

Fig. 4. What could a bottle format change look like for the maintainer? (own illustration based on Visual Facilitators)

Case study 3 (Fig. 5) was from the automotive industry, provided by the University of Bamberg, the Fraunhofer Institute for Integrated Circuits (IIS), and Continental AG. It deals with new software that supports engineers in programming the locking system of a trunk lid. The particular challenge is that, as in the case of the brewery, many parameters affect the result and have to be traced back. Factors that affect the closing function of the trunk lid include the tilt of the car, the outside temperature, and humidity. The trunk lid must close securely under all conditions but also release when, e.g., a finger is in the way. Finding the right combination for this is a major challenge. Aspiring engineers are to be supported here by an AI system that is fed with the knowledge of more experienced engineers.

Fig. 5. Parameters for the locking system of a trunk lid (own illustration based on Visual Facilitators)

All three projects work with AI-based technologies and at the same time pay special attention to the explainability and controllability of their systems.

3.1 Integration of Different User Groups

All three case studies have in common that experienced specialists contribute their experiential knowledge to the machine learning process of AI systems and pass it on to less experienced colleagues. Furthermore, in all examples, workers with low to high technical expertise encounter very complex systems that are not readily self-explanatory. Thus, explanations are absolutely necessary for a safe operation of the systems.

Fig. 6. Multiple views of different user groups. Who needs which explanation? (own illustration based on Visual Facilitators)

Figure 6 shows the result of the discussion about the multiple views that support different user groups in finding an explanation or controlling the system. Each user might have different questions for the system; therefore, an explainable AI must guide each user differently. Moreover, information requirements vary depending on the role of the user. The graphic on the left side of Fig. 6 shows how information changes depending on whether the system addresses management staff, trainees, or trained users.

The management sometimes does not have a technical background but needs the condensed interpretation and main messages from the data and, at best, application examples – like data thresholds of decisions (see, e.g., counterfactual explanations by Wachter et al. 2017) – to be able to communicate the meaning of a decision made.
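The counterfactual idea cited above can be illustrated with a minimal sketch: for a simple threshold-based decision, report the value each input would need in order to flip the outcome, holding the others fixed. The linear score, the feature names, and the threshold below are invented for illustration and are not taken from any of the case studies:

```python
def counterfactual(features: dict, weights: dict, threshold: float):
    """Hypothetical one-feature counterfactual for a linear score:
    for each feature, find the value it would need for the score to
    reach the decision threshold, holding the other features fixed."""
    score = sum(weights[f] * v for f, v in features.items())
    flips = {}
    for f, v in features.items():
        w = weights[f]
        if w == 0:
            continue  # this feature cannot change the score
        # Solve score + w * (needed - v) = threshold for `needed`.
        flips[f] = round(v + (threshold - score) / w, 2)
    return score, flips

# Invented brewing-style example: a score below 0.5 means
# "quality check fails"; the flips show the smallest per-feature change
# that would reach the threshold.
score, flips = counterfactual(
    {"hop_bitterness": 0.2, "water_hardness": 0.4},
    {"hop_bitterness": 1.0, "water_hardness": 0.5},
    threshold=0.5,
)
```

A message like “the batch would have passed if water hardness had been 0.6 instead of 0.4” is exactly the kind of condensed, actionable statement that non-technical management can relay without reading raw model output.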

Trained professionals will also use the system. They need additional views for interpreting individual data points, for example to identify (causal) relationships in complex data. The participants developed transparency strategies for this purpose (see top right in Fig. 6). In complex systems, a parameter never stands alone but is dependent on influences by other parameters. The case studies all concluded that with the use of artificially intelligent systems, the (visual) representation of independent and dependent parameters – whether in relation to the trunk lid or in relation to the taste of beer – is a key challenge for interface designers.

A third and different view is required for new employees who need training in interpreting output parameters. For this purpose, the participants developed the idea of an interactive element for new employees that allows them to move parameters and directly observe and understand the effect of changes in the system (see Fig. 6).
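The interactive training element described above, moving a parameter and directly observing its effect, could be sketched as a simple observer pattern. The widget class, the stand-in model, and the parameter names below are assumptions for illustration only:

```python
from typing import Callable

class ParameterPlayground:
    """Hypothetical training widget: trainees set a parameter and
    immediately see the (simulated) system output change."""

    def __init__(self, model: Callable[[dict], float], params: dict):
        self.model = model
        self.params = dict(params)
        self.listeners = []  # callbacks receiving (param, value, output)

    def on_change(self, callback) -> None:
        """Register a UI element to be notified of every change."""
        self.listeners.append(callback)

    def set_param(self, name: str, value: float) -> float:
        """Move one parameter and recompute the simulated output."""
        self.params[name] = value
        output = self.model(self.params)
        for cb in self.listeners:
            cb(name, value, output)
        return output

# Invented stand-in model: "closing force" as a weighted sum of inputs.
playground = ParameterPlayground(
    model=lambda p: 2.0 * p["tilt"] + 0.5 * p["humidity"],
    params={"tilt": 1.0, "humidity": 10.0},
)
log = []
playground.on_change(lambda name, value, out: log.append((name, value, out)))
playground.set_param("tilt", 2.0)  # trainee nudges the tilt parameter
```

Because the model is passed in as a function, the same widget could wrap the brewery predictor or the trunk-lid model, letting new employees explore cause and effect safely outside the live system.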

3.2 Embedding Human Skills in Standardized Digital Production Processes

Existing debates about which kinds of work can be automated continue in the development of explainable and controllable technical systems. Highly standardized processes, and processes that follow more rigid rule sets because, e.g., lean management methodologies have already been applied to them, are easier to automate and consequently also easier to explain than complex processes or processes that have not yet been standardized. Accordingly, processes in which complex digital systems are to be used should first undergo, ideally iterative, standardization. The company is responsible for being aware of these facilitators at an early stage instead of perceiving AI as a panacea and making excessive demands on the implementation.

A complementary aspect in the discussion concerns the tasks and roles on an organizational level. First of all, clarity must be achieved about the area of responsibility of the specific worker. To reach better explanations within automated processes, the person’s tasks and roles in the process must be defined. For IT developers, this is usually an indispensable basis for assigning “rights” to a machine, so that the person can then be assisted in a task-appropriate manner. For this purpose, a process must be clearly divided into steps and the corresponding roles for planning, monitoring, and execution. In the workshop, possibilities were discussed for how an explainable AI system could also pave the way to further qualifications and thus actively support users in on-the-job training (see in Fig. 6 the training to interpret parameters).
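Dividing a process into steps and assigning “rights” per role, as described above, could look like the following sketch. The step names, role names, and assistance modes are invented for illustration:

```python
# Hypothetical mapping of process steps to the roles allowed to act
# on them, as a basis for task-appropriate machine assistance.
STEP_ROLES = {
    "planning":   {"planner"},
    "monitoring": {"planner", "operator"},
    "execution":  {"operator", "trainee"},
}

def may_act(role: str, step: str) -> bool:
    """Check whether a role holds the rights for a process step."""
    return role in STEP_ROLES.get(step, set())

def assistance_for(role: str, step: str) -> str:
    """Task-appropriate assistance: guidance only where the role may act,
    a read-only view everywhere else."""
    if not may_act(role, step):
        return "read-only view"
    return "guided assistance" if role == "trainee" else "full control"
```

Making the role-to-step mapping explicit in one place is also what allows the system to couple rights with training: a trainee sees guided assistance exactly on the steps they are authorized to execute.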

3.3 Data Protection and Ethics

As with any form of data collection in the work context, fundamental questions of data privacy arise when collecting and processing data by and for AI systems. This is also the case in one form or another in the three projects discussed. If AI is fed with data from video surveillance of the work of experienced workers, concerns for privacy and ethical issues of performance monitoring arise.

When it comes to IT knowledge, developers of AI systems must also critically reflect on the assumptions that they consciously and subconsciously make about users. Certain biases arise from prejudices and can be an obstacle to sustainable acceptance of the system. This can be something as banal as the distinction between right-handed and left-handed people, which can lead to actions by one of the two groups not being recognized by the system. Organizations and system developers should not be guided by the principle of the ‘dumbest possible user’ but should actively engage with the needs of the users and design systems in such a way that they provide optimal support to all users and thus lead to user empowerment.

4 Outlook: Potential for Further Development

The aim of the workshop was to make AI systems accessible to a broader audience and discuss in an open manner how AI is involved in working routines and where explanations are required. The lively discussions showed that explanations of complex AI systems should be designed with the participation of different user perspectives. Such a process can be successfully supported by socio-technical methods like the analysis matrix presented here and graphic recording. In the following, the results of these methods are discussed with regard to further workshops based on the experience from the first workshop.

Graphic recording as a starting point for interaction design?

Graphic recording was very helpful in facilitating exchange across disciplinary boundaries by allowing participants to share their ideas with each other using visual aids. The workshop showed that visual results can be used for the joint development of user interfaces and for discussing work with complex technologies.

The method of graphic recording supports the participation process in two dimensions. First, the graphics bundle the ideas from an open discussion. The method serves as an instrument to structure the workshop by focusing on common views and technical details at the same time. The graphics (see Figs. 2–7) developed during the workshop serve as a projection surface for discussion subsequent to the workshop. As a second function, the graphics show how explainability and controllability could work in a specific situation – therefore they serve as a product for further work of interaction designers.

Furthermore, accompanying the workshops with graphic recordings, on the one hand, enabled an exchange about technical details but, on the other hand, also led to a transfer of ideas between the case studies. Moreover, it enabled the elaboration of generally valid quality criteria for explainable AI. These boil down to different user groups needing access to internal information of complex systems and having to be addressed differently by the interface. An appealing user interface is essential for the communication with users who are not technicians. Also, the visualization of complex data streams requires new creativity: It is a challenge to balance between complexity reduction and representation of the actual complexity (see Fig. 7).

Fig. 7. Balance between reduction of complexity and display of actual complexity (own illustration based on Visual Facilitators)

Testing the online participation format based on collaboration and graphic recording is one possibility for bringing together diverse perspectives and expertise to address needs for technology design. This confirmed our assumption for the workshop that visualizations in particular can be used to address and activate different user groups to join a technical design discussion. The specific question of how to design a button on the user interface and where to put it makes interface design accessible to people who might otherwise not feel confident enough. However, one of the most important lessons learned from the workshop is to plan for and specifically invite as heterogeneous a group of participants as possible to achieve truly interdisciplinary participation. Especially for online formats, special attention must be paid to the participants’ commitment by issuing personal invitations.

The workshop examined the following opportunities for graphic recording. At the same time, these aspects are also challenges for further workshop development.

Graphics…

  • create a tangible basis for follow-up discussions

  • are easy to understand

  • entail a strong reduction of what the system actually is

  • may reproduce but also illuminate unspoken basic assumptions and biases

  • make it (presumably) difficult to present issues such as privacy or user autonomy

Regarding the 3 × 3 analysis matrix – combining the sub-systems (technology, organization, people) of sociotechnical systems with aspects of transparency, explainability, and controllability – it became evident that this is an excellent tool to structure questions and demands identified by the workshop participants. In the future, it will be investigated whether this conceptual structure is usable for developing a formal method for the evaluation and certification of AI applications at industrial workplaces.

Who should participate in co-creation workshops?

The workshop tested a contemporary method of exchanging views on what explainability and controllability mean to different groups of users and what such explanations should look like when it comes to complex IT systems in industry. One important finding was that the programmers who are designing the system’s technology should not be responsible for explanations as well. Rather, a level of mediation is needed that translates technical aspects into the language of different user groups (and takes into account their social, e.g. organizational and individual needs/dependencies). Still, there are not enough levels of mediation integrated into the design of XAI systems due to time and financial restrictions. The aim of the workshop was to develop example scenarios in which the explainability and controllability of IT systems can be discussed.

Digital workshops in particular have the potential to integrate non-technical participants into discussions of technology design. This group is essential for the development of explanations because everyday users mostly do not have the technical background that the programmers of the software have. Users without an IT background therefore often have to ask questions about how the system functions. Consequently, it is particularly necessary to obtain their valuable opinions so that a wide range of users can understand complex IT systems in the future. For industries such as mechanical engineering, participation can also address problems of skills shortages.

Future workshop formats should focus even more on problem-oriented solutions: The workshop discussion should be initiated with a problem statement or with the needs and wishes for improvement of shop floor workers using a specific software technology. Questions like “What support would be helpful for your everyday work?” and “Which software components cause problems in your everyday work routine?” support this discussion. To conclude, the general advice should be to try to solve a problem with the simplest possible technological device (possibly without AI). However, if the best solution seems to be explainable AI, it is highly recommended to choose a participatory approach for its development.

Recommended questions for further workshops may include, but are not restricted to:

  • How can I implement this workshop format as a company or as a consultant in the company?

  • Which company departments should be involved in such a workshop (e.g. research and development, human resources, managers, workers, employee representatives, etc.)?

  • How can the graphics from the workshop be productively used for the subsequent process of interface/interaction design?

  • Which costs and benefits result from integrating participatory workshops for the software implementation?