From Meaning to Expression: A Dual Approach to Coupling

Abstract: Coupling is a key concept in the field of embodied interaction with digital products and systems, describing how digital phenomena relate to the physical world. In this paper, we present a Research through Design process in which the concept of coupling is explored and deepened. The use case that we employed to conduct our research is an industrial workplace proposed by Audi Brussels and Kuka. Our aim was to enrich this workplace with projection, or Spatial Augmented Reality, while focusing on operator interaction. We went through three successive design iterations, each of which resulted in a demonstrator. We present each of the three demonstrators, focusing on how they propelled our understanding of coupling. We establish a framework in which coupling between different events, be they physical or digital, emerges on four different aspects: time, location, direction, and expression. We bring the first three aspects together under one heading—coupling of meaning—and relate it to ease of use and pragmatic usability. We uncover the characteristics of the fourth aspect—coupling of expression—and link it to the psychological wellbeing of the operator in the workplace. We conclude this paper by highlighting its contribution to the embodied interaction research agenda.


Introduction
This paper is about coupling and how it drives design for interaction. The concept of coupling is a recurring theme in the knowledge domain of embodied interaction. In previous work [1], we formulated a definition of coupling based on the MCRpd interaction model [2] and the Interaction Frogger Framework [3]. In our definition, coupling is the relationship between different events that make up a user-product interaction routine. Events are representations of digital phenomena in the real world. These representations can have a physical or a digital character. For example, a display hangs on the wall and shows an image of a landscape. The display is controlled via a physical button, which is mounted next to it. When the user pushes the button, another landscape appears on the display. This interaction routine contains two events: the pushing of the button and the changing of the images on the display. It is clear that the two events are related, since pushing the button causes the display to change its scenery. The successive pictures on the display feel digital because they are visible but intangible, can easily be replaced, and can disappear instantly. They form a digital event. The user's pushing of the button is a physical, tactile interaction. The button is pushed, and the user feels this movement. The physical button is persistent. Unlike the on-screen images, it will not suddenly vanish or change its appearance. The movement of the button forms a physical event. The relationship between both events is referred to as coupling. In this paper, we aim to further define our understanding of coupling, and its different aspects, against the background of embodied interaction. We complete this task by presenting a Research through Design (RtD) project [4] that we conducted in the area of industrial workplaces and Spatial Augmented Reality. 
From the MCRpd model [2], we have inherited the idea that digital phenomena are represented in the physical world through two kinds of representations: those that feel physical, and those that feel digital. We called them physical events and digital events. The coupling between them (the green horizontal line on the diagram) is the coupling that we investigate in our research.

The Interaction Frogger Framework
Wensveen et al. [3] translated the concept of coupling into a framework for product designers. The Interaction Frogger Framework was developed to make people's interaction with digital products natural and intuitive. The reference for natural interaction is the interaction with simple mechanical products (e.g., cutting paper with a pair of scissors). The framework focuses on the coupling between user action and product function. In an interaction routine, the two factors constantly alternate. A user action results in a product function, and vice versa. The coupling is the mutual relationship between the two factors. It describes how user action and product function are related to each other, and to what extent the user perceives them as similar. The framework suggests that when user action and product function are united on six aspects, they are experienced as naturally coupled. In the context of this paper, we retain three of these six aspects. These three aspects are:
• Time: User action and product function coincide in time;
• Location: User action and product function occur in the same location;
• Direction: User action and product function have the same direction of movement.
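As an illustration, the three retained aspects can be read as a simple comparison between two events. The sketch below is our own minimal rendering of that idea, not part of the framework itself; the `Event` structure and the tolerance values are assumptions made for the example.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch: an event reduced to a time stamp (s), a 2D location
# (m), and a direction of movement (2D vector). Names and tolerances below
# are our own assumptions, not part of the Interaction Frogger Framework.
@dataclass
class Event:
    time: float
    location: tuple   # (x, y)
    direction: tuple  # (dx, dy)

def coupled_aspects(a, b, max_delay=0.2, max_distance=0.05, min_cos=0.9):
    """Return which of the three retained aspects two events share."""
    aspects = []
    if abs(a.time - b.time) <= max_delay:                  # coincide in time
        aspects.append("time")
    if math.dist(a.location, b.location) <= max_distance:  # same location
        aspects.append("location")
    dot = a.direction[0] * b.direction[0] + a.direction[1] * b.direction[1]
    norm = math.hypot(*a.direction) * math.hypot(*b.direction)
    if norm and dot / norm >= min_cos:                     # same direction
        aspects.append("direction")
    return aspects

# e.g., a button push and a display change, close in time, place, and direction:
push = Event(time=0.0, location=(0.10, 0.20), direction=(0.0, 1.0))
change = Event(time=0.1, location=(0.12, 0.20), direction=(0.0, 1.0))
```

With these values, `coupled_aspects(push, change)` reports all three aspects as shared, mirroring the framework's notion of a naturally coupled action and function.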

The Aesthetics of Coupling
In our previous work, we deepened the concept of coupling by considering it as a source of aesthetic experience [1]. As described earlier in this paper, we distinguish physical and digital events. Whether these events are initiated by the user or the product, as emphasized by the Interaction Frogger Framework, is of secondary importance to us. We argued that the aesthetics of coupling uniquely play between two events: physical and digital events. The intrinsic difference in the nature of both events makes up the essence of the aesthetics. Physical events are persistent and static, while digital events are temporal and dynamic. The designer brings both events very close to one another, so that they are perceived by the user as one coherent, harmonious user experience. At the same time, the user is very aware of the different natures of the two events, which clearly makes a complete unification impossible. This inherent paradox, which is the tension field between being apart and together at the same time, causes feelings of surprise and alienation that constitute the aesthetics of coupling. The first author, together with Floor Van Schayik, designed a night lamp that illustrated this concept ( Figure 2). A video of this night lamp can be found in Supplementary Material: Video S1-NightLamp. When the user pulls up the sphere at the top of the lamp, the light inside the lamp moves with it and jumps into the sphere at the end of the movement. As such, the reading lamp becomes a night lamp. In this example, the aesthetics of coupling lie in the contrast between the physicality of the sphere and the intangibility of the light inside the lamp.
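The night lamp's behavior can be summarized in a few lines. The sketch below is purely illustrative: the normalized heights and the snap threshold are assumed values, not the actual design parameters of the lamp.

```python
# Illustrative sketch of the night-lamp coupling described above: while the
# sphere is pulled up, the intangible light inside the lamp tracks the
# sphere's height; near the top of the travel, the light jumps into the
# sphere. The normalized scale and the threshold are assumed values.
SNAP_AT = 0.95  # assumed height at which the light jumps into the sphere

def light_state(sphere_height: float) -> dict:
    """Return where the light is rendered for a sphere height in [0, 1]."""
    if sphere_height >= SNAP_AT:
        # the digital event completes: the reading lamp becomes a night lamp
        return {"mode": "night_lamp", "light_in_sphere": True}
    # the light shadows the physical sphere's movement
    return {"mode": "reading_lamp", "light_height": sphere_height}
```

The contrast that carries the aesthetics lives in exactly this mapping: a persistent physical object (the sphere) continuously steering an intangible digital one (the light).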

The Audi Demonstrator
Demonstrator video can be seen in Supplementary Material: Video S2-AudiDemonstrator.

Background
We explored SAR in a human-cobot workplace [26]. A cobot, or collaborative robot, is a relatively small industrial robot designed for direct human-robot interaction. The context for our research project was provided by Audi Brussels (https://www.audibrussels.be, accessed on 17 May 2023) and Kuka (https://www.kuka.com, accessed on 17 May 2023). In the production line for the body panels of the Audi A1 in Brussels, Audi needed a workplace that could measure and inspect the adhesion quality of the fold glue joints for each panel (doors, hood and trunk). In this workplace, a single operator works together with a cobot: the Kuka LBR iiwa. This cobot tracks the contour of the body panel using contact force feedback, and checks the adhesion quality of the fold glue joints by means of an ultrasonic sensor.
We conducted a few preliminary ideations on the operator's interaction with the workplace and the cobot. We adopted the stance that interaction with projected content in workplaces is not going to replace today's interaction styles, but will coexist with and complement them [21]. This assumption resulted in a preliminary concept for the Audi workplace, defined in 3D CAD (Figure 3). The concept consisted of a circular table and a control unit, containing a display-button setup. The Audi A1 body panels are placed manually on the table, one at a time. Above them, a Kuka LBR iiwa cobot hangs upside down from a horizontal bridge, with its work envelope covering the entire table surface. On either side of the cobot, two projectors are mounted on the bridge, augmenting the table and body panels with projected images. The idea is that the operator steers and controls the cobot with the control unit. The cobot moves over the body panel while touching its contour and ultrasonically senses the adhesion quality of the glue joints. Relying on several SAR studies [27][28][29], we decided that the visual feedback on this measurement should be projected in real time on the body panel itself, rather than appearing on a separate, isolated display.

Description of the Demonstrator
We designed and built a 1/6 scale model of the Audi workplace (Figure 4). The purpose of this scale model was to inform and inspire people from the Flemish manufacturing industry. With this model, demonstrations are given by one operator to an audience of around ten people. The setup is made of sheet metal and SLS (Selective Laser Sintering) parts. It features a horizontal surface, on which the circular tabletop is mounted, and a vertical wall containing a mini beamer. Adjacent to the circular tabletop, a rectangular surface represents the control unit. The beamer simulates the control unit's multi-touch display and projects images onto the table surface and the objects that are placed on it: an adapted replica of the Audi A1 hood and two positioning supports. Finally, the setup includes an adapted scale model of the Kuka LBR iiwa, which the operator manipulates like a puppet in a theatre.

Description of the Interaction
The interaction starts at the control unit. The operator stands in front of it and performs menu navigation actions on the multi-touch display. He/she selects the type of car, in this case a five-door Audi A1, and the body panel, in this case the hood (Figure 5).
When the hood is selected, the hood icon moves upwards, leaves the display, and slides onto the augmented tabletop, where it grows into a full-scale positioning contour (Figure 6). The operator moves along with this motion, leaves the control unit, and stands at the tabletop. From there on, he/she is guided through projected work instructions. The contours of both supports are shown in red (Figure 7a). The operator positions both supports on the table (Figure 7b). When the system detects the correct placement of each support, the red pulsating projection turns white and adapts to the shape of the supports (Figure 7c). Next, the outline of the hood is projected in red (Figure 8a). The operator places the hood on both supports, which help to ensure a correct positioning. Next, the operator presses a start button on the control unit, and the cobot begins to track the hood contour and measures the adhesion quality of the glue joints (Figure 8b). The result of this measurement is projected in real time onto the hood itself: white dots indicate good adhesion quality, while red dots indicate poor quality (Figure 8c).
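The final measurement step amounts to a simple mapping from sensor readings to projected dots. The sketch below is a hypothetical rendering of that mapping: the 0-1 quality scale and the 0.7 threshold are our own illustrative assumptions, not Audi's actual acceptance criteria.

```python
# Hypothetical sketch of the measurement-to-projection mapping described
# above: each reading along a glue joint becomes a dot projected onto the
# hood, white for good adhesion and red for poor. The quality scale and
# the threshold are assumed for illustration.
GOOD_THRESHOLD = 0.7

def dot_color(adhesion_quality: float) -> str:
    """Map one ultrasonic reading to the color of its projected dot."""
    return "white" if adhesion_quality >= GOOD_THRESHOLD else "red"

def project_joint(readings):
    """Return the dot colors for a sequence of readings, in order."""
    return [dot_color(q) for q in readings]
```

For example, `project_joint([0.95, 0.81, 0.42])` yields `["white", "white", "red"]`, which would appear as two good-adhesion dots followed by one poor-adhesion dot on the hood.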

Discussion
The Audi demonstrator combines two interaction styles: interaction with a multi-touch display, and interaction with a spatially augmented tabletop. We see novelty in the way the demonstrator couples these two interaction styles. We focus on two particular couplings, which are further developed in the demonstrators presented in the next two sections.

Coupling 1: On-Screen Event and Projected Event
The demonstrator is divided into two distinct working zones: the control unit with a multi-touch display, and the tabletop with SAR (Figure 4). This split-up determines the form semantics of the workplace. The square shape of the multi-touch display is adjacent to the circular shape of the SAR tabletop, forming a continuous surface. This continuity is further extended in the coupling of both on-screen and projected events. This coupling is clearly visible in the following moment. When the user selects the hood by touching the icon on the display, this icon moves upwards and slides onto the augmented tabletop (Figure 6). At this point, the projection takes over the on-screen imagery. The transition between the on-screen event and the projected event leads the operator from the control unit to the tabletop. In the design process, we relied on the rules of the Interaction Frogger Framework [3]. Both events are coupled on the aspects of time, location, and direction (Figure 9):
• They happen one after the other, thus in the same time span;
• Where the icon leaves the display, it is projected onto the tabletop; thus, both events have the same location;
• The direction of the icon's movement on the display is the same as the direction of its movement on the tabletop.
It is important to note that the three different couplings combine aspects that are already present in each event separately. Each event occurs at a particular time, on a particular location, and has a movement with a particular direction. Designing the coupling between them is clear and straightforward because the goal is clear: to ensure that the three aspects of both events are in line with each other. The benefit of these three couplings lies in the domain of ease of use. Unity of time, location, and direction between different events makes the events resonate with each other. These couplings create uniformity, coherence, and order in different movements and actions, and promote the naturalness of interaction, as already implied through the Interaction Frogger Framework. They make the functioning of the workplace logical and, as such, easy to read. They reveal the meaning of the workplace to the operator. We call them couplings of meaning.
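The display-to-tabletop handoff can be sketched as a coordinate mapping that preserves the three aspects. The sketch below is our own illustration: the coordinate frames, the calibration offset, and the icon structure are assumptions, not the demonstrator's implementation.

```python
# Illustrative sketch of the handoff: the icon's exit point on the screen
# edge is mapped into the projector's tabletop frame, so the projected icon
# continues at the same location, in the same direction, and at the same
# moment. Frames and offset values are assumed for this sketch.

def screen_to_table(exit_xy, screen_origin_on_table=(0.0, 0.40)):
    """Map a screen coordinate (m) into tabletop coordinates (m)."""
    return (screen_origin_on_table[0] + exit_xy[0],
            screen_origin_on_table[1] + exit_xy[1])

def hand_off(icon: dict) -> dict:
    """Continue an icon's motion on the tabletop where it left the screen."""
    return {
        "position": screen_to_table(icon["exit_xy"]),  # same location
        "direction": icon["direction"],                # same direction
        "start_time": icon["exit_time"],               # same time, no gap
    }
```

Each entry in the returned dictionary enforces one of the three couplings of meaning, which is what makes the transition read as a single continuous movement rather than two unrelated animations.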
However, there is another factor involved. The transition between the two different interaction styles, i.e., multi-touch interaction on a display and in situ projection on an augmented tabletop, is emphasized through the movement of the hood icon. The icon literally leaves the display and the GUI paradigm and enters the real world, where it grows into a full-scale work instruction. This transformation is not already present in each event separately. It emerges from the coexistence and interplay of the three other couplings. Its merit lies in the fact that it adds expression and engagement to the interaction, as it touches on aesthetics and emotional values. In former work, we referred to this phenomenon as the aesthetics of coupling. In this paper, we want to deepen our thinking. We state that both events show unity on a fourth aspect, next to time, location, and direction. We call this aspect expression ( Figure 9). We call the coupling on this aspect coupling of expression.
We want to note that our definition of coupling of expression should not be confused with Wensveen's definition [3], in which the user expresses him- or herself during the interaction with a product. This expression is then reflected in the product's function. For example, cutting paper with a pair of scissors while feeling nervous and rushed will result in sloppy incisions.

Coupling 2: Physical Event and Projected Event
Another form of coupling occurs when the contours of both supports are projected onto the tabletop in real scale and all in white. At a certain point, the contour of the first support starts to pulsate slowly in red (Figure 7a). This event nudges the operator to place the first support. Once the operator has completed this task (Figure 7b), the projection changes color from red to white and adapts to the shape of the support (Figure 7c). The projection highlights the support's identification number, as well as its peg hole, in which the hood will eventually be positioned. The placement of the support by the operator is a physical event that is coupled to a projected event. This coupling occurs on two aspects: time and location (Figure 10).

• The moment the operator positions the support on the tabletop, the projection appears;
• The projection appears on the support itself, not next to or near it.
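The two aspects listed above can be made concrete with a minimal sketch. The event representation, coordinates, and tolerance values below are our own illustrative assumptions, not part of the demonstrator's implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """An observable event in the workplace (physical or digital)."""
    kind: str   # e.g., "physical" or "projected" (illustrative labels)
    t: float    # moment of occurrence, in seconds
    x: float    # location on the tabletop, in metres
    y: float

def coupled_in_time(a: Event, b: Event, tol_s: float = 0.2) -> bool:
    # Coupled in time: the two events occur (near-)simultaneously.
    return abs(a.t - b.t) <= tol_s

def coupled_in_location(a: Event, b: Event, tol_m: float = 0.05) -> bool:
    # Coupled in location: the projection appears on the support itself.
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= tol_m

# The support placement and the projection it triggers:
placement = Event("physical", t=12.00, x=0.40, y=0.25)
projection = Event("projected", t=12.05, x=0.40, y=0.25)
print(coupled_in_time(placement, projection))      # True
print(coupled_in_location(placement, projection))  # True
```

The sketch deliberately treats both aspects as simple tolerance checks; coupling of direction and expression, discussed later, are not reducible to such checks.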


Learnings for the Next Iteration
After building the demonstrator and presenting it to people from Audi, Kuka and the Flemish make industry, we concluded that the demonstrator indeed elicited the aspects of coupling mentioned above, but did not exploit their full potential. We saw three angles from which to further explore the design space. Firstly, we wanted to build a real-scale setup to enhance the sense of reality and immersion. Secondly, we wanted to create a real multi-touch display, rather than a simulated one, to increase the contrast between on-screen and projected events. Thirdly, we wanted to further explore Coupling 2, i.e., the coupling between the supports and the projection. We decided to build a second installation, together with Kuka, in order to fulfil these intentions.

The Kuka Mockup
Demonstrator video can be seen in Supplementary Material: Video S3-KukaMockup.

Background
At this point in the research project, the plan arose to build a real demonstrator with a working cobot. To better understand this idea, we decided to first construct a mockup out of wood, 3D-printed components and spare parts. Since our focus was on interaction and coupling, rather than on force-based contour tracking and echolocation, we chose an artefact with a less complex 3D contour than the Audi A1 hood: a wooden longboard.

Description of the Mockup
The demonstrator was more an experience prototype or mockup than a full-blown demonstrator (Figure 11). It consisted of two ladders on which we mounted an aluminum beam with two projectors, and a full-scale wooden mockup of the Kuka LBR iiwa cobot. We built a wooden tabletop and a control unit with a working multi-touch display, and placed it under the beam structure. We designed specific supports for the longboard, which could be placed on the tabletop.


Description of the Interaction
The interaction routine is very similar to that of the Audi demonstrator. The idea is that the cobot measures the contour of a longboard instead of the Audi A1 hood and compares it with a reference contour. Another difference lies in the supports. Both longboard supports contain a dedicated projection surface, in the form of a pill-shaped cavity, printed on cardboard. Once the support is placed on the work surface, an identically shaped icon is projected into this cavity (Figures 12 and 13). The projected icon contains a slider that moves from one side of the printed cavity to the other, naturally guided by its boundary, informing the operator that the system has locked the support to the work surface (Figure 14). When the cobot has completed its measurement task, the zones with form deviations are indicated by a red light (Figure 15b), and the operator marks the zones with physical stickers (Figure 15c).
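The slider's motion inside the cavity can be sketched as a simple parametric animation. The durations and dimensions below are illustrative assumptions, not measurements from the mockup:

```python
def slider_position(t: float, duration: float,
                    x0: float, x1: float, y: float):
    """Position of the projected slider inside the pill-shaped cavity.

    The slider travels linearly along the cavity's long axis, from one
    rounded end (x0) to the other (x1), staying on the centre line (y),
    so it appears guided by the printed boundary.
    """
    u = max(0.0, min(1.0, t / duration))  # clamp progress to [0, 1]
    return (x0 + u * (x1 - x0), y)

# Halfway through a hypothetical 1.5 s animation in an 80 mm cavity:
x, y = slider_position(0.75, 1.5, 0.0, 80.0, 0.0)  # x == 40.0
```

Clamping the progress value keeps the slider resting at the far end once the animation completes, matching the "locked" state.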


Discussion
During the design of this demonstrator, we developed the two coupling themes that we had begun to explore in the Audi demonstrator.

Coupling 1: On-Screen Event and Projected Event
The transition from on-screen to projected event is realized literally through the presence of an operational multi-touch display on the control unit. The coupling between the two events is now a transition between two different media, which reinforces the transformative aspect of the interaction routine and, thus, the expressive power of the coupling. As soon as the icon disappears from the top of the display, it reappears as a projection on the tabletop, continuing the same movement (Figure 12). This process creates the effect of the icon literally crawling out of the display and into the real world. The sense of magic and surprise [3] that this process creates is the result of coupling of expression (Figure 9). We want to emphasize that, in our previous work, we stated that this form of aesthetic was only possible in the coupling between physical and digital events. In this demonstrator, both coupled events are digital in nature.

Coupling 2: Physical Event and Projected Event
We further explored Coupling 2 from the Audi demonstrator. The result of this exploration is visible in the placing of the longboard supports on the work surface (Figure 13). The manual placement of the support causes the appearance and movement of a projected element in the support's cardboard cavity, reflecting the status change in the support from unlocked to locked (Figure 14). With respect to the Audi demonstrator, we added coupling of direction to the concept, as the projected element follows the physical contour of the cavity. As a result of this design intervention, something remarkable happens: the support, which is a physical and inanimate object, suddenly has a moving part and seems to be brought to life through the projection. We consider this event to be coupling of expression (Figure 16).


Learnings for the Next Iteration
As the cobot in the Kuka mockup was just a static wooden dummy, its movement capabilities remained underexposed, as did its coupling possibilities with other events. In our final demonstrator, we wanted to include a real, working cobot.

The Kuka Demonstrator
Demonstrator video can be seen in Supplementary Material: Video S4-KukaDemonstrator.

Background
Together with the people from Kuka, we designed and built a workplace around a limited set of cobot tasks, the most important of which was to measure the contour of a longboard deck and compare it to a reference contour. The envisaged workplace would contain a real Kuka LBR iiwa, real force-based contour tracking, and real-time projection of the measurement results onto the longboard itself.

Description of the Demonstrator
The demonstrator features a horizontal bridge approximately 3 m high (Figure 17). Beneath the bridge is a workbench containing two different work zones: a control unit with a display-button setup and a horizontal work surface. On the work surface, a longboard can be positioned and mounted by the operator using two supports. Above it, a Kuka LBR iiwa cobot hangs upside down from the bridge, with its movement envelope covering the entire work surface. On opposite sides of the cobot, two projectors are mounted on the bridge. We designed a special tool, which is mounted on the cobot itself, that allows it to physically touch and track the contour of the longboard, thereby sensing and processing the applied force. The two work zones involve different operator tasks. The zone with the control unit serves to select tasks through menu navigation. The work surface with the cobot and the projection is conceived to perform physical tasks, in cooperation with the cobot. Both work zones are connected via the large OK button below the control unit. This large button is located between the two different work zones and always remains accessible, whether the operator is working at the control unit or the work surface.

Interaction with the Control Unit
In a first phase, the cobot is in sleep mode (Figure 18a). We provided a box, attached to the bridge, into which the cobot can retreat, portraying a clear image of being at rest. The operator walks to the control unit and activates the system by pushing the slider button on the control unit to the left (Figure 18b). On-screen, a black curtain slides away together with the button, and the control unit is activated. As a result, the cobot above the work surface wakes up and moves towards the control unit. It adopts an attentive posture, as it seems to be looking at the display, together with the operator. We call this dialogue mode (Figure 19a). The operator navigates through the menus using a traditional rotary dial and push button interface (Figure 19b). As he/she turns the dial, the menus move horizontally.
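The dial-to-menu mapping can be sketched as two small functions: one translating rotation into lateral movement, one snapping the rotation to a task icon. The gain, step angle, and item count are hypothetical values of our own, not parameters of the demonstrator:

```python
def menu_offset(rotation_deg: float, px_per_deg: float = 2.0) -> float:
    """Rotation of the dial translated into lateral movement of the menu
    strip: clockwise moves the icons one way, counter-clockwise the other
    (the pixels-per-degree gain is an assumption)."""
    return rotation_deg * px_per_deg

def selected_icon(rotation_deg: float, deg_per_item: float = 30.0,
                  n_items: int = 5) -> int:
    """Snap the accumulated rotation to one of the task icons."""
    idx = int(rotation_deg // deg_per_item)
    return max(0, min(n_items - 1, idx))
```

Note that the mapping converts a rotation into a translation; this mismatch is exactly what the later discussion identifies as a weak coupling of direction.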


Transition between Two Work Zones
The operator chooses to perform a contour tracking task, and confirms this selection by pushing the rotary dial (Figure 20a). The cobot then moves away from the control unit towards the work surface. At this point, the on-screen images on the control unit's display move downwards, as if they flow onto the table below. At the same time, a projection is generated on the work surface, showing a sliding image that moves away from the control unit and fills the entire work surface. The cobot appears to "pull" the on-screen image out of the control unit onto the tabletop (Figure 20b). The operator is guided from one work zone towards the other based on the physical movements of the cobot and the movements of on-screen and projected images. The cobot is now looking at the work surface, and is in standby mode. Work instructions are projected onto the work surface (Figure 21).
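The synchronized movement across the two media can be sketched by letting one shared progress value drive both outputs. The frame rate and dimensions below are illustrative assumptions:

```python
def transition_frames(duration_s: float, fps: int,
                      display_h: float, table_len: float):
    """One shared progress value drives both media, so the image seems
    to flow out of the display and spread across the work surface."""
    n = int(duration_s * fps)
    frames = []
    for i in range(n + 1):
        u = i / n  # shared progress, 0..1, keeps both media in lockstep
        frames.append({
            "screen_offset": u * display_h,  # image slides down, off the display
            "table_offset": u * table_len,   # projection spreads across the table
        })
    return frames

frames = transition_frames(duration_s=1.0, fps=30,
                           display_h=300.0, table_len=1200.0)
```

Driving both offsets from the same progress value is what produces coupling on time, location, and direction simultaneously; running two independent animations would break the "pulling" illusion.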


Manual Mounting of the Supports and the Longboard
The operator follows the instructions on the work surface and mounts the supports (Figure 22a). After each support is mounted, the operator pushes the OK button (Figure 22b), and a moving icon is projected into a cavity on each support, indicating that the system has locked the support (Figure 23). The operator then places the longboard on the supports. This action is detected by the system, which responds with a projection on the longboard itself (Figure 24). The operator continues to follow the work instructions and manually bolts the longboard in place. When this task is completed, he/she pushes the OK button. The cobot approaches the longboard.

Force-Based Contour Tracking
The operator pushes the OK button, and the cobot begins to track the contour of the longboard. The tracked path is projected onto the longboard in real time (Figure 25a). The cobot is now in scan mode. In a first pass, the cobot recognizes the longboard contour and uses it as a reference for other longboards. When a longboard with a deviating contour is checked by the system (in the video, the deviation is added by the operator), the deviation is detected and marked with a red projection image on the longboard itself (Figure 25b). Correspondingly, the deviation value is projected on the work surface. After tracking, the operator pushes the OK button.
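The comparison between a measured and a reference contour can be sketched as a per-sample tolerance check. The sample values and tolerance are illustrative assumptions, not data from the demonstrator:

```python
def flag_deviations(measured, reference, tol_mm=2.0):
    """Compare a measured contour with the stored reference, sample by
    sample, and return the indices where the deviation exceeds the
    tolerance -- the zones that would be marked in red on the longboard."""
    return [i for i, (m, r) in enumerate(zip(measured, reference))
            if abs(m - r) > tol_mm]

# Heights (mm) sampled along the contour; sample 3 deviates by 3.5 mm:
reference = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
measured  = [0.1, 1.0, 2.0, 6.5, 2.1, 1.0, 0.0]
zones = flag_deviations(measured, reference)  # [3]
```

Each flagged index would correspond to a red projection zone on the longboard and a deviation value shown on the work surface.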

Clearing the Work Surface and Shutting down the System
The operator removes the longboard and the supports, following the instructions on the work surface. When the work surface is empty, the OK button is pushed, and the cobot guides the projected images into the control unit with a physical movement (Figure 26a). The control unit is reactivated, and the cobot is back in dialogue mode. To deactivate the system, the operator moves the slider on top of the control unit to the right, and the on-screen image reacts accordingly (Figure 26b). The cobot returns to sleep mode (Figure 18a).

Discussion
The possibility to conceive and craft a semi-functional, real-scale demonstrator with a fully functioning cobot gave us the chance to further explore Coupling 1, which had already appeared in the two earlier demonstrators. Coupling 2 was refined, though its concept remained the same. In addition, we added a display-button setup as control unit, with specific couplings.


Coupling 1: On-Screen, Projected and Cobot Event
We considered the movements and postures of the cobot not only as a functional given, but also as a crucial part of the workplace's form semantics and affordances. We realized this intention by relying on the concept of Mode of Use Reflected in the Physical State (MURPS) [30]. This means that we designed the different postures of the cobot in such a way that they non-verbally express the state of the workplace's operating system. In this respect, we distinguish between sleep mode (Figure 18a), dialogue mode (Figure 19a), standby mode (Figure 21), and scan mode (Figure 25a). Moreover, the movements of the cobot are coupled to projected events. This coupling is most prominent in scan mode (Figure 25a), where the cobot tracks the contour of the longboard, and the result of the measurement is projected onto the longboard's surface.
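The MURPS-style mapping from system state to cobot posture can be sketched as a small state machine. The event names and posture descriptions are hypothetical labels of our own, chosen to mirror the interaction routine:

```python
# Each system state is reflected in a cobot posture, so the posture
# non-verbally expresses what the operating system is doing.
POSTURES = {
    "sleep":    "retracted into the box on the bridge",
    "dialogue": "facing the control-unit display",
    "standby":  "facing the work surface",
    "scan":     "tracking the longboard contour with its tool",
}

# Operator events drive the transitions between modes.
TRANSITIONS = {
    ("sleep", "slide_left"):     "dialogue",  # operator activates the system
    ("dialogue", "select_task"): "standby",   # cobot moves to the work surface
    ("standby", "ok"):           "scan",      # contour tracking starts
    ("scan", "done"):            "standby",
    ("standby", "clear"):        "dialogue",  # work surface emptied
    ("dialogue", "slide_right"): "sleep",     # operator deactivates the system
}

def step(mode: str, event: str) -> str:
    """Advance the mode; unknown events leave the posture unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```

Because every state has exactly one posture, an operator can read the system's mode from the cobot's body alone, without consulting the display.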
We further explored Coupling 1 by enriching it with cobot movements. This approach led to a new insight. When the operator ends the dialogue mode by pushing the rotary dial (Figure 20a), three events are coupled: a cobot event, an on-screen event, and a projected event ( Figure 27). The three events occur at the same time and at the same location, and share the same speed and direction. In addition, the couplings create the impression that the cobot is actually pulling the image out of the display and spreading it onto the tabletop (Figure 20b). A similar coupling occurs at the end of the contour tracking task. The projected images slide back from the work surface in the control unit (Figure 26a), seemingly being pushed by the cobot. This effect, where the cobot appears to pull images out of the display onto the tabletop and back, surpasses the three couplings of meaning and reinforces the coupling of expression. The cobot becomes an expressive medium that grasps intangible elements and moves them from one physical place to another. We considered the movements and postures of the cobot not only as a functional given, but also as a crucial part of the workplace's form semantics and affordances. We realized this intention by relying on the concept op Mode of Use Reflected in the Physical State (MURPS) [30]. This means that we designed the different postures of the cobot in such a way that they non-verbally express the state of the workplace's operating system. In this respect, we distinguish between sleep mode (Figure 18a), dialogue mode ( Figure  19a), standby mode (Figure 21), and scan mode (Figure 25a). Moreover, the movements of the cobot are coupled to projected events. This coupling is most prominent in scan mode (Figure 25a), where the cobot tracks the contour of the longboard, and the result of the measurement is projected onto the longboard's surface.
We further explored Coupling 1 by enriching it with cobot movements. This approach led to a new insight. When the operator ends the dialogue mode by pushing the rotary dial (Figure 20a), three events are coupled: a cobot event, an on-screen event, and a projected event (Figure 27). The three events occur at the same time and at the same location, and share the same speed and direction. In addition, the couplings create the impression that the cobot is actually pulling the image out of the display and spreading it onto the tabletop (Figure 20b). A similar coupling occurs at the end of the contour tracking task. The projected images slide back from the work surface into the control unit (Figure 26a), seemingly being pushed by the cobot. This effect, where the cobot appears to pull images out of the display onto the tabletop and back, surpasses the three couplings of meaning and reinforces the coupling of expression. The cobot becomes an expressive medium that grasps intangible elements and moves them from one physical place to another.

Additional Couplings of the Control Unit: Physical Event and On-Screen Event
What coupling of expression entails is clearly illustrated by the difference between two interaction routines on the control unit of the Kuka demonstrator. The first routine is the turning of the rotary dial, which causes the different task icons on the display to move sideways, either to the left or to the right (Figure 19b). The rotary dial is positioned near the display with the task icons, its rotation is immediately translated into a lateral movement of the icons, and the icons move left or right according to its direction of rotation. As such, there is coupling on the aspects of time, location, and direction (Figure 28a). However, the interaction feels rather plain and barely expressive. This outcome is because the coupling of direction is not very strong: the dial rotates, while the icons translate. The fact that a rotary dial allows this degree of arbitrariness makes it a popular control element in many commercial GUIs, but it also makes its interaction standardized and generic. This is very different from the second routine, which involves pushing the slider button at the top of the display, causing a black curtain to slide across the display, activating or deactivating it (Figures 18b and 26b). Again, there is coupling on the aspects of time, location, and direction. In this case, however, the coupling of direction is strong. Both the slider button and the on-screen curtain translate in the same direction, over almost the same distance. For the user, it feels as if the slider button is directly attached to the on-screen curtain, as if he/she were dragging a physical curtain across the display. The specific design and coupling of the two events, the pushing of the slider button and the movement of the on-screen curtain, results in coupling of expression (Figure 28b).
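The contrast between the two routines can be sketched as two control-to-display mappings: the slider maps translation to translation one-to-one, while the dial arbitrarily re-maps rotation to translation through a free gain. All names, ranges, and gains below are illustrative assumptions; the demonstrator's actual code is not published here.

```python
# Illustrative sketch of the two mappings discussed above.
# Ranges and gains are invented for illustration.

DISPLAY_WIDTH_PX = 800

def curtain_position_px(slider_mm: float, slider_travel_mm: float = 100.0) -> float:
    """Strong coupling of direction: the on-screen curtain translates in the
    same direction, and over (almost) the same distance, as the physical slider."""
    return (slider_mm / slider_travel_mm) * DISPLAY_WIDTH_PX

def icon_offset_px(dial_deg: float, px_per_deg: float = 2.0) -> float:
    """Weak coupling of direction: a rotation is re-mapped to a lateral
    translation of the task icons; the gain px_per_deg is an arbitrary choice."""
    return dial_deg * px_per_deg
```

In the first mapping, the geometry of the input and the output coincide, which is what makes the curtain feel physically attached to the slider; in the second, the gain could be any value without the interaction feeling different, which is precisely the arbitrariness noted above.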

Discussion
In this section, we look back at the delivered work. We discuss the couplings that emerged across the three demonstrators, and reflect on how they have refined our understanding of coupling.

A Wealth of Events
In our previous work, we assumed two types of events, physical events and digital events, with coupling being the relationship between them. During the conception and creation of the three demonstrators, it became clear that this division was not sufficiently fine-grained. We encountered two types of digital events: on-screen events and projected events. It is to be expected that, as different forms of AR (tablet-based AR, SAR, head-mounted display-based AR) are adopted in workplaces, the number of digital event types will increase. In this scenario, we are thinking of holographic events, sound events, etc. Physical events can also be categorized in more detail. During our research, we already came across the cobot event; however, a further classification of physical events suggests itself. Candidate classes include user movements, physical events performed with control elements, actuated events, etc. Digital events can be coupled to physical events, but also to other digital events. This fact clearly surfaced in our RtD process: in the three demonstrators, we defined couplings between on-screen and projected events, which were both digital in nature. Similarly, couplings between physical events are already commonplace. Any kitchen appliance that connects the pushing of a button to the activation of an electric motor couples two physical events.
In addition, coupling is not necessarily limited to two events. In the Kuka demonstrator, we coupled three events instead of two: an on-screen event, a projected event, and a cobot event. Four or five events may also be coupled together. Whether these events are physical or digital in nature is less important.
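The generalization in the two paragraphs above, namely that a coupling relates an arbitrary number of events of arbitrary kinds and aligns them on aspects of meaning, can be made concrete as a small data structure. All class and field names below are our own illustrative choices, not an API from the demonstrators.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a coupling groups two or more events, physical or
# digital, and records the aspects of meaning on which they align.

@dataclass
class Event:
    name: str
    kind: str  # e.g. "physical", "on-screen", "projected", "cobot"

@dataclass
class Coupling:
    events: list
    # Subset of {"time", "location", "direction"}.
    shared_aspects: set = field(default_factory=set)

    def is_meaningful(self) -> bool:
        """At least two events sharing at least one aspect of meaning."""
        return len(self.events) >= 2 and bool(self.shared_aspects)

# The three-event coupling from the Kuka demonstrator:
kuka_coupling = Coupling(
    events=[
        Event("cobot sweep", "cobot"),
        Event("image leaves display", "on-screen"),
        Event("image spreads on tabletop", "projected"),
    ],
    shared_aspects={"time", "location", "direction"},
)
```

Note that nothing in the structure privileges physical over digital events, or limits a coupling to a pair, which mirrors the argument that the physical/digital dichotomy is becoming less central.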
As digital technology evolves, and as digital phenomena break free from displays and enter the physical world in projected or holographic form, the dichotomy between the digital and the physical, which has been the blueprint for embodied interaction to date, becomes less dominant [31]. We want to open the door to an understanding of coupling that transcends the traditional bridging of the physical and the digital. Designers of digital products and systems need to be primarily concerned with the intuitive and engaging coupling of different events, regardless of their nature, rather than making interaction with digital phenomena more physical.


From Meaning to Expression
We defined four aspects of coupling. Three of them came directly from the Interaction Frogger Framework: time, location, and direction. We called them couplings of meaning. In the discussion about the Audi demonstrator, we stated that they are relatively easy for designers to understand and employ, as they only require the organization and alignment of aspects that are already present in each event separately.
The fourth aspect of coupling, expression, is more difficult to grasp, because it is not already present in each individual event. Instead, it emerges as a result of the other couplings and the design of the workplace in general. Coupling of expression stems from the aesthetics of coupling, which we discussed in Section 2.3. However, the aesthetics of coupling, as well as the MCRpd interaction model, relied on the dichotomy between the physical and the digital realms. If this dichotomy fades into the background, then how can we define coupling of expression and its relation to coupling of meaning?
Coupling of expression emerges as a consequence of, and as a contrast to, coupling of meaning [32]. In order to set the stage on which coupling of expression can perform, at least some realization of couplings of meaning is necessary. Couplings of meaning form, as it were, a reference, i.e., a background against which coupling of expression is perceived and felt by the operator. This stage or background has an orderly, logical, and natural character, as the various coupled events resonate with familiar, often Newtonian laws and common sense knowledge about the physical world [33]. However, when coupling of expression appears on the stage, this familiar character is challenged and transcended, as the coupling appears to violate the established laws. The resulting user experience enters the realm of magic, beauty, and surprise, and appeals to the operator's emotions, rather than to his/her reasoning. We give three examples:

•	In the Kuka mockup, the movement of the on-screen longboard icon is adopted by the projected longboard icon; thus, it appears to leave the display and slide across the tabletop (Figure 12). Both icons appear to be one, an effect created via the couplings of time, location, and direction. The expressive appeal of the interaction routine lies in the fact that the operator clearly realizes that the two icons are not the same. They have inherently different aspects. In the Kuka demonstrator, this movement is reinforced through the cobot movements. The effect is that the cobot pulls the on-screen content out of the display and spreads it across the tabletop (Figure 20b).

•	In the Kuka mockup and the Kuka demonstrator, the pill-shaped cavity of the support is filled with a projected slider (Figures 14 and 23). Couplings of time, location, and direction ensure that the slider carefully follows the contours of the physical cavity, as if it were a real slider. The expression comes into play when the operator realizes that the slider is not real.

•	In the Kuka demonstrator, pushing the slider button on the control unit causes the on-screen curtain to slide simultaneously in the same direction at almost the same pace (Figures 18b and 26b). For the operator, it feels as if the curtain is physically attached to the slider, although he/she clearly sees that the curtain is only an on-screen representation, rather than a real one.
Why should designers bother to create couplings of expression? Apparently, unlike couplings of meaning, coupling of expression does not contribute to the ease of use or intuitive readability of an augmented workplace. Therefore, what is the point? We believe that coupling of expression enhances the quality perception of the workplace. It creates an interaction that is aesthetically pleasing, harmonious, and engaging for the operator, and as such heightens his/her appreciation of the workplace [34]. In other words, coupling of expression serves the operator's emotional well-being.
We see a parallel between our concept of coupling and Hassenzahl's theory of User Experience, which is specifically aimed at digital products and systems [35]. Couplings of meaning generate what Hassenzahl calls pragmatic quality. They concern the utility and usability of a digital product, and describe how well the product fulfils a particular function or completes a particular task. Coupling of expression generates hedonic quality [36], and determines how the operator feels when he/she performs a task in the workplace. The most radical aspect of Hassenzahl's theory, in our view, is the relationship between pragmatic and hedonic quality. Pragmatic quality, according to Hassenzahl, is never a goal in itself. It should be considered as an enabler of hedonic quality. The fact that a digital product fulfils the task for which it was designed is taken for granted by the user, and does not contribute to his/her well-being. A product that aims to provide pleasure and engagement should have hedonic quality, and its pragmatic quality is subordinate to this. This relationship between pragmatic and hedonic quality corresponds to how we position couplings of meaning in relation to coupling of expression. The former couplings are merely enablers of the latter type. The ultimate goal of the designer should be to design an augmented workplace in which the operator feels good and thrives. Coupling of expression is directly related to this goal. Couplings of meaning allow coupling of expression to flourish.

Conclusions
We started our investigation by formulating the following research question: what coupling possibilities emerge when a strong-specific workplace is enriched with SAR? Our goal was to design a workplace as one holistic, integrated entity, combining physical and SAR components. During the conception and crafting of three demonstrators, several themes within the embodied interaction research agenda were addressed and explored.
Firstly, we believe that the traditional dichotomy between the physical and the digital is becoming less prominent as a driver in the design of digital products and systems. For years, this dichotomy prevailed in tangible interaction; the embodiment framework we briefly described in Section 2.1; and our own research on the aesthetics of coupling (Section 2.3). With the development of digital technology, the number of digital events in people's daily lives is increasing dramatically. Moreover, digital events are abandoning the traditional, detached display. Instead, they are taking on new forms, such as 2D projections or 3D holograms, which are better integrated into the physical world. As a result, the traditional distinction between the digital and the physical is fading and becoming less important. Together with this evolution, the concept of coupling is also changing. Coupling, which we previously defined as the connection between physical and digital events, can also occur between two digital events, for example between graphics on a display and projected images on a real object. Moreover, coupling should not be reduced to the connection between two events. Our research shows several action routines where coupling occurs between three events, and it is likely that the number of coupled events can be increased.
Secondly, we set the stage for a new taxonomy of couplings. As the number of events in digital products and systems increases, the design of the coupling between these events, be they digital or physical, product- or user-related, becomes more important. We propose to divide couplings into two groups by making the distinction between coupling of time, location, and direction on one hand, and coupling of expression on the other. The first three couplings, which we called couplings of meaning, are related to ease of use and pragmatic usability, while coupling of expression resides in the domain of psychological wellbeing. By writing this paper, we want to stress the importance of coupling in the practice of industrial and interaction design. It is our aim to establish the concept of coupling as a full-blown design theory, just like 2D and 3D composition, color theory, and affordance theory.
Lastly, we believe that the speculative view we formulated in Section 1.2, i.e., the strong-specific workplace, opens up new possibilities for the design of spatially augmented workplaces. Throughout the three demonstrators, we designed a workplace that was fully tailored to a limited number of tasks. This allowed us to design the projected images in conjunction with the physical workplace itself. The potential benefits of this approach are best reflected in the design of the supports. In all three demonstrators, the design of both the supports and the projected images on them were created simultaneously by the same designer. As such, the physical shape of these supports and the projection onto them were allowed to influence each other, to the point where both were designed as a single system. As a result, the supports have multiple physical reference points that channel projected images. These physical reference points are persistent, meaning that they are always present and provide the operator with information about the projected images on the support. Even when there is no projected image present, the operator knows where on the support it will appear. The idea of imposing physical restrictions on projected content may seem counterintuitive, given the innate freedom of projection. However, we advocate this approach, because we believe that a workplace that physically channels its projected content makes that content more structured and predictable for the operator working within it. This approach might reduce the operator's chance of missing a projected message and contribute to his/her sense of control over the workplace.

Future Research
The work described in this paper opens the door to further research. In the demonstrators we built for Audi and Kuka, we encountered different types of events: physical, on-screen, projected, and cobot events. With the advent of head-mounted display-based AR, holographic objects emerge as a new event type. As holographic objects are not tied to a display or projection surface, their coupling possibilities with other event types offer a great deal of design freedom, and form a new and promising research space.
The dual approach to coupling that we established in this paper can be further elaborated. Where does meaning end, and where does expression begin? How can coupling of meaning support coupling of expression and the other way round?
Finally, this research is situated in the field of industrial workplaces. Further research is needed to show that the result of this work, the coupling framework, is relevant to a wider application area: digital products and systems in general.
Given the exploratory nature of this future research, we believe that Research through Design is a valuable method to tackle this research gap. We hope that this paper will inspire design researchers and design students to adopt this method and put it into practice.