1 Introduction

Teams have become the strategy of choice to cope with complex and difficult tasks (Salas et al. 2008), requiring several abilities for effective teamwork (Salas et al. 2005; Cooke et al. 2013). Among these, the ability to establish and maintain common intent through communication, socialisation, internalisation, and externalisation is at the core of coordinated actions (Pigeau and McCann 2006). Entering an era where unmanned aircraft are envisioned to act as human-like wingmen, enabling human and synthetic teammates to establish and maintain common intent is crucial for fluent manned–unmanned teaming (MUM-T); however, such an ability poses numerous challenges. For instance, situations with inherently high stakes may unfold rapidly, requiring coordination with limited or no communication. In such situations, synthetic wingmen should recognise or be aware of their human partners’ intent in order to act in congruence with expectations. To this end, there is a need for adequate models of fighter pilot intent to enable synthetic wingmen to reason about what their human partners are doing, why they are doing that, and how they will proceed (Albrecht and Stone 2018; Sukthankar et al. 2014). Such an ability can enable synthetic wingmen to reactively detect and resolve conflicting intent, adaptively adjust the need for command and control, proactively prepare for possible events (Geddes and Lizza 2001), reduce computational burden, minimise miscommunication and uncertainty (Howard and Cambria 2013), and foster partnership (Suck and Fortmann 2016).

One stepping stone towards such abilities is to make fighter pilots’ intent explainable by investigating why intent is formed, what it comprises, and how it may be enacted. To this end, modelling intent from a human-centric perspective becomes crucial since this may provide insights regarding required abilities and information to make intent sufficiently explainable. Unfortunately, there is a lack of modelling approaches that can support the description and analysis of intent as well as models of intent that can inform the design and development of future systems. This study aims to bridge these gaps by exploring an approach to modelling intent from a human-centric perspective to identify design requirements enabling synthetic wingmen to understand their human partner in future MUM-T situations.

2 Manned–unmanned teaming

Teams have traditionally been characterised as two or more members with distinct roles within a defined team boundary, interacting interdependently with each other and the environment towards shared goals (Benishek and Lazzara 2019). With advances in artificial intelligence, machine learning, and cognitive modelling, hybrid (or mixed) teaming vernaculars, such as human–robot collaboration (Bauer et al. 2008) and human–machine cooperation (Hoc 2000), transition from merely conceptual to increasingly practical and applicable (O’Neill et al. 2020). As such, there is a possibility for a new interaction paradigm where embedded or embodied synthetic agents are perceived as teammates rather than tools (Rix 2022; Wynne and Lyons 2018).

In aviation, this paradigm shift can be seen on the horizon as unmanned aircraft have become the prime candidate to operate in conjunction with manned aircraft in difficult and dangerous missions (Sadraey 2018), expected to act as human-like wingmen by the year 2040 (Department of Defence 2018; Jordan 2021; Endsley 2015). In such MUM-T, fighter pilots are expected to be responsible for supervisory and general planning functions, while synthetic wingmen are responsible for performing the vast majority of tasks related to target identification, tracking, threat prioritisation, and post-attack assessment. In this way, the conceptualisation of MUM-T envisions that combining the inherent strengths of manned and unmanned aircraft provides opportunities for operational synergies.

Although MUM-T, as an instantiation of hybrid teaming, provides new operational opportunities, it also shares interconnected challenges with other hybrid teaming contexts. For instance, two literature reviews related to hybrid teaming identified shared situation awareness, shared mental models, trust, team coordination, and workload as persistent challenges (O’Neill et al. 2020; Baltrusch et al. 2022). Addressing these challenges, researchers have suggested bi-directional (or teamwork) transparency (Chen et al. 2018; Holder et al. 2021), bi-directional communication (Marathe et al. 2018; Schneider and Miller 2018), and various control strategies (Chen et al. 2014; Roth et al. 2019).

Although these challenges are related in various ways, one common denominator is the notion of intent. For instance, communicating intent and the reasoning behind it has been argued to increase shared situation awareness, engender trust, and reduce workload (Chen et al. 2018; Marathe et al. 2018; Schaefer et al. 2017). That said, communicating intent is not unproblematic. Communication is associated with several costs for maintaining a shared understanding (Clark and Brennan 1991), particularly since humans and synthetic agents do not share a language (Schelble et al. 2020). Furthermore, MUM-T may pose additional challenges not prevalent in other hybrid teaming contexts. As a combination of manned and unmanned aircraft systems, command, control, and communication rely on data link availability and the autonomy of the synthetic wingman (Stansbury et al. 2009), both of which are problematic in themselves. Data link availability, for instance, is affected by numerous causal and intentional factors (Meyer and Schulte 2020; Okcu 2016). Indeed, there will always be situations in which unavailable data links are unavoidable (Neale and Schultz 2007), thus requiring synthetic wingmen to operate autonomously with potentially lethal capabilities (Warren and Hillas 2020). In such situations, it becomes essential for synthetic wingmen to be able to recognise or be aware of their human partners’ intent as a means to act in congruence with expectations.

3 Making sense of intent

People tend to explain behaviour in mentalistic terms, often relying on a folk-psychological model in which the notion of intent is central (Bratman 1987; Dennett 1987; Malle and Knobe 1997). However, this notion poses two problems: how can people understand others’ intent and predict their actions from such folk-psychological models, and what building blocks do such models comprise? To answer these questions, it is important to differentiate between intent as ascription and intent as description.

3.1 Intent as ascription

Ascribing intent refers to the ability to recognise (Han 2013) or the process of becoming aware of (Heinze 2004) others’ intent, often by attributing mental states to others in order to coordinate in the social world. For instance, people often adopt an intentional stance toward others as a strategy to understand and predict their behaviours (Dennett 1987); this includes robots under certain conditions (Perez-Osorio and Wykowska 2020), particularly when framing interactions as collaborative (Abubshait et al. 2021). When adopting such a strategy, people engage in a mentalising process in which they infer the mental states of others, where perspective taking, knowledge of the world, and anticipation of the future are important contributors (Frith and Frith 2006) to developing a Theory of Mind representing a system of mental state inferences about others (Premack and Woodruff 1978). Recognising that the fighter pilot is not alone in MUM-T, synthetic wingmen ought to have the same ability for more natural interaction (Scassellati 2002), for instance, by detecting and resolving conflicting intent (Geddes and Lizza 2001; Vanderhaegen 2021). To this end, it becomes important to investigate the aspects and elements of the Theory of Mind, or mental model (Tabrez et al. 2020), that make intent sufficiently explainable.

3.2 Intent as description

Describing intent refers to the process of analysing the formation, representation, and execution of intent (Heinze 2004), which, from an ascription point of view, should explain why intent is formed, what it comprises, and how it is (to be) enacted. To this end, it becomes crucial to understand what is supposed to be described and analysed. In this regard, according to Cohen and Levesque (1990), intent is choice with commitment, referring to Bratman’s (1987) influential framework of practical reasoning. In this framework, Bratman (1987) takes a functionalistic approach and describes intent as an underlying structure comprised of deliberate choices characterised by commitment, with the functional roles of coordinating further choices and guiding actions personally and socially. In simpler terms, this idea states that intent comprises a set of action-related commitments, or decisions with a degree of persistence, that can be coordinated personally and socially to make them coherent and actionable.

Although the above emphasises deliberate choices, to explain intent, it is also important to know what types of choices are deliberated. In this regard, although hybrid teaming and artificial intelligence communities often treat intent as synonymous with goals (Lyons et al. 2021; Van-Horenbeke and Peer 2021; Sukthankar et al. 2014), others have argued for a broader view. For instance, Pigeau and McCann (2006) define intent as an aim or purpose with all of its associated connotations. In this sense, intent comprises all necessary information for teams to establish common intent and achieve coordinated action in nominal as well as off-nominal conditions. In this regard, some information to explain intent includes the purpose and objectives of the mission, constraints, plans, and actions (Geddes 1994; Klein 1999; Schneider and Miller 2018). In a similar vein, Schneider et al. (2022) suggested that information related to intent (as a goal) can be decomposed in terms of why, what, and how. For instance, why a specific goal is chosen, what the plan to achieve the goal looks like, and how the plan can be implemented given situated constraints. These aspects of intent resemble the six levels of cognitive control described in the Joint Control Framework (Lundberg and Johansson 2021), which are described and exemplified below (see Fig. 1).

Fig. 1 Levels of cognitive control as described by Lundberg and Johansson (2021)

From an intentional point of view, subjects (e.g., humans) adopt frames (explanatory structures) and elaborate these with available data, information, and knowledge to ascribe meaning to situations or problems (Klein et al. 2007; Minsky 1975). For instance, how fighter pilots understand a mission based on its description and their experience. From the ascribed situation (or problem), a state to be achieved or maintained through effect goals may manifest, providing subjects with both purpose and objectives. Here, effect goals can be further distinguished as either core or instrumental goals (or values), describing which effect goals are most important and the effect goals to accomplish these (Lundberg and Johansson 2015, 2019). For instance, in the context of MUM-T, core goals define the purpose and objectives of the mission (see also Schulte 2002). Values describe considerations in the ascribed situation, such as criteria, measures, priorities, and trade-offs. For instance, how well specific courses of action meet defined mission criteria given the current priorities and trade-offs, or how rules of engagement impose constraints on the mission. Courses of action describe generalised plans and specific procedures; the former are viable within similar situations, whereas the latter are viable within similar contexts. For instance, a mission plan may only be viable in a specific situation, whereas landing procedures can be used in various situations within the same context. Control activities describe how courses of action are implemented, such as the timing of actions under situated constraints. For instance, the fighter pilot may perform various concurrent control activities with different priorities (Anderson et al. 2018). Finally, as subjects engage in control activities, different physical objects and their attributes manifest, providing physical means for actions and interactions.

3.3 Modelling intent

Several artificial intelligence communities focus their research on enabling synthetic agents to reason about human behaviour for more fluent interaction. For instance, intent, plan, and activity recognition form a common trio of research areas developing techniques that enable synthetic agents to reason about human behaviour (Freedman and Zilberstein 2019; Sukthankar et al. 2014; Van-Horenbeke and Peer 2021). In the context of hybrid teaming, these research communities have shown good performance using techniques such as graph embeddings (Reily et al. 2022), recursive Bayesian filtering (Jain and Argall 2020), and naïve utility calculus (Miranda and Garibary 2022). However, the recognition that these communities tend to focus their research on parts of the problem has highlighted the need for a more holistic approach (Van-Horenbeke and Peer 2021; Hiatt et al. 2017); for instance, using machine learning models to recognise particular behaviours and knowledge-based models to compare these behaviours with an ontology of how tasks are typically done (Hiatt et al. 2017). However, the process of designing such knowledge-based models is rarely described in research (Norling 2008). In addition, for the purpose of design, artificial intelligence techniques should be selected only after identifying the required abilities and information.
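To make one of the techniques mentioned above concrete, the following is a minimal sketch of recursive Bayesian filtering over a belief about pilot intent. The intent labels, observed actions, and likelihood values are hypothetical examples chosen for illustration; they are not taken from the cited studies, and a real system would learn or elicit its observation model rather than hard-code it.

```python
# Illustrative sketch: recursive Bayesian filtering over a small,
# hypothetical set of pilot intents. All labels and probabilities
# are invented for illustration.

INTENTS = ["achieve_mission", "uphold_safety", "uphold_security"]

# P(observed action | intent): a hypothetical observation model.
LIKELIHOOD = {
    "approach_area": {
        "achieve_mission": 0.7, "uphold_safety": 0.2, "uphold_security": 0.1,
    },
    "increase_separation": {
        "achieve_mission": 0.1, "uphold_safety": 0.6, "uphold_security": 0.3,
    },
}

def update(belief, action):
    """One recursive Bayesian update of the belief over intents."""
    posterior = {i: belief[i] * LIKELIHOOD[action][i] for i in INTENTS}
    z = sum(posterior.values())  # normalising constant
    return {i: p / z for i, p in posterior.items()}

# Start from a uniform prior and fold in observed actions one at a time.
belief = {i: 1.0 / len(INTENTS) for i in INTENTS}
for action in ["approach_area", "approach_area"]:
    belief = update(belief, action)

print(max(belief, key=belief.get))  # most probable intent so far
```

The recursive structure is the point: each observed action sharpens (or shifts) the belief without reprocessing the whole history, which is why this family of techniques suits online intent recognition.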

Recognising the lack of approaches describing how to design knowledge-based models and identify required information and abilities, human factors methods based on Cognitive Task Analysis (Norling 2012) and Cognitive Work Analysis (Lui and Watson 2002) have been suggested. From a design perspective, Cognitive Work Analysis, particularly Work Domain Analysis (WDA), has also been suggested for modelling intentional systems (Elliott et al. 2000) and identifying intent requirements (Leveson 2000). Such approaches have been argued to answer questions such as ‘what information do humans use when forming intent?’ and ‘what intent do they have?’ when specifying requirements for system design and development. More recently, Vanderhaegen (2021) suggested a heuristic-based method for discovering intent conflicts with respect to competency, availability, and possibility to act.

4 The present study

The overarching purpose of the current study is to support the design of future combat aircraft systems, aiming to explore WDA as an approach to modelling fighter pilots’ situated intent in a MUM-T context. From this purpose and aim, two research questions are addressed: (1) ‘What was learned from using WDA as a modelling approach?’; and (2) ‘What requirements were identified to enable synthetic wingmen to reason about their human partners’ intent?’ These research questions address methodological and empirical considerations that can support the enablement of systems to reason about human behaviour, particularly intent in the context of MUM-T. More specifically, the first research question addresses the practicality and applicability of using WDA to describe and analyse intent, whereas the second research question seeks to identify the information and abilities necessary to reason about fighter pilot intent.

The current study follows the guidelines described by Naikar (2013). Although WDA typically represents the target system in terms of five levels (Purpose, Value and priority measures, Purpose-related functions, Object-related processes, and Physical objects), the six levels of cognitive control were used in this study (see Fig. 1). This methodological choice was motivated by the theoretical building blocks that make up the levels of cognitive control described in the Joint Control Framework (Lundberg and Johansson 2021) and the assumption that intent can best be explained in terms of why, what, and how. Thus, instead of representing a socio-technical system, the model represents a subject’s intent space with all of its situated choices that can be considered and committed through means-ends reasoning.
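As a minimal sketch of what such an intent space might look like when represented as a data structure, the following organises elements of intent by the six levels of cognitive control. The class, level keys, and example element names are our illustration under the assumptions above, not an artefact of the study.

```python
# Illustrative sketch: an intent space keyed by the six levels of
# cognitive control from the Joint Control Framework. Level names
# follow the study's terminology; example elements are hypothetical.

from dataclasses import dataclass, field

LEVELS = ["frames", "effects", "values", "generic", "implementations", "physical"]

@dataclass
class IntentSpace:
    """Situated choices a subject can consider and commit to, per level."""
    elements: dict = field(default_factory=lambda: {lvl: [] for lvl in LEVELS})

    def add(self, level: str, element: str) -> None:
        """Record an element of intent at a given level of cognitive control."""
        if level not in LEVELS:
            raise ValueError(f"unknown level of cognitive control: {level}")
        self.elements[level].append(element)

# Populate with a few elements from the reference scenario.
space = IntentSpace()
space.add("frames", "Reconnaissance mission")
space.add("effects", "Achieve mission")
space.add("effects", "Uphold safety")
```

Structuring the space this way mirrors the methodological choice described above: each level holds choices that can be linked through means-ends reasoning rather than the five levels of a conventional WDA abstraction hierarchy.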

5 Method

When modelling intent from a human-centric perspective, the models must be populated with knowledge in which intent is implicated. In the current study, this was done in three phases: knowledge acquisition, knowledge encoding, and knowledge representation.

5.1 Knowledge acquisition

Knowledge acquisition entails the process of obtaining domain-specific knowledge in which intent is implicated. Typically, WDA includes document analysis, interviews, or observations (Naikar 2013). The current study used literature review and interview methods to obtain domain-specific knowledge. These are described in more detail in the following subsections.

5.1.1 Literature review

Reading and analysing text is often essential for understanding the domain (Hoffman et al. 1995) and can be useful for designing preliminary models (Naikar 2013). In this study, an integrative literature review was conducted. Such reviews are useful for critiquing and synthesising research, often resulting in initial or preliminary conceptualisations and theoretical models (Snyder 2019; Whittemore and Knafl 2005). In the following subsections, the literature search and selection phases are described (see Sect. 5.2 Knowledge encoding for analysis).

5.1.1.1 Literature search

Documents related to the target system describing the purpose and functions of a system are essential sources of information (Naikar 2013). One such document type is work domain models describing the target system, which have been argued to share similarities in content and structure (Burns et al. 2004) and to be helpful for comparing complex socio-technical systems (St-Maurice and Burns 2018). For these reasons, simple searches related to work domain models and unmanned aircraft were conducted in the Web of Science and Scopus databases. Search queries were formed by combining words according to “[unmanned, uninhabited] AND [aerial, aircraft]” AND “[abstraction hierarchy, work domain analysis]”. Owing to the few results, the same search queries were used in Google Scholar.

5.1.1.2 Literature selection

Because integrative literature reviews allow for various types of texts (Snyder 2019; Whittemore and Knafl 2005), research papers and reports were accepted. From the search process, fourteen results met the criteria of describing content related to unmanned aircraft systems. These were further evaluated (e.g., purpose and level of analysis), resulting in a sample of eight documents being included for further analysis. Table 1 describes the motivation for including each reference.

Table 1 Selected papers in the literature review

5.1.2 Subject matter expert interviews

Interviews are valuable when modelling intent from a human-centric perspective, particularly since interviewees often express themselves in mentalistic terms, making encoding and representation easier (Norling 2012). For instance, when subjects being modelled are asked to think about a problem, they tend to explain their actions in terms of their intent, which in turn is explained in terms of their beliefs and desires. In addition, interviews are often suitable for refining preliminary models (Naikar 2013). For these reasons, seven semi-structured subject matter expert interviews were conducted in the context of a reference scenario, focusing on transfer of control and link loss situations (see Sect. 5.4 Reference scenario).

5.1.2.1 Participants

Subject matter experts are valuable sources of information when conducting WDA, particularly when consulting sources with different backgrounds that can provide information from different perspectives (Naikar 2013). In this study, seven Swedish subject matter experts (P1–7) participated, including four experienced fighter pilots (P1–4), one experienced GCS operator (P5), and two specialists with expertise in unmanned aircraft operations (P6,7).

5.1.2.2 Procedure

Seven one-hour semi-structured interview sessions were conducted, mixing the scenario story with basic probes from the Critical Decision Method (O’Hare et al. 1998) and Applied Cognitive Task Analysis (Militello and Hutton 1998) to elicit and capture the participants’ reasoning and intent. To further support this process, low-fidelity simulation was used by drawing and moving objects on the surface of large paper sheets. Throughout the interviews, the participants enacted the role of the fighter pilot, while the interviewer, in a sense, acted as a roleplaying leader, progressing the story through the scenario events. The data were collected as scribbles, which, together with the paper sheet drawings, served as reminders for exhaustive summaries and further analyses (see Sect. 5.2 Knowledge encoding).

5.2 Knowledge encoding

Knowledge encoding entails the process of exploring and discovering as well as structuring and organising data and knowledge in which intent is implicated. In this study, a thematic analysis was conducted, motivated by its flexibility (Braun and Clarke 2006). In particular, a theory-driven thematic analysis was conducted, in which the six levels of cognitive control described in the Joint Control Framework (Lundberg and Johansson 2021) guided the coding and mapping of content in the work domain models and interview summaries.

The codes were used to identify emerging themes. The identified themes were described and revised throughout the process to guide the identification and categorisation of content. Although these two analyses were conducted per data source, the identified themes were used to inform and complement each other. Figure 2 exemplifies how content from the described work domain models (#1–2) and interview summaries (#3–4) was coded and mapped. As illustrated in the figure, both explicit and implicit intent were coded and mapped from the examples.

Fig. 2 Examples of how content from work domain models (#1–2) and interview summaries (#3–5) was coded and mapped to the six levels of cognitive control

5.3 Knowledge representation

Knowledge representation entails the process of describing and analysing knowledge in which intent is implicated. In other words, the design and analysis of intent models. In the current study, three models were designed, each representing choices that fighter pilots can consider and commit themselves to in the different target situations (i.e., reconnaissance mission, transfer of control, link loss). The first model was designed by mapping identified themes (i.e., elements of intent) to the appropriate level of cognitive control (i.e., frames, effects, values, generic, implementations, and physical) independent of the target situation. Using the first model as a basis, the second and third models were designed and analysed by mapping choices considered and committed to the appropriate situation (i.e., transfer of control or link loss) and level of cognitive control. Thus, in these cases, the models are solely based on interviews representing fighter pilot intent structures that emerged through means-ends reasoning in sub-situations within the context of the reconnaissance mission.

5.4 Reference scenario

Scenarios are often presented as stories of hypothetical futures and can be useful in identifying requirements for transitioning from here to there (Bishop et al. 2007; Börjeson et al. 2006). In addition, scenarios can be used as a strategy to confirm that the analyst is on the right track by mapping actors’ patterns of actions to the designed work domain model (Burns et al. 2001), particularly when using challenging situations to uncover the constraints that shape decisions (Naikar 2013).

In this study, a scenario with two use cases was iteratively designed in collaboration with subject matter experts. The scenario centred around a future reconnaissance mission involving MUM-T in a national state between peace and war, which was expected to be sufficiently complex as it involved unknown entities and potential threats (Theissing and Schulte 2013). Because data link availability is an Achilles heel in MUM-T, it was decided to include a transfer of control and a link loss event, representing one anticipated and one unanticipated situation. These two use cases are the target situations investigated in this study. The context of the scenario was described as occurring around 2040, after sightings of potentially hostile sea craft outside the national territory. Based on the information from these sightings, a reconnaissance mission involving MUM-T was to be executed. The purpose of the mission was described as gathering intelligence, with the objectives being to find, identify, and report potential hostile threats in a defined area of interest. Furthermore, generalised flight plans were described (see Fig. 3).

Fig. 3 Generalised flight plans used as part of the reference scenario

The scenario centred around a story played out as a sequence of events (see Fig. 4), in which the fighter pilot and the synthetic wingman take off from separate locations. While the fighter pilot (a) approaches a rendezvous point, the synthetic wingman (b) is expected to be already loitering near this position while being controlled by a Ground Control Station (GCS) operator. To continue the mission, (c) a transfer of control was planned, in which the GCS operator would hand over control of the synthetic wingman to the fighter pilot. After receiving control of the synthetic wingman, the fighter pilot and synthetic wingman enter a new team configuration, (d) tether into a pairwise formation, and (e) approach the area of interest. During the reconnaissance of the area of interest (f), the fighter pilot would perform a combat area patrol and lead the synthetic wingman, whereas the synthetic wingman would enter the area of interest to find, identify, and report potential sea craft. During the reconnaissance, (g) an unexpected link loss event occurs, leaving the team without any means to share information or communicate.

Fig. 4 Overview of events in the scenario and use cases

5.4.1 Transfer of control

Transfer of control (handover) can be understood as a process in which authorised subjects (i.e., controllers) re-allocate the management of the unmanned aircraft; it has been associated with asynchrony, mode error, and miscommunication (Department of Defence 2014, 2018; Hobbs and Lyall 2016). As such, it includes both technical and human factors challenges; for instance, data links between authorised control stations must be established and maintained under various conditions throughout the process, necessitating interoperability as well as operation within the required performance boundaries (Okcu 2016). Furthermore, operators (e.g., fighter pilots and GCS operators) must be able to recognise discrepancies in expectations in order to coordinate the process fluently and effectively.

5.4.2 Link loss

Managing data link availability is a central issue for controllability and information acquisition when working with unmanned aircraft (Meyer and Schulte 2020; Okcu 2016). For instance, data links are susceptible to technical limitations (distance, line-of-sight, weather, power, and failures), hostile interference (jamming), and tactical considerations (radio silence). Consequently, considering the numerous aspects that can affect data links, it is not reasonable to immediately execute a contingency plan in all situations as this would negatively affect mission efficiency and effectiveness.
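The reasoning above, that an immediate contingency response is not always appropriate, can be sketched as a simple decision rule. This is a hypothetical illustration only: the grace period, the context flag, and the response labels are invented for the sketch and do not reflect any doctrine or behaviour described in the study.

```python
# Hypothetical sketch: a link-loss handler that delays contingency
# execution instead of triggering it immediately, weighing elapsed
# outage time against mission context. Thresholds and context flags
# are invented for illustration.

def link_loss_response(outage_s: float, radio_silence_expected: bool,
                       grace_period_s: float = 60.0) -> str:
    """Decide how a synthetic wingman might respond to a data-link outage."""
    if radio_silence_expected:
        # Tactical radio silence: the outage is intentional, so the
        # current course of action remains valid.
        return "continue_mission"
    if outage_s < grace_period_s:
        # Likely a transient causal factor (terrain masking, range):
        # try to re-establish the link before escalating.
        return "attempt_reconnect"
    # Prolonged, unexplained outage: fall back to the contingency plan.
    return "execute_contingency"
```

Even this toy rule shows why intent matters for the decision: distinguishing a tactical radio silence from a technical failure requires the synthetic wingman to reason about why the link is down, not merely that it is down.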

6 Results

As part of the modelling process, this section describes the results from the encoding and representation phases. Beginning with the encoding phase, themes from the literature review and interviews are presented as elements of intent. This is followed by presenting the designed and analysed models.

6.1 Elements of intent

Divided by the six levels of cognitive control, the following describes and exemplifies the themes, or elements of intent, that emerged from the thematic analysis of the literature review and interviews. The following elements of intent are represented and exemplified in bold and italic font styles. Additionally, overarching categories are underlined, and asterisks (*) denote elements of intent where interviews have uniquely provided intent content.

6.1.1 Frames

As the target situation, this level of control includes the adopted frame named Reconnaissance mission. Although target situations were not used in the reviewed models, implied and related examples are reconnaissance and guidance mission and search and rescue mission. Participants typically referred to the scenario context when ascribing the reconnaissance mission situation. For instance, P3 noted that the national state between peace and war would affect acceptable risks, emphasising anticipated entities with their associated attributes (e.g., position, type, risk level). Furthermore, it was recognised in the literature review that such missions could occur as part of a larger context, for instance, achieving joint mission and conduct of the joint operation or campaign. This notion was corroborated during interviews, as P2 noted that a reconnaissance mission could be part of a larger effort, influencing their team effort.

6.1.2 Effects

At the effects level of cognitive control, three elements of intent were identified: Achieve mission, Uphold safety, and Uphold security*. In this context, these should be understood as the mission’s Core goals. From the interviews, P1 and P3 noted that these goals should be balanced according to the Commander’s intent. For instance, P1 noted that a reconnaissance mission during peacetime does not typically involve intentional threats or vital information, thus prioritising Uphold safety over Uphold security* and Achieve mission.

Achieve mission includes both the purpose and objectives of the mission, thus describing the motivation behind the mission and an image of the intended state of affairs or outcomes. In the literature review, typically, only the purpose was described (e.g., protecting friendly airborne systems, defend against intruding air forces, support friendly forces, deter enemy attack and achieve attack mission). During interviews, both the purpose (gather intelligence) and objectives (find, identify, and report entities in the area of interest) were pre-defined as part of the reference scenario.

Uphold safety and Uphold security* refer to the management of unintentional and intentional risks, respectively, with intent content at the individual, team, and national levels. Uphold safety can be exemplified by maintain system integrity, flight safety, own forces safety, and civilian safety. In contrast, Uphold security can be exemplified by threat avoidance and mitigation, prevent fratricide, minimise friendly casualties, and minimise collateral damage. In interviews, participants often expressed Uphold safety in terms of instrumental goals. For instance, P1–3 noted that they must maintain a safe separation between aircraft. Likewise, P5 expressed that a safe landing or controlled crash is essential in the case of aircraft failure to prevent or mitigate threats to the civilian population and infrastructure. Uphold security* was expressed during interviews; for instance, participants expected the synthetic wingman to take higher risks to increase the fighter pilot’s survivability. For instance, P3 noted that he would keep his distance from potential threats in the area of interest by letting the unmanned aircraft take the majority of risks, while P4 noted the need to protect valuable information carried by aircraft and data links. These latter goals were uniquely identified from the subject matter expert interviews.

6.1.3 Values

At the values level of cognitive control, four elements of intent were identified and categorised as either measuring (or indicating) performance or constraint-related values.

Regarding the Performance values category, a distinction is made between Mission performance and Team performance*. Mission performance refers to qualities measuring the mission's current or expected state of affairs or outcomes in relation to the Commander's intent, exemplified by effectiveness and efficiency in the literature review. This was confirmed during interviews; for instance, P5 noted that he needed to track progress during the reconnaissance mission. Team performance* refers to qualities associated with taskwork and teamwork at the individual and team levels. At the individual level, situation awareness was prevalent in both the literature review and the interviews; however, intent awareness, trust, and workload were uniquely identified in interviews. For instance, participants emphasised that they must understand and predict the synthetic wingman's behaviours; otherwise, they would not trust it and would possibly control it themselves. At the same time, participants also noted that they would not want to control the synthetic wingman due to the increased workload. For instance, P3 noted that his primary role would be to uphold situation awareness of the area of interest while delegating tasks to the synthetic wingman to relieve workload. Similarly, P5 noted that his workload could become excessive and expressed a disinterest in constantly monitoring and controlling the synthetic wingman. Instead, he would prefer such tasks only during certain events, such as newly detected and identified sea craft or changes of intent content. Intent content at the team level was also uniquely identified in interviews, exemplified by common ground and common intent to uphold shared (or team) situation awareness (see also Sect. 6.2.2 Transfer of control).

Regarding the Constraint values category, a distinction is made between causal and intentional constraints. Causal constraints refer to imposed descriptive values (or ‘laws of nature’) and can be exemplified by adhere to physics-based constraints and physics of flight in the literature review. From interviews, distance is a prominent example of intent content to maintain safe and secure separation from unintentional and intentional threats and uphold data links. In contrast, Intentional constraints refer to imposed normative values (or ‘laws of men’). Examples from the literature review are regulations, adhere to rules of engagement and Commander’s intent. These were also confirmed during interviews; for instance, P2 emphasised the need to follow plans as defined by the Commander’s intent.

6.1.4 Generic

At the generic level of cognitive control, Plans, Procedures*, and Contingency structures* were identified as types of Courses of action.

Plans refer to the generalised course of action at the individual and team levels, exemplified by flight plan, search plan, single mission plan, and joint mission plan in the literature review. A similar notion was identified during interviews, as P1 and P2 noted that they needed to know their respective flight plans as well as the synthetic wingman's search plan. Furthermore, roles and responsibilities were uniquely identified in the interviews. For instance, P3 noted that team members often divided responsibilities to uphold situation awareness of either air or ground during reconnaissance missions, while P2 expressed that he must know who is responsible for maintaining separation between the aircraft. This element of intent also includes planning, exemplified by mission planning and search plan creation/update; however, interviews suggest that fighter pilots are reluctant to do their own planning and expect plans to be pre-defined. Procedures* describe specific actions to take in order to accomplish tasks. While the literature review did not provide examples, the interviews suggest that fighter pilots often rely on standardised procedures (see also Sect. 6.2.2 Transfer of control). Contingency structures* refer to alternative courses of action (e.g., contingency plans, what-if actions) in defined situations and are typically communicated before the mission. While the literature review did not reveal any contingency structures, interviews suggest their use is common in the domain and essential for managing off-nominal conditions (see also Sect. 6.2.3 Link loss).
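To make the distinction between the three types of Courses of action concrete, they could be represented as simple data structures. The sketch below is a hypothetical illustration only; all names, fields, and values are our assumptions rather than domain artefacts:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Procedure:
    """A standardised sequence of actions for accomplishing a task."""
    name: str
    steps: list[str]

@dataclass
class Plan:
    """A generalised course of action at the individual or team level."""
    name: str
    waypoints: list[str]
    roles: dict[str, str] = field(default_factory=dict)  # responsibility -> member

@dataclass
class ContingencyStructure:
    """An alternative course of action bound to a defined trigger situation."""
    trigger: str
    alternative: Plan | Procedure

# Hypothetical example: a pre-briefed contingency for a link-loss situation
search_plan = Plan("search_area_alpha", ["WP1", "WP2", "WP3"],
                   roles={"maintain_separation": "fighter pilot"})
link_loss = ContingencyStructure(
    trigger="data link lost and not re-established",
    alternative=Procedure("controlled_landing",
                          ["abort search", "fly to designated zone", "land or crash"]))
```

In this reading, a Contingency structure is simply a Plan or Procedure paired with the pre-briefed situation that triggers it, which reflects why participants described such structures as communicated before the mission.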

6.1.5 Implementation

Six elements of intent related to control activities were identified at this level of cognitive control, categorised as either Piloting activities or Mission activities. Piloting activities include Aviate, Navigate, Communicate, and Manage, whereas Mission activities include Retrieve/Deliver payload and Gather/Broadcast data/information. Because these elements of intent were identified in both the literature review and the interviews, the examples described below are drawn from the literature review alone (see Sect. 6.2 for interview examples).

Concerning Piloting activities, Aviate refers to controlling the aircraft’s flight (e.g., path, attitude, speed) and avoiding hazards (e.g., collisions), exemplified by aviate, pilotage, and directing flight. Navigate refers to the control of the aircraft in accordance with Plans (flight plans, search plans) and can be exemplified by navigation process, achieve navigation goals, recon planning, and plan(s) management. Communicate refers to a joint activity in which subjects coordinate information (e.g., belief content, intent content) to establish common ground and common intent to facilitate a shared awareness of the situation and intent. Examples from the literature review include information communication, communicate intent, and coordination with surface support. Manage refers to the control of aircraft systems (including errors, failures and breakdowns) and resources and can be exemplified by systems management, control of sensors systems, and malfunction handling.

Concerning Mission activities, Retrieve/deliver payload refers to the control of payload systems and payload, exemplified by payload (and payload platform) management and lethal weapons management. Gather/broadcast data/information refers to control tasks in which data or information is collected, analysed, and/or (re-)distributed. Collecting information can be exemplified by data capture, information acquisition, and intelligence gathering. Analysing data or information can be exemplified by data/information analysis and information integration. Lastly, (re)broadcasting data or information can be exemplified by broadcast data/information, disseminate/report, and relay information.

6.1.6 Physical

At the Physical level of cognitive control, eight elements of intent representing physical objects (and their associated attributes) were identified and categorised as related to the Manned-unmanned aircraft system or the Environment.

The Manned-unmanned aircraft system, in general terms, refers to the team and its physical resources. More specifically, it comprises, but is not limited to, the Manned aircraft and Unmanned aircraft with on-board pilotage systems (e.g., flight control, navigation systems, communication systems, sensor systems) and on-board mission-specific systems (mission-specific payloads and payload systems; payload characteristics); Control stations (ground control station); Up/Down data links (data link); Data/Information (map, no flight zones, system diagnostics, video, intelligence status) carried by aircraft and data links; and the Team members (e.g., fighter pilot, synthetic wingman, GCS operator).

The Environment refers to elements of intent outside the Manned-unmanned aircraft system. Although not a complete list, in general terms, it comprises Physical objects with causal and/or intentional characteristics (e.g., weather, the physical form of the environment and objects in the environment, obstacles, threat sources and their functional characteristics).

6.2 Designed and analysed models

Three models of situated intent were designed, representing a fighter pilot’s intent space of the target Reconnaissance mission (see Fig. 5), Transfer of control (see Fig. 6), and Link loss (see Fig. 7) situations. The first model should be understood as a conceptual model, used as a context for the design and analysis of the second and third models.

6.2.1 Target reconnaissance mission situation

Figure 5 represents a conceptual model designed by synthesising identified elements of intent from a literature review and seven interviews, thus corresponding with the encoding phase results. Besides these elements of intent, intent structures emerging from means-ends reasoning are depicted.

Fig. 5

Conceptual model representing a fighter pilot’s intent space in the context of a reconnaissance mission involving MUM-T

6.2.2 Transfer of control

From the intent analysis of the target transfer of control situation, a single intent structure emerged and became salient, indicating a common intent among participants (see Fig. 6).

Participants generally framed the transfer of control as a safety-critical and highly coordinative situation, making Uphold safety and Achieve mission salient core goals for the instrumental Transfer of control goal. To this end, participants expected team members to follow flight plans and a standardised transfer of control procedure. For instance, P2 noted he would use the flight plans to direct the aircraft and sensors towards the rendezvous point to locate the synthetic wingman. After locating an aircraft near the rendezvous point, participants expressed that they must ensure Team performance before initiating the transfer of control process, likely through radio communication with the GCS operator. For instance, P1 noted that he would need to ensure that the identity of the aircraft corresponds with the synthetic wingman and that its state (e.g., consumable and expendable resources, technical failures) is, and will likely remain, within performance boundaries before deciding if and how to proceed (e.g., abort the mission or adjust plans). As such, Communication is used to establish and maintain common ground and common intent by coordinating belief content and intent content, providing means to uphold shared awareness of the situation and intent. During the transfer of control process, participants also noted that they would need to monitor the data link qualities (e.g., strength) and handover status. Furthermore, it was emphasised that they would need to be able to cancel the transfer of control process as well as accept or reject the handover before receiving control of the synthetic wingman. For instance, P2 noted that other tasks might become prioritised to ensure flight safety. After deciding to accept control of the synthetic wingman, participants noted that they needed to confirm the success of the transfer of control process and the state of the synthetic wingman before continuing the mission.
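The handover sequence described by participants resembles a small state machine: locate via flight plans, verify identity and state, monitor the data link, then accept, reject, or cancel before confirming. The following sketch is purely illustrative; the state names, checks, and attributes are our assumptions, not a fielded protocol:

```python
from types import SimpleNamespace

# Illustrative sketch of the transfer-of-control sequence described by
# participants. All state names, checks, and attributes are assumptions.

def transfer_of_control(aircraft, link):
    """Return the resulting state of a handover attempt."""
    if not aircraft.near_rendezvous:
        return "searching"            # use flight plans to locate the wingman
    if not (aircraft.identity_confirmed and aircraft.within_performance_bounds):
        return "rejected"             # abort the mission or adjust plans
    if link.quality < link.minimum_quality:
        return "cancelled"            # e.g., other tasks become prioritised
    aircraft.under_pilot_control = True
    return "confirmed"                # verify success, then continue the mission

# Hypothetical example: a nominal handover
wingman = SimpleNamespace(near_rendezvous=True, identity_confirmed=True,
                          within_performance_bounds=True, under_pilot_control=False)
datalink = SimpleNamespace(quality=0.9, minimum_quality=0.5)
outcome = transfer_of_control(wingman, datalink)
```

Reading the procedure this way makes explicit that each check corresponds to a decision point at which the fighter pilot may still cancel or reject the handover, as the participants emphasised.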

Fig. 6

Model representing a fighter pilot’s intent space with mapped intent structure in the target transfer of control situation

6.2.3 Link loss

Although re-establishing the data links emerged as the default intent, three different intent structures became salient from the interview data for situations in which this cannot be achieved (see Fig. 7). These intent structures can be understood as representing the intent profiles of three different personas (PA–C), in which the intent profiles PA, PB, and PC are represented by solid green, dashed blue, and dotted red lines, respectively.

Generally, PA (P1–3, 5) characterised the link loss situation as being within the boundary of acceptable risk, thus not warranting any change of intent. For instance, P1 noted that link loss could be expected in certain conditions (e.g., weather, distance, line-of-sight). As such, PA commits to continuing the Reconnaissance mission as long as the synthetic wingman seems fine and follows its search plan, implying that upholding intent awareness of the synthetic wingman is essential in the ascribed link loss situation. To this end, P1–3 noted that they must track the synthetic wingman, likely by managing the radar to detect discrepancies between the synthetic wingman's expected and actual position and trajectory. Participants also noted that deviation from plans or irregular flight patterns would indicate that something is wrong and would be a reason for a change of intent.

In contrast to PA, PB (P6, 7) and PC (P4) expressed that they would abort the reconnaissance mission, albeit for different reasons. Generally, PB characterised the link loss situation in terms of safety risks, requiring the team to abort the mission to protect the population and infrastructure, thus prioritising flight safety over achieving the mission. To this end, deciding to enact a contingency plan is the usual choice. For instance, P6 noted that the synthetic wingman should abort the reconnaissance mission and perform a controlled landing or crash at designated airfields or crash zones to prevent damage to people and property. Contrary to PB, PC characterised the link loss situation in terms of information security risks and a need to protect valuable information carried by the synthetic wingman. To this end, the decision to destroy the synthetic wingman is a viable option, prioritising the value of the information over the synthetic wingman itself. Consequently, a feasible option is committing to a kill chain procedure to find, fix, track, target, and engage the synthetic wingman and assess the outcome.
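The divergence between the three personas can be summarised as a mapping from the framing of the link loss situation to a committed course of action. The sketch below paraphrases these interview findings; the keys, labels, and default value are illustrative assumptions:

```python
# Hypothetical sketch: the three personas' link-loss commitments expressed as
# a mapping from the framing of the situation to a course of action.
# Labels paraphrase the interview findings; keys and default are assumptions.

LINK_LOSS_RESPONSES = {
    "acceptable_risk": "continue mission; track wingman for plan deviations",  # PA
    "safety_risk": "abort; controlled landing or crash at a designated zone",  # PB
    "security_risk": "abort; engage kill chain to protect information",        # PC
}

def link_loss_response(framing: str) -> str:
    # Re-establishing the data link is the default intent for any other framing
    return LINK_LOSS_RESPONSES.get(framing, "attempt to re-establish data link")
```

This framing-to-commitment view also illustrates the analytical point made later: the same event can propagate into conflicting courses of action depending on which risks dominate a pilot's framing.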

Fig. 7

Model representing a fighter pilot’s intent space with three personas (PA–C) and their respective intent structures in a link loss situation

7 Discussion

Addressing the research questions in this study, this section begins by describing some experiences using WDA to model situated intent from a human-centric perspective. This is followed by highlighting identified requirements for enabling synthetic wingmen to explain their human partners’ intent.

7.1 Work domain analysis as an approach to model intent

To address the first research question, some experiences using WDA as an approach to model intent are highlighted. First and foremost, because the approach focuses on the constraints that shape intent, it provides formative models comprising choices that can be deliberated and committed to. This is advantageous since the work in this study showed that such formative models could be re-utilised in similar target situations, particularly since they can be extended and constrained by mapping contextual and situational intent. For instance, following established modelling principles (Burns et al. 2001, 2004; St-Maurice and Burns 2018), a synthesised model was designed by integrating content from the literature review and interview sources. By the same principles, being situated in the context of the reconnaissance mission meant the transfer of control and link loss models could be constrained to a subset by using the synthesised intent model as the basis. Consequently, the approach provided reusable and scalable models in this study, which are important qualities for modelling intentional systems, which often suffer from their scope (Hajdukiewicz et al. 1999). As such, this study demonstrates that it is possible to re-utilise these models to compare and transfer knowledge across different target situations of investigation when modelling fighter pilot intent in the context of MUM-T.

Additionally, experiences from analysing the designed models showed the approach was practical, particularly since it helped focus attention on what choices are considered and committed through means-ends reasoning to form intent structures. Albeit phrased differently, this corresponds well with the notion that probing means-ends relations can provide insights regarding the constraints that shape experts' decisions (Naikar 2013). This was particularly valuable when analysing the target link loss situation since it not only allowed identifying inconsistent and conflicting commitments between participants but also helped explain these commitments through the means-ends relations. Indeed, it can be argued that it is the connections between the elements of intent that, in a sense, explain intent. On this point, it was also helpful to visually represent intent structures in the model, as this shows where decisions diverge, thus indicating how inconsistent or conflicting intent can emerge at one level of cognitive control and propagate to other levels.

7.2 Making intent explainable

Addressing the second research question, this section highlights requirements that should be considered to make intent explainable for synthetic wingmen, focusing on information and associated abilities at the higher levels of cognitive control.

7.2.1 Context and situation

Results show that both contextual and situational information is used when forming intent. For instance, when expressing intent about the reconnaissance mission scenario, participants often referred to the scenario context. Similarly, references to the reconnaissance mission context were prevalent when expressing intent concerning the target transfer of control and link loss situations. These findings corroborate intent, plan, and activity recognition research (Han and Pereira 2013; Van-Horenbeke and Peer 2021) and highlight the need for contextual and situational information, such as the environment in which the fighter pilot and synthetic wingman are expected to operate and the situations they may encounter. Such information has also been stated as important when designing for bi-directional transparency; for instance, human and synthetic teammates ought to be able to share awareness of environmental constraints (Lyons 2013; Lyons and Havig 2014) as well as provide information about current and future situations (Chen et al. 2018).

7.2.2 Frames and framings

A related issue to contextual and situational information is the notion of frames and framings. Analysis suggests that previously communicated intent contributed to fighter pilots adopting frames in anticipation of situations, thus providing preparedness for events. For instance, the analysis of the anticipated transfer of control situation suggests that fighter pilots use flight plans in order to locate the synthetic wingman, implying they adopt and elaborate the Transfer of control frame in anticipation of the handover event. Similarly, the analysis indicates that the fighter pilot continuously elaborates the adopted Transfer of control frame in anticipation of potential problems as the situation unfolds, thus providing readiness for a change of intent (e.g., abort the reconnaissance mission, adjust plans). Likewise, analysis of the target link loss situation suggests a preparedness for a change of intent as PA elaborates the Link loss frame by tracking the synthetic wingman's flight pattern to detect deviations.

These are instances of anticipatory thinking, an important metacognitive function in individuals and teams, exemplified by focusing on critical information in preparation for events (Klein et al. 2010). In this regard, using frames as explanatory structures that separate a situation from its context (Klein et al. 2007) can be a valuable source for synthetic wingmen to reason about what their human partners are looking for in preparation for events. For instance, in the context of hybrid teaming, it has been noted that it is often advantageous when synthetic teammates push information in anticipation of situations rather than let the human partner pull it (Demir et al. 2017).

It is also important to consider that adopted frames and their structural content may differ. For instance, the analyses indicate that the core goals of the reconnaissance mission influence the adopted frame and framing of situations, as participants typically characterised the transfer of control situation in terms of safety risks. This notion supports the idea that goals and experience affect how humans make sense of events (Klein et al. 2007). It is also reasonable that participants' assumptions regarding the context (e.g., the national state between peace and war, core goal priorities) and their backgrounds can partly explain the different framings of the link loss situation. Thus, from a MUM-T perspective, given that pilots may frame situations differently, synthetic wingmen ought to be able to account for not only their human partners' situation but also their framing of the situation. This ability is crucial as the analysis indicates that small variations in the framing of the link loss situation can propagate to other levels of cognitive control, resulting in conflicting or inconsistent intent. Indeed, from a team cognition view, human and synthetic teammates must be able to integrate their respective perspectives for shared mental models and common ground (Cooke et al. 2013; Schelble et al. 2022).

7.2.3 Core and instrumental goals

The identified core goals (Achieve mission; Uphold safety; Uphold security) correspond well with Schulte's (2002) abstract goals (Mission accomplishment; Flight safety; Combat survivability) and Klein's (1994) model, which states that the purpose and objectives should be communicated to convey the motivation behind the mission and an image of the desired state. From the perspective of intent-aware synthetic wingmen, these core goals and their relative priorities are likely to be communicated beforehand. Thus, as persistent decisions during missions, they may be used as a stable foundation by synthetic wingmen to infer fighter pilot intent.

That said, the instrumental goals pose challenges, as the human and synthetic teammates may need to make independent decisions regarding priorities and trade-offs to adaptively and dynamically achieve or uphold the core goals. This was shown in the analysis of the link loss situation, as PA tracked the synthetic wingman's flight pattern to detect whether a change of intent had occurred; for instance, whether the situation required the synthetic wingman to abort the mission or change the plan. In this regard, the ability to understand the performance and constraint values is important to explain why intent is changed. Indeed, values related to performance and constraints were common in the literature review and interviews and help explain a change of intent. For instance, regarding mission performance, P3 prioritised keeping a distance from potential threats to uphold security while remaining within acceptable risk. During interviews, team performance values, such as situation awareness, trust, and workload, were emphasised, reflecting common issues in a hybrid teaming context (Chen et al. 2018; de Visser et al. 2020; Baltrusch et al. 2022). In this regard, the ability to measure or detect indications of performance and constraints, such as the fighter pilot's cognitive and emotional states (Lyons 2013), will be an important contributor for synthetic wingmen to explain fighter pilots' commitment to instrumental goals. For instance, as described by Vanderhaegen (2021), the synthetic wingman may need to understand whether the fighter pilot is sufficiently competent and available to achieve core or instrumental goals within the situation's constraints.

7.2.4 Plans and procedures

Although plans were common in the literature review, interviews also highlighted the importance of procedures and contingency structures. For instance, the analysis of the target transfer of control and link loss situations suggests that the fighter pilot uses previously communicated plans to locate the synthetic wingman and to detect problems and changes of intent by comparing its current and expected positions and trajectories. Similarly, analysis of the target transfer of control situation suggests a standardised procedure is likely to be followed, reflecting that fighter pilots often rely on communicated or internalised procedures to ensure common ground and predictability (Ohlander et al. 2019) as well as common intent (Pigeau and McCann 2006). Thus, by the same token, it is reasonable that synthetic wingmen should be able to use previously communicated plans, procedures, and contingency structures to detect and identify inconsistent or conflicting intent by measuring the congruence between current and expected states. Indeed, measuring the degree of congruence with previously communicated intent may reduce the computational burden since being aware of commitments eliminates the need to recognise them (Howard and Cambria 2013).
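One way to operationalise measuring the congruence between communicated plans and observed behaviour is a simple deviation check against a tolerance, as in the hypothetical sketch below (the tolerance value and 2-D positions are illustrative assumptions, not a proposed metric):

```python
import math

def plan_congruence(expected_track, actual_track):
    """Mean 2-D distance between expected and observed positions
    (lower means more congruent with the communicated plan)."""
    dists = [math.dist(e, a) for e, a in zip(expected_track, actual_track)]
    return sum(dists) / len(dists)

def intent_conflict_suspected(expected_track, actual_track, tolerance=1.0):
    """Flag a possible change of intent when mean deviation exceeds the tolerance."""
    return plan_congruence(expected_track, actual_track) > tolerance

# Hypothetical tracks: the wingman drifts away from its communicated search plan
expected = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
actual = [(0.0, 0.0), (1.0, 0.1), (2.0, 3.0)]
```

Such a check captures the idea that an already-communicated plan serves as the expected state: only deviations beyond a briefed tolerance need to trigger intent recognition, which is the computational saving noted above.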

7.3 Limitations and future work

In this study, knowledge acquisition was based on a literature review and seven interviews to design three conceptual models of situated intent in the context of a MUM-T mission scenario. Although literature reviews can support the design of conceptual models (Naikar 2013; Snyder 2019), the small sample of seven subject matter experts means future work is necessary to corroborate and refine the models in this study. Related to this issue is the notion that interviews can only provide an espoused theory of intent; thus, complementary methods are necessary to validate these theories in action (Argyris and Schön 1974). Moreover, in this study, cases of transfer of control and link loss situations in the context of a reconnaissance mission scenario were of particular interest. Albeit a starting point, future studies should consider variants of these cases and scenarios, for instance, by including situations in which there is a conflict of intent between human and synthetic teammates to investigate how they coordinate intent at the team level.

A limitation concerning knowledge representation, or describing and analysing intent, using WDA as an approach is that it gives a static impression of intent. Although intent, as a commitment to choices, may comprise stable decision structures, these structures can also change with new information (Bratman 1987). This inherent limitation of WDA motivates complementary approaches to represent knowledge in which intent is implicated—for instance, using other phases in the Cognitive Work Analysis or the Joint Control Framework that can represent intent transformations. In addition, the methods supporting the design of various strategies should be explored, for instance by reducing the autonomy workspace (Nylin et al. 2022) or by identifying various conflicts (Vanderhaegen 2021) in hybrid teaming contexts.

8 Conclusions

This study explored WDA as an approach to model intent from a human-centric perspective to identify requirements for designing future intent-aware synthetic wingmen. Experiences of using this approach in this study show that it is both practical and applicable, particularly since the designed models can be re-utilised to model intent in similar situations. Thus, the designed models in this study can provide a starting point for researchers and practitioners interested in modelling intent for the same or similar purposes. Experiences also show that the approach was useful since it guided attention to the choices that can be considered and committed through means-ends reasoning, which is at the core of making intent explainable at various levels of cognitive control.

By describing and analysing intent at six levels of cognitive control, requirements concerning information and associated abilities were identified. Among the main findings is the need for synthetic wingmen to account for the various frames that fighter pilots may adopt, particularly since results show that small variations in the framing of a situation can result in inconsistent and conflicting intent. In the end, from the perspective of modelling intent, this study shows that abilities are required at all levels of cognitive control, thus necessitating a holistic approach for intent to be explainable by synthetic teammates.