Where have all the equations gone? A unified view on semi-quantitative problem structuring and modelling

Abstract For several decades structural modelling has assisted decision makers with the cognitive burden of exploring and interpreting complex situations. Three well-known techniques – labelled collectively here as semi-quantitative problem structuring and modelling (SPSM) – include ISM (Interpretive Structural Modelling); MICMAC (Matrice d'Impacts Croisés-Multiplication Appliquée à un Classement); and DEMATEL (DEcision MAking Trial and Evaluation Laboratory). SPSM approaches pioneered the joint application of graph-theoretical principles and human-computer interaction. Yet today a template-style research approach prevails, focusing on the application context rather than seeking to advance or critically assess the individual techniques in their own right. This paper develops a unifying methodological view of SPSM, currently missing in the literature, by comparing and contrasting – for each technique – analytical and procedural aspects typically taken for granted. The paper's findings highlight: (1) previously unnoticed overlaps between techniques that up to now have been deemed mutually exclusive, and incongruences between those that are often applied jointly; (2) potential issues that arise when key analytical principles of SPSM are either applied uncritically or dispensed with altogether; (3) the need to leverage human-computer interaction, a prominent aspect in early SPSM research that is now surprisingly neglected. These findings are illustrated by a review of SPSM applications in the context of supply chain risk management.


Introduction
Complexity is back in the headlines due to the societal problems and global disruption associated with the COVID-19 pandemic. But the need to make sense of complex challenges, and deal with them effectively, has been a recurring problem for managers for decades (e.g. Sargut & McGrath, 2011), and decision makers have long been advised to use a system lens to deal with the complexity of interacting societal problems (e.g. Warfield, 1976; WEF, 2013). It is an open debate, however, what level of mathematical formalism is appropriate for imparting structure on systems that are complex and poorly understood. To apply the methods of Management Science and Operations Research (MS/OR), problems need to be clearly identified so that a shared understanding is possible. In the 1970s and 1980s the MS/OR community was deeply divided on whether its methods could really help address ill-structured, interdependent problems on which consensus was often lacking (Ackoff, 1974; Jackson, 2006; Simon et al., 1987).
A 'soft' OR view on this debate would be to dismiss problem solving and mathematical modelling in favour of problem structuring methods (PSM), underpinned by social theories other than positivism (Jackson, 2006). The argument is that formalised mathematical tools inevitably enforce a unitary view – a single right answer – on what makes up the system of interest, how its constituent elements are structured within the whole, and the nature of their interaction (Flood, 1988). Yet at the same time – despite some claims that it is in decline as a methodological framework – PSM is also commensurate with 'hard' OR, and certainly not exclusive of that approach a priori (Harwood, 2019). What is clear is that the methodological rigour and credibility of soft OR is still under debate, as is the scope to include software-based analytical routines in a soft OR approach (Ackermann, 2019; Ackermann et al., 2020).
Structural modelling adopts a hybrid stance between soft and hard OR, using a family of techniques that leverage both graph-theoretical principles and human-computer interaction. These tools help experienced practitioners with the cognitive burden of structuring and interpreting contextual situations in terms of a system (Lendaris, 1980). Under this approach qualitative data elicited from experts is often processed analytically, outcomes are visualised and then interactively played back to them for feedback.
The focus of this paper is a specific subset of structural modelling techniques, labelled collectively here as 'semi-quantitative problem structuring and modelling' (SPSM). SPSM includes the following techniques, which are specifically assessed in this paper: (1) Interpretive Structural Modelling – ISM; (2) Matrix-based cross-impact categorisation, or Matrice d'Impacts Croisés-Multiplication Appliquée à un Classement – MICMAC; and (3) Structural analysis of the world 'problematique', developed within the DEcision MAking Trial and Evaluation Laboratory project – commonly referred to as DEMATEL. Table 1 identifies foundational work for each technique.
These three techniques originated in the 1970s – independently, but around the same time – and now represent a staple element of numerous business and management applications. It seemed appropriate to focus on these well-established techniques with a view to calling into question the commonly held assumption that they differ fundamentally – an assumption that has been used to justify separate research strands for each technique.
The early foundational works on SPSM share a 'soft OR' view: that a purely objectivist notion of problem solving has limitations, and could benefit from the application of social theories such as structuralism and interpretivism (Jackson, 2006). Unlike soft OR, however, the early SPSM work aimed to find synergies between natural language and the language of mathematics and graphs, consistent with a broader notion of systems science (Warfield, 2003). Indeed, ordinary prose is regarded in this early work as a 'Procrustean bed' – a scheme or pattern into which something is arbitrarily forced to fit – and hence unsuitable to replace rational analysis in portraying problem situations (Warfield & Staley, 1996).
In recent years more publications have focused on specific managerial application contexts, rather than advancing or critically assessing individual SPSM techniques upfront. Providing a comprehensive literature review across all SPSM techniques is beyond the scope of this paper, but it can easily be ascertained from a quick assessment of the academic literature that a template-style approach to SPSM research is prevalent. The literature on SPSM reveals a tendency to trivialise, or even dispense with, the computational and procedural aspects of SPSM, which thus remain largely unappreciated and underplayed. It is also apparent that over the past two decades SPSM applications have mutated into 'shortcut' surrogates for survey research, thereby losing much of their original intent – to support challenging managerial decisions in the face of complexity. The literature on SPSM often confuses it with, or regards it as ancillary to, multi-criteria decision analysis – MCDA (e.g. Gölcük & Baykasoğlu, 2016; Mandic et al., 2015). These trends in the academic literature seem contrary to the methodological principles of SPSM, but the rigour and credibility of published research insights are rarely called into question.
Against this background, this paper's contribution is methodological in nature, and has two aims.
(1) To develop a unified analytical view on SPSM by comparing and contrasting procedural and algebraic features, across various SPSM techniques, that are currently underplayed. (2) To enable a clearer positioning of individual SPSM techniques and their applicability in supporting challenging managerial decisions, as intended by the foundational SPSM literature. These aims are achieved by addressing the following research questions:

RQ1: What methodological building blocks justify separate SPSM research strands?

RQ2: How transparent and consistent is the implementation of these building blocks?
These questions are addressed with reference to supply chain risk management (SCRM), which turns out to provide an ideal context for the application of SPSM. Supply chain risk management has gained renewed attention from the general public in the wake of the COVID-19 pandemic. Furthermore, its focus has evolved from simply listing adverse events that organisations need to worry about (e.g. Olson & Wu, 2010) to addressing many of the complexities that arise from risk interdependency (WEF, 2018). The remainder of this article is set out as follows. Section 2 compares and contrasts selected individual techniques analytically, and proposes a unifying methodological perspective on SPSM. Section 3 reviews selected applications in SCRM, illustrating key insights from the proposed unifying view with evidence from the extant literature. Findings are then discussed in Section 4, which elaborates on some theoretical as well as practical implications of the analysis. The closing section summarises the contribution and limitations of this research.

Comparative assessment of SPSM techniques
In this section we elevate the methodological building blocks of SPSM, with a view to identifying shared computational principles and procedures. These building blocks include (a) contextual relationships; (b) characteristic equations; (c) visual analytics; and (d) expert engagement.
It is a common requirement across the selected techniques to elicit expert judgment about (1) the constituent elements of a problem situation (henceforth just 'elements'), and (2) the contextual relationships – perceived or factual – between these elements. These contextual relationships are specified by expert respondents in the form of a 'structural analysis matrix' (Godet, 1986) or, equivalently, a 'relational map' (Warfield, 1982). Regardless of the specific technique used, the structural analysis matrix and relational map thus obtained are further processed as a single mathematical object: a directed graph (digraph). Inevitably, the following comparative analysis refers to well-established principles of graph theory and matrix algebra.

Semi-quantitative contextual relationships
In SPSM a complex problem situation is typically broken down into relevant constituent elements. Popular categories include barriers, enablers, or success factors in the adoption of technologies (e.g. Chaudhary & Suri, 2021; Rajesh, 2017) and managerial practices (e.g. Dasaklis & Pappis, 2018; Sen et al., 2018). Problem elements may also resemble generic 'variables', e.g. epidemiological features at play in a pandemic (e.g. Lakshmi Priyadarsini & Suresh, 2020); supplier features (e.g. Mohammed, 2020); or individual risks affecting a supply chain. The choice of problem elements (barriers, enablers etc.) does not affect how a given SPSM technique works. Yet choosing a relationship statement that is contextually significant for the inquiry can have major analytical repercussions (Malone, 1975). Commonly employed contextual relationships include (1) influence (e.g. "A helps to achieve/leads to B"), and (2) comparison (e.g. "A is more relevant than B"). The first kind of relationship generates intent structures, whereas the second generates priority structures (Warfield, 1982). For example, given a comparative relationship about age, it is unnecessary to evaluate whether "A is as old as B" if this can be inferred from "A is twice the age of C" and "C is half the age of B" – an example of consistency. By asymmetry, one also infers automatically that e.g. "it is not the case that B is older than A". Comparative relationships are sporadically assessed in SPSM applications (Janes, 1988; Malone, 1975) but are prevalent in the context of MCDA, where they are leveraged to attain greater parsimony and reduce the cognitive burden for the decision maker. Examples include improvements in MCDA techniques such as the Analytical Hierarchy Process (AHP) – e.g. Abastante et al. (2018). In the case of SPSM, where relationships of influence prevail, there are fewer opportunities for automated inference as one cannot assume a priori properties such as consistency and symmetry.
By specifying a set of contextual relationships, the problem elements identified within the relevant situational context are woven together into a digraph, whose adjacency matrix enables further computations. The adjacency matrix of a digraph with n vertices is a matrix of size n × n, denoted here as G = [g_ij], with generic entry g_ij = 1 if there is an edge from node i to node j, and g_ij = 0 otherwise (Deo, 1974: Ch. 9). In the context of SPSM, g_ij = 1 will typically mean that, in the respondent's opinion, problem element i exerts a direct influence on problem element j. Often, a subjective evaluation of the strength of the relationships identified is also required. This process generates a scoring matrix X – also of size n × n – whose entry x_ij is either zero or some value on a given scale. When scores are expressed on a semi-numerical scale, they can be ordered, but no specific quantity is associated with the difference between consecutive values (Multon & Coleman, 2010).
From now on, the term 'structural analysis matrix' is used interchangeably for the scoring matrix X and the adjacency matrix G, as these are related. Knowing X, the corresponding entries in the adjacency matrix can be obtained through the binary transformation #, which maps any positive score to one:

g_ij = #(x_ij) = 1 if x_ij > 0, and 0 otherwise. (1)

Unlike survey research, in SPSM there is no standard approach to filling a scoring matrix X. Even within a given technique the adopted scales vary – examples include DEMATEL (e.g. Fontela & Gabus, 1974b; Hsieh et al., 2016); MICMAC (e.g. Godet, 1986, 2007); and ISM (e.g. Gothwal & Raj, 2017; Warfield, 1982). One could argue that the algebraic analysis of subjective semi-numerical values generates numerical outcomes ex nihilo – out of thin air. Yet SPSM emphasises the topological information conveyed, rather than the numerical values per se. Some challenges of combining linguistic and numerical elements are addressed, through fuzzy set theoretic methods, in each SPSM technique (e.g. Ragade, 1976; Villacorta et al., 2014; Wu et al., 2017).
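The binary transformation in Equation 1 can be sketched in a few lines of numpy; the 4 × 4 scoring matrix below is hypothetical and purely illustrative, not drawn from any study:

```python
import numpy as np

# Hypothetical 4-element scoring matrix X: entries are semi-numerical
# scores (here on a 0-3 scale) elicited from a single expert.
X = np.array([
    [0, 3, 0, 1],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
])

# Equation 1: the binary transformation '#' maps any positive score to 1,
# yielding the adjacency matrix G of the underlying digraph.
G = (X > 0).astype(int)

print(G)
```

Whatever the scale adopted, only the zero/non-zero pattern of X survives the transformation, which is why SPSM emphasises topological rather than numerical information.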

Comparative algebraic insights
It is a normative assumption that ISM, MICMAC and DEMATEL differ fundamentally in their computations, thus justifying separate strands of research for each (e.g. Gardas et al., 2019). In this section we argue against this commonly held view using analytical insights. To ease the comparison, Figure 1 and Table 2 summarise the key equations for each technique, with Supplementary Materials S1 providing an illustrative example.
The equations highlighted in Table 2 and Figure 1 have a common aim: to generate insights beyond the contextual relationships elicited from experts, which would be difficult to grasp without analytical support (Bolaños et al., 2005). The below comparison further investigates how specific techniques attain this shared aim.

Consecutive matrix powers: MICMAC
A key algebraic device for revealing higher-order interactions in the context of SPSM is to raise a structural analysis matrix to consecutive powers. Techniques such as MICMAC exploit this fact to rank individual problem elements based on the sum totals obtained along the corresponding rows and columns of a powered matrix. The underpinning assumption is that these powers converge to some stable value that can be used to obtain such a ranking (Duperrin & Godet, 1973; Godet, 1977, 1986, 2007).
The key intuition beneath this approach is a well-known result in graph theory – namely, the matrix obtained by raising an adjacency matrix G to some integer power p,

T = G^p (p = 2, 3, …), (2)

has a generic entry t_ij that corresponds to the number of different paths of length p originating in node i and terminating in node j of the corresponding digraph (Deo, 1974: p. 161). When applied in the context of MICMAC, Equation 2 measures the importance of a given problem element by the existence, number and length of the paths that link such an element with the others. Metrics of influence for each problem element are given by the row-sum vector r = T1, and metrics of dependence by the column-sum vector d = 1′T (where 1 denotes a unity vector of appropriate dimensions, and 1′ its transpose). If combined, these values provide coordinates for visualising the problem elements as a scatterplot on an 'influence/dependence' Cartesian plane. Yet the MICMAC approach just described has some shortcomings, which are rarely noticed. First, it is assumed without proof that there is a value p*, producing a matrix T* = G^(p*), such that the rankings of the entries in r* = T*1 and d* = 1′T* remain stable across consecutive iterations – e.g. Godet (1977, 1986). Second, it is not always clear if the computations apply to a binary or to a semi-numerical matrix. Recent work, even if methodology-oriented, rarely acknowledges these limitations (e.g. Hachicha & Elmsalmi, 2014; Manzano-Solís et al., 2019; Villacorta et al., 2014; Zhao et al., 2020). Exceptions include Georgantzas and Hessel (1995), who point out that, depending on the presence of cyclical paths in the underlying digraph, the matrix powers in Equation 2 may vanish rather than settle. Saaty (2010: Ch. 5) addresses a similar issue, although in the adjacent context of MCDA. Yet these insights have rarely led to the introduction of additional checks and balances in MICMAC research.
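The MICMAC iteration, and the caveat about vanishing powers, can be sketched in numpy; the 4-element digraph below is hypothetical and, being acyclic, exhibits exactly the vanishing behaviour noted above:

```python
import numpy as np

# Hypothetical adjacency matrix of an acyclic 4-element digraph.
G = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# Equation 2: consecutive powers of G; entry t_ij counts paths of length p.
T = G.copy()
for p in range(2, 5):
    T = T @ G
    r = T.sum(axis=1)   # influence metrics (row-sums)
    d = T.sum(axis=0)   # dependence metrics (column-sums)
    print(p, r.tolist(), d.tolist())

# For an acyclic digraph the powers eventually vanish (here G**4 == 0),
# so no stable influence/dependence ranking can emerge.
```

Tracking r and d across iterations, rather than inspecting a single power, is one of the "checks and balances" the text suggests is missing from applied MICMAC work.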

Series of matrix powers: DEMATEL
The DEMATEL technique shares with MICMAC the concepts of total influence and dependence as the chief metrics to achieve a categorisation of interrelated problem elements (Fontela & Gabus, 1974a). Yet the computational strategy for obtaining these metrics is a power series (Fontela & Gabus, 1974b: Ch. 1):

T* = A + A^2 + A^3 + … = A(I − A)^(−1), (3)

where A = kX is the normalised matrix of semi-numerical scores X; k = 1/max(X1) is the reciprocal of its largest row-sum; 1 and I are, respectively, a unity vector and an identity matrix of appropriate size; and the exponent −1 denotes matrix inversion.
Typically, DEMATEL applications refer to Equation 3 without alterations, at times misreporting it (e.g. Ethirajan et al., 2021; Yazdani et al., 2019). Seldom is it emphasised that the normalisation that generates matrix A is designed to guarantee the existence of T*. This becomes clearer as one notices that Equation 3 is equivalent to multiplying A by both sides of the following expression (Waugh, 1950):

I + A + A^2 + A^3 + … = (I − A)^(−1). (4)

Equation 4 is well known in economics, a field familiar to the founders of DEMATEL (Pulido et al., 2008). In that context, A represents an interrelated system of industries, whose viability depends on the conditions under which the power series converges to the inverse matrix (I − A)^(−1). One such condition is that A^p must decrease and eventually vanish – i.e. there is some value p* such that A^p ≈ 0 for all p ≥ p*. In the context of DEMATEL, this intuition has been rephrased in non-mathematical terms as the 'decreasing importance' of a problem's indirect influence (Fontela & Gabus, 1974b). Waugh (1950) demonstrates that this condition is met if the elements of A are such that their column-sum is less than one for all columns j – in which case the matrix norm is N(A) = max_j Σ_i a_ij < 1, and no element of a matrix can be larger than its norm. Suh and Heijungs (2007) consider the case where A does not meet the requirement N(A) < 1 due to e.g. mixed units such as those used to express physical flows between supply chain operations. In this case the power series in Equation 4 converges if the dominant eigenvalue λ_max of A is less than one in modulus, a condition met by doubly-normalising A using its on-diagonal elements, if any, and a rescaling factor 1/|λ_max|. In special cases, knowledge about the eigenvalues of a non-negative structural analysis matrix A of size n × n is sufficient to conclude whether higher powers of such a matrix approach a limiting state or vanish.
Strang (1986) illustrates this result assuming that A has n linearly independent eigenvectors and n distinct eigenvalues, in which case, for p → ∞, the power A^p approaches 0 if and only if |λ_i| < 1 for all i = 1, …, n (a sharper characterisation than the matrix-norm requirement described previously).
In the DEMATEL context, the literature does not build on the above insights to support its choice of normalisation factors (e.g. Gölcük & Baykasoğlu, 2016).
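Equations 3 and 4 can be checked numerically. The sketch below (with a hypothetical 3 × 3 scoring matrix, using numpy) computes the total interaction matrix from the closed form and verifies that a truncated power series approaches it:

```python
import numpy as np

# Hypothetical 3-element matrix of semi-numerical scores (0-4 scale).
X = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)

# Normalisation for Equation 3: k is the reciprocal of the largest row-sum.
k = 1.0 / X.sum(axis=1).max()
A = k * X

# Closed form of Equation 3: T* = A(I - A)^(-1).
I = np.eye(3)
T = A @ np.linalg.inv(I - A)

# Equation 4 in truncated form: the partial sums A + A^2 + ... approach T*
# because the dominant eigenvalue of A is less than one in modulus.
S = np.zeros_like(A)
P = A.copy()
for _ in range(200):
    S += P
    P = P @ A

print(np.abs(T - S).max())   # residual of the truncated series
```

Printing the spectral radius, e.g. `np.abs(np.linalg.eigvals(A)).max()`, is a cheap diagnostic of the convergence condition discussed above, and is rarely reported in applied DEMATEL work.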

Linking consecutive matrix powers to series: ISM
ISM differs from the previous techniques as for the most part it consists of a graph partitioning algorithm whose aim is to lay bare a 'backbone' of the original digraph that contains fewer edges and is organised hierarchically, hence is easier to interpret for the experts (Warfield, 1974, 1976). A schematic summary of the algorithm is provided in Supplementary Materials S2.

[Table 2 near here. The table summarises, for each technique, the notation and key equations: iRj denotes that i is in a contextual relationship with j, where both i and j are vertices in a digraph; x_ij is a score on a semi-numerical scale, typically study-specific; the symbol # denotes the binary transformation (i.e. gives 1 if the transformed value is greater than zero); row-sums measure total influence ('influential' elements) and column-sums total dependence ('dependent' elements); r − d = y gives the net position of a problem element (positive: mainly influencing; negative: mainly dependent); M_c denotes the reachability matrix after fusing nodes in strong components; 1 is a column unity vector of appropriate size and 1′ its transpose.]

The algorithmic aspects
of ISM pose distinct methodological challenges, which are discussed in a separate section. For continuity with the previous sections, here we highlight how the starting point for ISM is analogous to the end-results of techniques such as DEMATEL and MICMAC. Specifically, the concept of a 'reachability matrix' in ISM is the counterpart of the matrix of total interactions in Equations 2 and 3. The reachability matrix, too, is the result of consecutive matrix powers that are assumed to settle to a limiting value. Yet in ISM the structural analysis matrix G is typically binary and, before being powered, a suitably sized identity matrix I is added to it, yielding:

B = #(I + G), (5)

where the addition is Boolean since # denotes the operation described in Equation 1. The matrix power B^p = #((I + G)^p) is also obtained by Boolean operations. It is assumed that some integer value p* can be found such that (Malone, 1975):

B^(p* − 1) ≤ B^(p*) = B^(p* + 1) = T*, (6)

where matrix inequalities apply entry-by-entry. In practice, p* is replaced by its upper bound

p* ≤ n − 1, (7)

which corresponds to the longest distinct path between any pair of nodes in a digraph with n nodes (Warfield, 1973a). Most applications of ISM refer to Equations 5 and 6, usually without mentioning Equation 7. Yet the literature is favourably inclined towards a streamlined approach to determining the reachability matrix, in which the original equations are replaced by manual 'transitivity checks' performed by the researcher without the aid of a computer (e.g. Sushil, 2017, 2018). In this context, researchers rarely develop equations that are comparable with other SPSM techniques. To bridge this gap we expand the generic matrix power term in Equation 6 with the aid of Theorem 5.7 in Harary et al. (1965):

B^p = #((I + G)^p) = #(I + G + G^2 + … + G^p). (8)

Recalling Equation 7, the reachability matrix – initially defined by consecutive matrix powers – can be expressed as a finite sum of matrix powers:

T* = #(I + G + G^2 + … + G^(n−1)). (9)

Whilst the consecutive powers in Equation 6 are reminiscent of MICMAC, Equation 9 is closer to the fundamental DEMATEL equations – shedding some light on how the two may be related. As in DEMATEL, it seems sensible to require that G^p vanishes after some value p* ≤ n − 1, so that the right-hand side of Equation 8 converges to the reachability matrix. This condition is met when the underlying digraph does not contain any directed edge sequence of length p* or larger (Deo, 1974, p. 232). This approach replaces taking the limit of a finite sum of matrix powers – as Equation 4 does – since G is a binary matrix.
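A minimal numpy sketch of Equations 5, 6 and 9 (on a hypothetical binary matrix) shows that Boolean powers of B and the finite Boolean sum of powers of G yield the same reachability matrix:

```python
import numpy as np

# Hypothetical binary structural analysis matrix G for 4 elements.
G = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
n = G.shape[0]
I = np.eye(n, dtype=int)

# Equation 5: B = #(I + G), with '#' re-binarising after each operation.
B = ((I + G) > 0).astype(int)

# Equations 6-7: Boolean powers of B settle within n - 1 multiplications.
R = B
for _ in range(n - 1):
    R = ((R @ B) > 0).astype(int)   # reachability matrix T*

# Equation 9: the same matrix as a finite Boolean sum of powers of G.
S, P = I.copy(), I.copy()
for _ in range(n - 1):
    P = P @ G
    S = S + P
R_sum = (S > 0).astype(int)

print((R == R_sum).all())
```

The re-binarisation after each product stands in for Boolean arithmetic, which is why ISM needs no convergence argument: the finite sum replaces the limit taken in Equation 4.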
It is rarely noticed that the same condition described above, if met, prevents techniques such as MICMAC from yielding meaningful results as the matrix power in Equation 2 vanishes.

Reconciliation of SPSM matrix equations
With reference to the shared use of matrix powers as a computational device, we suggest that MICMAC, ISM and DEMATEL build progressively on each other. Matrix powers remain unrelated in MICMAC, but are combined as a finite sum in ISM and as a series (infinite sum) in DEMATEL. This progression is emphasised in the middle portion of Figure 1.
We also notice a progressive refinement of assumptions regarding the behaviour of higher matrix powers. MICMAC is vague on whether these powers settle or vanish. ISM overcomes this limitation in the case of a binary matrix, and introduces an upper bound on the exponent. ISM and DEMATEL share the requirement that higher powers of a structural analysis matrix do vanish – a requirement that is detrimental for MICMAC. In all cases this behaviour depends on whether the underpinning digraph contains paths beyond a certain length. For DEMATEL, the additional requirement of normalisation provides useful diagnostics for the behaviour, in the limit, of higher matrix powers.
Conceptually, the finite sum of matrix powers in ISM and the infinite sum in DEMATEL (Equations 4 and 9) can be reconciled through the inequality:

#(I + G + G^2 + … + G^(n−1)) ≤ #(I + G + G^2 + …), (10)

whose right-hand term extends the right-hand term of Equation 9, even though these may not be equivalent. Recalling that #(G + G^2 + G^3 + …) is the adjacency matrix of a 'transitive closure' of a digraph with reachability matrix T* (Harary et al., 1965), and that A + A^2 + A^3 + … = A(I − A)^(−1), one obtains:

T* = #(I + G(I − G)^(−1)). (11)

Equations 10 and 11 help relate the ISM concept of a reachability matrix – represented by a finite sum of matrix powers – with the DEMATEL concept of a total interaction matrix – represented by a matrix inverse to which an infinite sum of matrix powers converges.
At the conceptual level, the suggested relationship can be strengthened if the right-hand side of Equation 10 is turned into an equivalence by invoking the Cayley-Hamilton theorem – a well-known result in linear algebra (Pal & Bhunia, 2015: Ch. 3). With reference to the inequality in Equation 10, the theorem warrants that the inverse on the right-hand side can be cast into a finite sum containing up to the (n − 1)-th power of C = I − A:

C^(−1) = Σ_{i=0,…,n−1} b_i C^i. (12)

The unknowns in this problem are the scalars b_i. If C has n distinct eigenvalues λ_1, …, λ_n, one obtains these unknown scalars by solving the following system (Lathi, 2002, p. 62):

λ_k^(−1) = Σ_{i=0,…,n−1} b_i λ_k^i, k = 1, …, n. (13)

After obtaining b_i through Equation 13 one substitutes back (I − A) for C in Equation 12, and works out the scalars c_i that multiply the powers of A in the expression Σ_{i=0,…,n−1} c_i A^i = (I − A)^(−1) – thus establishing an equivalence between a finite sum of powers of A and the inverse (I − A)^(−1). In the context of SPSM, this distinction is often underplayed, generating some confusing notation (e.g. Ethirajan et al., 2021; Yazdani et al., 2019).
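The Cayley-Hamilton construction in Equations 12 and 13 can be illustrated numerically. In this sketch (hypothetical normalised matrix A with distinct eigenvalues, using numpy) the inverse of C = I − A is recovered as a finite sum of powers of C:

```python
import numpy as np

# Hypothetical normalised 3x3 DEMATEL-style matrix A.
A = np.array([[0.0, 0.3, 0.2],
              [0.1, 0.0, 0.4],
              [0.2, 0.1, 0.0]])
n = A.shape[0]
C = np.eye(n) - A

# Equation 13: solve sum_i b_i * lam_k**i = 1/lam_k at each eigenvalue of C.
lam = np.linalg.eigvals(C)
V = np.vander(lam, n, increasing=True)      # rows [1, lam_k, lam_k**2, ...]
b = np.linalg.solve(V, 1.0 / lam)

# Equation 12: C^(-1) as a finite sum of powers of C (Cayley-Hamilton).
C_inv = sum(b[i] * np.linalg.matrix_power(C, i) for i in range(n))

print(np.abs(C_inv - np.linalg.inv(C)).max())   # residual of the finite sum
```

Note that the eigenvalues may be complex, so the coefficients b_i are complex in general; the imaginary parts of the reconstructed inverse are negligible for a real matrix.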

Matrix powers in fuzzy SPSM approaches
So far we have assumed that key SPSM equations were as defined in the foundational literature. Yet a growing number of applications in the literature use fuzzy structural analysis matrices, meaning that experts score the strength of a relationship using degrees of membership on a scale defined by the extremes 0 and 1, instead of discrete values. Another approach is to use interval-type ('grey') matrices (e.g. Ethirajan et al., 2021). The algebra of fuzzy SPSM approaches differs from the general case examined so far, since the matrix (dot) product is replaced by max-min, or other compositions. Ragade (1976) illustrates these compositions in the case of fuzzy ISM. It is still a requirement that the powers of the underpinning fuzzy matrix converge to a limiting value – a condition that is often assumed to occur (e.g. Zhao et al., 2020). Thomason (1977) demonstrates that such powers may oscillate rather than converge, and that convergence may be subject to specific conditions on the entries of the fuzzy matrix F = [f_ij] – i.e. that for any problem elements i, j and any intermediate element k, f_ij ≥ min(f_ik, f_kj).
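A short sketch of the max-min composition (with a hypothetical fuzzy matrix, using numpy) makes it easy to test whether successive fuzzy powers settle, as Thomason's analysis suggests one should:

```python
import numpy as np

def maxmin(F1, F2):
    # Max-min composition: (F1 o F2)[i, j] = max_k min(F1[i, k], F2[k, j]).
    return np.minimum(F1[:, :, None], F2[None, :, :]).max(axis=1)

# Hypothetical 3x3 fuzzy structural analysis matrix (membership degrees).
F = np.array([[0.0, 0.8, 0.3],
              [0.2, 0.0, 0.7],
              [0.5, 0.1, 0.0]])

# Raise F to successive max-min powers, checking whether they settle.
P = F
for p in range(2, 10):
    P_next = maxmin(P, F)
    if np.array_equal(P_next, P):
        print(f"powers settle at exponent {p - 1}")
        break
    P = P_next
else:
    print("no convergence within 9 powers (possible oscillation)")
```

Unlike the dot product, the max-min composition never creates new membership values, so the powers of F either settle or cycle through a finite set of matrices; the loop above detects the former and flags the latter.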

Comparison of visual analytics
The algebraic insights discussed above are used to develop visual analytics that are fed back to practitioners for interpretive analysis, collective learning, and group decisions. This idea is schematically illustrated in Figure 1 as one progresses towards the right-hand side, and through the example in Supplementary Materials S1. Below we identify two approaches, one of which requires further computations.
2.3.1. The influence/dependence plane approach

Techniques such as MICMAC and DEMATEL have a shared approach to visual analytics, although the underpinning calculations differ – as previously noticed. In both cases, the constituent elements of a problem situation are visualised as a scatterplot on an 'influence/dependence' Cartesian plane. The coordinates of each element on the plane are obtained from the influence/dependence vectors d* and r* described in Sections 2.2.1 and 2.2.2. Once a scatterplot is obtained, the problem elements are segmented based on the pre-defined portion of the plane in which they fall.
In the specific case of MICMAC the 'influence/dependence' plane has four quadrants associated with the following segmentation (see Godet, 1986, p. 153): (1) 'influential' elements (upper-left quadrant); (2) 'linkage/relay' elements, which are unsteady (upper-right quadrant); (3) 'dependent' elements (lower-right quadrant); and (4) 'autonomous' elements unlikely to play a role in future developments (lower-left quadrant). An L-shaped plot on the influence/dependence plane denotes stability (Godet, 2007, p. 173). This schematic proved to be popular in the ISM literature, which uses the term MICMAC improperly, as a synecdoche for this visualisation device. The DEMATEL scatterplot has only two quadrants (top/bottom), and its coordinate system requires that the influence/dependence vectors are turned into combined measures of influence and dependence. Specifically, the ordinate y = r* − d* indicates the 'net position' of an element: elements located in the top half (bottom half) of the plane are deemed highly influential (highly dependent) and classified as predominantly 'dispatcher' ('receiver'). The abscissa x = r* + d* is a proxy for 'total intensity', so that elements on the right-hand side of the plane have greater overall importance. This system of coordinates, originally devised by Fontela and Gabus (1974b), has remained substantially unchanged (e.g. Ethirajan et al., 2021; Gölcük & Baykasoğlu, 2016).
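The DEMATEL coordinate system can be sketched as follows; the total interaction matrix below is hypothetical, and each element is reduced to the coordinates x = r* + d* and y = r* − d* described above:

```python
import numpy as np

# Hypothetical total interaction matrix T* for four problem elements.
T = np.array([[0.2, 0.6, 0.5, 0.4],
              [0.1, 0.1, 0.3, 0.2],
              [0.1, 0.2, 0.1, 0.5],
              [0.0, 0.1, 0.1, 0.1]])

r = T.sum(axis=1)   # total influence (row-sums)
d = T.sum(axis=0)   # total dependence (column-sums)

x = r + d           # abscissa: 'total intensity'
y = r - d           # ordinate: 'net position'

for i, (xi, yi) in enumerate(zip(x, y), start=1):
    role = "dispatcher" if yi > 0 else "receiver"
    print(f"element {i}: intensity = {xi:.2f}, net = {yi:+.2f} -> {role}")
```

The same r and d vectors, plotted directly rather than combined, would yield the four-quadrant MICMAC segmentation; only the coordinate system differs.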

The graph partitioning approach (ISM)
The second approach to visual analytics in SPSM is a minimum-edge, hierarchical digraph – a 'backbone' or 'skeleton' – which is characteristic of the ISM approach. This backbone is obtained through a partitioning algorithm (described in Supplementary Materials S2) which groups strongly connected problem elements, and re-arranges these groups by hierarchical levels (Warfield, 1973b). Similarly to the scatterplots described above, highly influential problem elements (shown at the bottom of the hierarchy) are separated from highly dependent or resultant elements (shown at the top).
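The level-partitioning step can be sketched as follows (with a hypothetical reachability matrix, using numpy); following the convention above, the first level extracted contains the most dependent elements, shown at the top of the hierarchy:

```python
import numpy as np

# Hypothetical reachability matrix M (diagonal entries set to 1).
M = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

def ism_levels(M):
    # Iterated level partitioning: an element whose reachability set is
    # contained in its antecedent set sits at the current top level.
    remaining = list(range(M.shape[0]))
    levels = []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if M[i, j]}
            ante = {j for j in remaining if M[j, i]}
            if reach <= ante:   # equivalent to reach == reach & ante
                level.append(i)
        levels.append(level)
        remaining = [i for i in remaining if i not in level]
    return levels

print(ism_levels(M))
```

Each pass removes the elements already assigned to a level before re-computing the reachability and antecedent sets, so the hierarchy emerges level by level from the top down.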
As mentioned in Section 2.2.3, some methodological issues associated with this approach are substantially overlooked by the literature. One such issue stands out: the significant overlap with the joint problems – well known in computer science – of finding strongly connected components in a digraph (Deo, 1974) and a block-triangular permutation of its adjacency matrix (Strang, 1986: Ch. 16).
The original ISM algorithm was developed before personal computing became commonplace (Warfield, 1974, 1976), which favoured manual implementation over automation (Farris & Sage, 1975). Yet the extant ISM literature continues to replicate – almost without exception (e.g. Babu et al., 2021; Sushil, 2017) – the same manual steps illustrated by Warfield (1973b). Many observers fail to notice that these steps could be vastly simplified if the strongly connected components in the relevant digraph were initially identified by, for example, Depth-First Search (DFS) (Deo, 1974, p. 302), a process that generates the required block-triangular permutation of the corresponding adjacency matrix almost as a by-product. The implementation of DFS for the identification of strongly connected components is now a standard capability in network analysis software.
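One standard DFS-based method is Tarjan's algorithm, which emits strongly connected components in reverse topological order – precisely the ordering needed for a block-triangular permutation of the adjacency matrix. A minimal sketch on a hypothetical 0/1 adjacency matrix:

```python
def strongly_connected_components(adj):
    """Tarjan's DFS-based algorithm on a 0/1 adjacency matrix.
    Returns SCCs in reverse topological order, which directly yields
    the block-triangular ordering of the matrix."""
    n = len(adj)
    index = [None] * n
    low = [0] * n
    on_stack = [False] * n
    stack, sccs, counter = [], [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in range(n):
            if adj[v][w]:
                if index[w] is None:
                    dfs(w)
                    low[v] = min(low[v], low[w])
                elif on_stack[w]:
                    low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            sccs.append(sorted(comp))

    for v in range(n):
        if index[v] is None:
            dfs(v)
    return sccs

# Hypothetical digraph: 0 and 1 form a cycle; 2 points into that cycle
adj = [[0, 1, 0],
       [1, 0, 0],
       [1, 0, 0]]
print(strongly_connected_components(adj))  # [[0, 1], [2]]
```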
A second issue is that entire parts of the ISM algorithm are dismissed in the literature. For example, hardly any ISM application explicitly computes the so-called 'skeleton' matrix for the minimum-edge digraph, as originally intended by Warfield (1974). Overall, attempts to advance the ISM partitioning algorithm remain sparse (e.g. Kim & Watada, 2009).
A third and final issue is that the literature rarely acknowledges that the ISM algorithm fails to apply to a reachability matrix filled with ones – an indicator that any node can be reached from any other node, thus defeating the rationale for partitioning a digraph (Warfield, 1973b). This feature is exacerbated by concerns about how the reachability matrix is usually computed, which were expressed in Section 2.2.3.
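The degenerate all-ones case is straightforward to detect once the reachability matrix is obtained by Boolean powering of (I + A) until stabilisation – the standard derivation in ISM. A minimal sketch, with a hypothetical two-element adjacency matrix in which each element influences the other:

```python
def reachability(A):
    """Boolean powering of (I + A) until it stabilises - the standard
    derivation of the ISM reachability matrix."""
    n = len(A)
    R = [[1 if i == j else A[i][j] for j in range(n)] for i in range(n)]
    changed = True
    while changed:
        changed = False
        nxt = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                nxt[i][j] = 1 if any(R[i][k] and R[k][j] for k in range(n)) else 0
                if nxt[i][j] != R[i][j]:
                    changed = True
        R = nxt
    return R

A = [[0, 1],
     [1, 0]]  # hypothetical: two mutually influencing elements
R = reachability(A)
all_ones = all(all(row) for row in R)
print(all_ones)  # True: every node reaches every other, so partitioning is pointless
```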

Elicitation of expert judgment
Concepts such as post-normal science recognise the challenges of comprehending and managing complex situations in the absence of a theoretical basis for factual predictions (Funtowicz & Ravetz, 1993). The original intent of SPSM is to address similar challenges, through a disciplined approach to expert judgment and intuition leading to relational maps and structural analysis matrices.
In principle, a range of approaches can be adopted to help individuals contribute their judgment, intuition, and creativity in participative SPSM activities (e.g. Lendaris, 1979). In practice, the experts go through a pre-established list of questions for each pair of constituent problem elements previously identified. These questions may differ – compare e.g. Godet (1977, 1986) and Saxena et al. (1990). The latter introduces the concept of self-interaction matrix – a widely used instrument in extant ISM research – by which experts score n(n-1) contextual relationships in n(n-1)/2 evaluation steps, each involving a four-question checklist.
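The count works out because each unordered pair of elements is assessed once, and the answer chosen from the four-option checklist encodes both directed relationships at once. A minimal sketch (element names hypothetical; the V/A/X/O labels follow the convention attributed to Saxena et al., 1990):

```python
from itertools import combinations

# Each unordered pair (i, j) is assessed in a single step; the
# four-option answer encodes both directed relations at once:
#   V: i influences j;  A: j influences i;  X: both;  O: neither.
def elicitation_steps(elements):
    """Unordered pairs to be assessed: n(n-1)/2 steps covering
    n(n-1) directed contextual relationships."""
    return list(combinations(elements, 2))

elements = ["risk A", "risk B", "risk C", "risk D"]  # hypothetical
steps = elicitation_steps(elements)
n = len(elements)
assert len(steps) == n * (n - 1) // 2  # 6 steps cover 12 directed relations
```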
It can be challenging to assess the specific benefits of a given mode of engagement in terms of reducing the cognitive burden for decision makers (e.g. Kolfschoten et al., 2014). In the adjacent field of AHP, research has explored 'parsimonious' approaches centred on the decision maker, which reduce the number of paired comparisons required in practical applications (e.g. Abastante et al., 2018). As mentioned in Section 2.1, the conditions to infer comparative relationships in MCDA may not hold for the influence relationships that are prevalent in SPSM.
The growing ambition of extant SPSM literature to resemble survey research corresponds to a general loss of interest in the cognitive effort required by decision makers, and in human-machine interaction as a way to build consensus through structured dialogue (e.g. Sushil, 2018). Yet few works estimate such effort with time-related metrics – some that do are summarised in Table 3.
These works do not specify how the estimated time is allocated (e.g. interaction with computers, data processing). It is also unclear whether the time estimates provided account for human-computer interaction. Yet early ISM research was more prescriptive about the use of computers in facilitated group work (Janes, 1988).
Specifically, early ISM work sets out a human-machine interactive environment to elicit subjective judgment on contextual relationships (Malone, 1975). Warfield (1982) illustrates such an environment as consisting of: (1) the individuals involved and their perception of the problem situation; (2) the software and hardware embodying the necessary methodological steps; and (3) the relevant information dealt with (i.e. substantive content). Early ISM work also aimed to support group learning, with benefits accruing not only from the models generated, but also from partaking in the process (Warfield, 1982). Warfield and Cárdenas (1994) further develop the above principles through the concept of 'interactive management'.
Unlike early ISM, most SPSM approaches are not prescriptive on how experts should be engaged. For example, MICMAC encourages seeking a plurality of viewpoints using intuitive means, brainstorming, and unstructured interviews with relevant stakeholders (Godet, 1986) – but is elusive on how to do so. DEMATEL has resembled survey research since the outset, allowing experts to separately complete and submit their judgment via questionnaires. In this context, interaction with computers is limited to the analysis of these questionnaires, as it brings " … some order into the apparent chaos of thought" (Fontela & Gabus, 1974a). Few applications have explored the overlaps with rigorous case study research and discursive processes (Bolaños et al., 2005; Kwak et al., 2018).

Illustrative applications of SPSM to SCRM
This section addresses RQ2 by illustrating evidence from the SCRM literature that substantiates claims made in previous sections. A sample of the literature was obtained by querying Web of Science for abstracts/keywords containing the terms (DEMATEL OR MICMAC OR ISM OR "interpret* structural model*") AND (risk OR resilien*) AND (SUPPLY CHAIN). This search yielded 112 journal papers. We excluded papers that (1) were deemed not pertinent based on closer examination of abstract and title; (2) did not disclose sufficient analytical details; or (3) were published in journals that are 'author-pays' only (as the rigour of this publishing approach is debated). Given the illustrative aim of this section, the selection process reached saturation with fewer papers than a systematic review. The final sample consists of 50 references, four of which are not journal papers. Figures 2, 3 and Table 4 illustrate the sample and the proposed evaluation grid.

Constituent problem elements and contextual relationships
Conceptually, it is often recommended that risks should be regarded as interconnected rather than standalone (e.g. Chopra & Sodhi, 2004). However, the literature continues to focus on individual risks as opposed to risk interaction analysis using techniques such as SPSM (Kwak et al., 2018). Based on the 50 selected references (highlighted in Table 4), experienced practitioners typically help identify an arbitrary number of risks, as well as possible contextual relationships between them. In 78% of cases, experts were asked to evaluate contextual relationships between 18 risk items or fewer (Figure 3, left-hand side), with a clear prevalence of 'influences' (~48%) and 'leads to/helps achieve' (~32%) types of relationships (Figure 3, right-hand side).
In 54% of cases, experts scored the intensity of the identified contextual relationships (22% by a fuzzy scale), but rarely outside DEMATEL applications. Table 4 specifies alternatives to single-valued semi-numerical scores. Wu et al. (2017) illustrate a simultaneous application of two such approaches. In other (fewer) cases, experts also assessed the probability that risk events occur (e.g. Bañuls et al., 2017).

Expert engagement
Less than half of the reviewed papers specify how experts were engaged to elicit contextual relationships. Formalised techniques include Delphi and/or focus groups (e.g. Han et al., 2019; Kwak et al., 2018); workshops (e.g. Faisal et al., 2007); and case studies (e.g. Pfohl et al., 2011). Often, the number of experts involved in identifying relevant risks differs from the number involved in evaluating contextual relationships; for example, the former may involve fully-fledged surveys. Works that assume a collective response are often unspecific about how consensus among experts on paired risk assessments is reached. In only two cases is a voting system explicitly adopted (Alawamleh & Popplewell, 2011; Han et al., 2019). Where individual responses are sought instead, the averaging approach is used with few exceptions (an example of such exceptions is Song et al., 2017).

Algebraic and algorithmic aspects
As outlined in Section 2, the computational structures of MICMAC and ISM, if correctly applied, are incongruent; whereas ISM and DEMATEL are treated as mutually exclusive, despite their affinity. Yet 44% of cases considered here apply ISM and MICMAC in combination, while just 2% claim to combine ISM and DEMATEL. Only half of the reviewed papers disclose some equations, which reduces to less than 10% in the case of ISM-MICMAC combined. In hardly any cases does the ISM literature go beyond recalling some standard expression for the reachability matrix (e.g. Pfohl et al., 2011;Wu et al., 2015). Algebraic or algorithmic insights are often replaced by prose. For example, conceptual descriptions of the reachability matrix in ISM have little to do with its analytical derivation, examined in Section 2, and the underlying operations are manually implemented. Only one work (Hachicha & Elmsalmi, 2014) refers correctly to the original MICMAC algorithm. Applications of DEMATEL, on the other hand, are more likely to credit and disclose key equations. Some papers hint at the power series approximation of the inverse matrix, but without elaborating on the conditions for convergence (e.g. Ethirajan et al., 2021;Song et al., 2017).
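The convergence condition alluded to above can be made concrete: the DEMATEL power series D + D² + D³ + … converges to the total-relation matrix T = D(I − D)⁻¹ only when the spectral radius of D is below 1, which the normalisation step is meant to guarantee. A minimal sketch with a hypothetical direct-influence matrix, normalised by its largest row sum:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def total_relation(D, terms=50):
    """Truncated power series T = D + D^2 + ... + D^terms. This
    converges to D(I - D)^-1 only when the spectral radius of D is
    below 1 - the condition that normalisation is meant to secure."""
    n = len(D)
    T = [row[:] for row in D]
    P = [row[:] for row in D]
    for _ in range(terms - 1):
        P = mat_mul(P, D)
        for i in range(n):
            for j in range(n):
                T[i][j] += P[i][j]
    return T

# Hypothetical direct-influence matrix, normalised by its largest row sum
X = [[0.0, 2.0],
     [1.0, 0.0]]
s = max(sum(row) for row in X)
D = [[x / s for x in row] for row in X]
T = total_relation(D)  # approaches [[1, 2], [1, 1]] for this example
```

For this D the closed form D(I − D)⁻¹ equals [[1, 2], [1, 1]], and the truncated series matches it to high precision; with an unnormalised matrix (spectral radius ≥ 1) the same loop would diverge.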

Visualisation and human-computer interaction
The extant literature applies the conventional visualisations discussed in Section 2.3 without alteration. However, when MICMAC is implemented jointly with ISM, it is stripped of its characteristic computational aspects, and reduced to a four-quadrant visual categorisation procedure. In other cases the same treatment is used with ISM's characteristic minimum-edge digraph (e.g. Vishnu et al., 2019). Across the reviewed cases SPSM software tools are hardly ever deployed for computational and visualisation purposes (e.g. Hachicha & Elmsalmi, 2014). Most ISM work employs a convention for matrix-filling that requires no computer assistance, first introduced by Saxena et al. (1990) and denoted as 'VAXO' in Table 4.

Discussion
The idea of structuring complex problems as a system has been around for decades; the general intent being to ease managers' sense of helplessness, lack of confidence, and inability to take responsibility in the face of complexity (e.g. Ackoff, 1974;Senge, 2006). In the ongoing debate on whether rational analysis alone is sufficient in the face of complexity, SPSM adopts a hybrid stance. Like soft OR, it embodies a disciplined attitude towards complexity, and places considerable emphasis on problem structuring. Like hard OR, it acknowledges the limitations of prose as an alternative to rational analysis. The research presented in this paper compares and contrasts widely applied SPSM techniques through a methodological lens. The findings highlight algebraic and procedural aspects that are often taken for granted, as researchers now focus on specific application contexts. Our research shows that these aspects, whilst overlooked, affect the ability to impart a meaningful and sound relational structure on complex problem situations as perceived by experienced practitioners. Table 5 summarises key insights in response to RQ1 and RQ2. These are discussed below.

Highlights from the methodological comparison
The most prominent building block in SPSM is reliance on structural analysis matrices (either binary, semi-numeric, or fuzzy), with graph-theoretical interpretations to capture influence-type contextual relationships. However, for techniques such as MICMAC it is unclear what checks and balances should be in place while filling such matrices, to ensure that later computations based on the matrices work out as desired. These checks and balances are more prescriptively defined in adjacent MCDA techniques such as AHP (for an overview, see Marttunen et al., 2017).
Our findings emphasise the importance of matrix powering operations as the common algebraic device which enables SPSM to generate insights that practitioners can interpret and act upon. Although they are often overlooked, the differing assumptions about how the powers of a structural analysis matrix behave offer an invaluable lens to identify similarities and differences between individual strands of research. As an example, the reachability matrix in ISM and the matrix of 'total' effects in DEMATEL are, in fact, analogous. Yet, reachability matrix equations are rarely developed in full (e.g. Li et al., 2019). Even work that jointly applies DEMATEL and ISM fails to recognise analogies between the two methods (e.g. Gardas et al., 2019;Liu et al., 2021). Discussions around apparent differences are usually hastily compiled and lack methodological depth (e.g. Ethirajan et al., 2021;Vishnu et al., 2019;Zhao et al., 2020).
Our research also shows that MICMAC appears to borrow a key assumption about the convergence of powers of a structural analysis matrix from paired comparison theory, which is actually concerned with priority rather than intent structures (see e.g. Kendall, 1955;Saaty, 1987). Unlike a paired comparison approach, however, MICMAC fails to guarantee that the conditions for the powered matrix working properly are always met. Our findings also promote the standard eigenvalue problem as a common theoretical foundation to address the issue of guaranteed convergence for powers of a structural analysis matrix, in turn also providing useful diagnostics.
All three of the SPSM techniques develop characteristic visual analytics for interpretive purposes. Our findings highlight that apparently unrelated visualisation approaches – such as 'influence/dependence' scatterplots (MICMAC, DEMATEL) and minimum-edge hierarchical digraphs (ISM) – are actually built on similar computational grounds. To correctly complement each other, however, the computational analogies or incongruences between the different approaches need to be recognised and addressed. For example, the term 'MICMAC' is often just a synecdoche, denoting the use of its 2-by-2 visualisation device within ISM applications. The literature also fails to recognise that a major portion of the ISM algorithm – currently laid out manually without computer aid in most papers – is equivalent to the identification of strongly connected components in a digraph. This is a task that any network analysis software can effectively automate.
Regarding the preferred mode of engagement with experts, our research observed that survey-like approaches – as opposed to facilitated group learning – are now prevalent. Unlike survey research that is aimed at inductive generalisations, composing expert responses in an SPSM context can be a challenge (e.g. Fontela & Gabus, 1974b). Yet, the analysis of statistical significance in combining independent SPSM responses has barely advanced (e.g. Shieh & Wu, 2016). Even when group consensus replaces individual responses, the SPSM literature rarely discloses how it was reached, and whether human-computer interaction helped reduce the cognitive burden associated with the process. These overlooked areas of SPSM have received more attention elsewhere in the literature (e.g. Abastante et al., 2018; Kolfschoten et al., 2014), but few works place much emphasis on the design and deployment of digital tools to facilitate the task of engaging with the decision maker (e.g. Manzano-Solís et al., 2019; Settanni et al., 2018). The incumbent SPSM literature seems reluctant to deploy specialised software tools, despite the availability of free resources (for ISM: www.jnwarfield.com/; and for MICMAC: en.laprospective.fr/). This is no coincidence. Widely implemented approaches such as 'total' ISM (Sushil, 2017, 2018) regard the use of computers as optional, and encourage prose instead. This appears to be a departure from the original intent of structural thinking (e.g. Warfield & Staley, 1996).

Practical and theoretical implications
From a practical perspective, our research calls into question the justifiability of ISM, DEMATEL and MICMAC as separate research strands; mainly on the grounds of similarities and differences concerning the respective computational structures. In the past, ISM would differ from DEMATEL and MICMAC due to its idiosyncratic approach to expert engagement, facilitated by human-computer interaction. Today that distinction appears hardly justifiable, considering how ISM research now underplays the automation of computational as well as expert engagement tasks. At first glance, DEMATEL and MICMAC appear to differ in terms of fundamental equations. But that difference is most likely due to the half-hearted positioning of MICMAC between AHP and DEMATEL, without the methodological checks and balances of either.
From a theoretical perspective, one cannot help but notice how SPSM applications have mutated into 'shortcut' surrogates for survey research; thereby losing much of the original intent to support challenging managerial decisions in the face of complexity. A decision maker-centric approach is the exception rather than the rule in the extant academic literature. A further key aspect of SPSM often neglected today is that the benefits of the approach accrue not only from the analytical models that are generated, but also from participating in the process in itself (Warfield, 1982). In this context, it is worth noting that the foundational SPSM principles were developed at a time when cognitive biases and simplifying heuristics in human judgment were relatively unexplored (for an early overview see Schwenk, 1985).
These notions are now well established and being further developed in disciplines such as Behavioural Operations Research (Kunc, 2020). Looking back on the original intent and methodological principles behind SPSM, as this paper does, creates an opportunity to appreciate the merits of raising awareness of the limitations of human-bounded rationality in the face of complexity; at the same time promoting rigour, coherence and dialogue in the collective reflection (Fontela & Gabus, 1974b;Godet, 1986;Janes, 1988).

Concluding remarks
This paper compares and contrasts SPSM techniques (ISM, DEMATEL, MICMAC) that are considered a staple in the business and management literature, focusing in particular on methodology. As such this research is the first of its kind, and a major departure from the normative view in the extant literature, which rarely aims to advance or critically evaluate the techniques. Instead the extant literature focuses on specific application contexts (e.g. technology adoption, risk, sustainable managerial practices, supplier selection) and constituent problem elements (e.g. barriers, enablers, risks).
The comparative evaluation presented in this paper offers a unifying view across SPSM techniques – a view previously missing even though these techniques have been in use for decades. Our arguments are developed by taking a closer look at some characteristic procedural and algebraic aspects of SPSM, which are normally taken for granted or underplayed in the literature. Of interest to both practitioners and academics, our findings identify previously unnoticed analogies between techniques that have always been regarded as mutually exclusive. We also raise concerns about possible incongruences between techniques that are often applied jointly. The research emphasises the eigenvalue problem as a common theoretical platform, aiming to raise awareness of its importance for practical diagnostics. This approach helps to determine whether or not a given technique reliably yields the outcome that is hoped for, based on the input provided by experienced practitioners.
Besides these more computational aspects, our findings highlight a lack of rigour in the approaches used to facilitate engagement with experts, which are only rarely assisted by digital tools that seek to leverage human-computer interaction. While there are adjacent academic fields which emphasise the need to reduce the cognitive burden for decision makers, this aspect has gradually lost relevance in the SPSM field. Instead, the literature favours an uncritical application of research templates with a view to achieving 'shortcut' survey surrogates.
It is acknowledged that this research has limitations. First, in order to maintain a reasonable scope it could not feasibly conduct a comprehensive review of four decades of literature across three well-established techniques. Instead, the paper's claims are substantiated based on an in-depth analysis of models and equations for a subset of relevant applications and methodological development work. Second, the aspect of fuzzy set theory applied to SPSM, whilst mentioned in passing, has not been examined in detail. Third, the research does not consider crossovers between MCDA and SPSM.
Despite these limitations, this paper initiates a process of clarifying whether ISM, DEMATEL and MICMAC should be justified as autonomous research strands, a view which is currently widely assumed across the literature. The research challenges the legitimacy of the incumbent view, by providing a clearer, more analytical interpretation of the working requirements for each technique. Furthermore it provides academics and practitioners with the necessary insights and caveats to guide more informed applications of SPSM in the future. This approach of constructive criticism also opens up potential avenues for further research, especially with regard to the development of digital tools to automate and facilitate the process of expert engagement.