Including multiple perspectives in participatory multi-criteria analysis: A framework for investigation

Over the past few decades, a number of participatory multi-criteria analysis methods, combining deliberative procedures with multiple decision criteria assessment techniques, have been developed to tackle complex policy problems. However, several important aspects of such methods, including the way in which different and often contrasting viewpoints should be included in the analysis, appear to have been largely neglected by previous studies. Possible problems and drawbacks that may hamper the applicability and feasibility of multi-actor multi-criteria exercises and the utility and reliability of their outcomes also deserve further investigation. This article seeks to fill this knowledge gap by proposing a conceptual framework and classification scheme that illustrates the different possible approaches for identifying the key elements of the multi-criteria problem (i.e. options, objectives/criteria, weights and scores), while dealing with different points of view. It also discusses the potential advantages, disadvantages and issues of each approach and ultimately defines the overarching factors that should orientate the selection of one specific approach over the others.

However, in recent years, this theme seems to have become ever more pressing. Justifications for the adoption of participatory and deliberative practices and approaches have been provided in policy areas as varied as environment and natural resource management (Owens, 2000; Reed, 2008; Talley et al., 2016), the health sector (Church et al., 2002; Graham, 2007), education (Matshe and Pitsoe, 2013), transport and infrastructure planning (Al-Sharari, 2022; Bickerstaff et al., 2002; Ng et al., 2012), technological innovation (Macnaghten et al., 2005) and risk assessment (Webler and Tuler, 2021). As highlighted by Cass (2006), the rise in interest in participation and deliberation has been generated by a wide variety of social, political, scientific, institutional and cultural transformations occurring at an increasing pace since the last part of the 20th century. These changes have primarily led to a turbulent and highly interconnected world, yet one characterised by a significant fragmentation of knowledge, expertise, areas of responsibility and decision-making capacity (Dicken, 2011; Koppenjan and Klijn, 2004; Lundestad, 2004) and an increasing multiplicity of value systems, worldviews and identities (Bar-Yam, 2004; Funtowicz and Ravetz, 1991; Ney, 2009). Representative democracy and authoritative strategies based on the application of single perspectives are argued to be incapable of coping with the complexity of society and the problems inherent in it (Funtowicz and Ravetz, 1994; Roberts, 2000; Stirling, 1998). The increased demand for participatory and deliberative processes can also be considered to derive from a general loss of trust in (political and scientific) institutions and leaders across society as a whole (Stern and Fineberg, 1996; Webler and Renn, 1995) and a recognition of basic human rights regarding democracy and procedural justice (Beetham, 1994; Laird, 1993).
This latter argument is enshrined in Article 21 of the United Nations Declaration of Human Rights (UN, 1948), which establishes citizen participation in public affairs as a fundamental human prerogative, in the Aarhus Convention on information and public participation in environmental matters (United Nations Economic Commission for Europe (UNECE), 1998) and in other related instruments of national and international law. Some of the contended benefits of participation and deliberation, which, however, have often escaped systematic empirical scrutiny (Burton, 2009; Rowe and Frewer, 2004), comprise the possibility of improving the transparency of the decision-making process, highlighting marginalised perspectives, fostering mutual and interactive understanding between group decision-making participants, identifying common interests and shared values, and tackling and addressing differences of opinion promptly and systematically so as to avoid conflicts or their escalation, ultimately improving the quality of the final decision and strengthening its legitimacy (e.g. Coenen, 2008a; Creighton, 2005; Involve, 2005; Suárez-Herrera, 2009; Weale, 2001). Besides these instrumental advantages associated with the decisions or policies made as a result of more participation, Richardson (1983) stresses the importance of personal developmental benefits and educative elements that attach mainly to the individuals who participate in public debates and group decision-making.
Whereas much previous attention has been given to discussions of the merits of, and conceptual frameworks for, public involvement, extensive research has gradually focussed on the design of effective public participation processes with a strong evaluation component (Abelson et al., 2003) so as to inform assessment by embracing the contribution of participants' perspectives (Burke, 1998; Suárez-Herrera, 2009). A range of techniques and approaches for public involvement, including among others citizens' juries, citizens' panels, in-depth and focus group discussions, planning cells, deliberative polling and consensus conferences, have been developed by government authorities, academics and consultants (e.g. Coenen, 2008b; New Economics Foundation (NEF), 1999; Renn et al., 1995; Rowe and Frewer, 2005; Slocum, 2003; Van Asselt et al., 2001). Methods differ with respect to specific features such as participant selection, the number of participants, the number of meetings, the information used and the type of output generated. Classification systems, however, differ depending on the different definitions of what counts as a participatory, inclusive or deliberative process or mechanism, as well as on the various, and often contrasting, meanings ascribed to key terms such as 'participation' and 'deliberation' (Cass, 2006).
Over the past few decades, studies and research on public participation and stakeholder engagement have also been complemented by increased experimentation with a range of analytical and decision-support methods and tools (Fish et al., 2011; Gerber et al., 2012; Rauschmayer and Wittmer, 2006; Stagl, 2007), including multi-criteria analysis (MCA). Due to their inherent ability to feature different forms of data and information and account for different types of objectives and decision criteria, MCA methods have progressively come to be seen as a sort of natural framework for incorporating multiple stakeholders' interests and priorities into the analysis of a given problem and providing a higher level of structure to stakeholder dialogues (Dean, 2018). Therefore, whereas MCA methods were originally conceived to be employed only by a few analysts and specialists, with little or no participatory mechanisms included, in the course of time, many arguments have been put forward to go beyond this technocratic approach to the analysis (e.g. Banville et al., 1998; Floc'hlay and Plottu, 1998; Macharis and Bernardini, 2015; Petts and Leach, 2000; Stirling, 2006; te Boveldt et al., 2021; Vari, 1995). Hence, approaches combining participatory and deliberative techniques with (in many cases, simplistic forms of) MCA have appeared in a rather diffuse way, in many planning and policy fields (e.g. Barfod, 2018; Burgess et al., 2007; Cornet et al., 2018a, 2018b; De Brucker et al., 2015; Gregory and Keeney, 1994; Hickman and Dean, 2018; Macharis et al., 2009, 2012; McDowall and Eames, 2007; Munda, 2008; Proctor and Drechsler, 2006; Renn et al., 1993; Stagl, 2006; Stirling and Mayer, 2001; Ward et al., 2016a, 2016b).
On one hand, the general principles and presumed benefits of participatory MCA methods have been widely espoused in the literature. Indeed, in line with the arguments of studies on participatory and deliberative practices, a multi-actor multi-criteria exercise is generally contended to capture the full spectrum of interests and values in dispute more accurately than analyst-led MCA assessments and other non-participatory appraisal and evaluation approaches. In addition, the inclusion of different points of view in the analysis is also assumed to enhance the clarity of the process and increase the possibility for the legitimation of its outcomes.
On the other hand, to the best of the author's knowledge, it seems that potential shortcomings, which may hamper the applicability of such methods, have not been sufficiently explored by their proponents. Notwithstanding the dramatic growth in popularity experienced by participatory MCA approaches, so far, with a few exceptions (e.g. Cornet et al., 2018b; Dean, 2018, 2021; Dean et al., 2019; Merino-Saum, 2020; Stirling and Mayer, 2001), there seems also to have been a lack of comprehensive analyses and comparative studies of such methods. Important aspects such as the selection of group decision-making participants, the level of involvement of participants in the process and, above all, the way in which different and often contrasting viewpoints should be processed and included in the multi-criteria framework do not seem to have received much attention in the literature. This article seeks to fill this knowledge gap by proposing a conceptual framework and classification scheme with a view to promoting a better understanding and appreciation of the key features and issues of such methods.
The rest of the article is organised as follows. The next section offers an overview of MCA and participatory MCA. The third section discusses the methodology used to collect data and information for this study. The fourth and fifth sections describe and compare the key features of participatory MCA methods, and the different manners in which multi-actor multi-criteria assessment exercises can be undertaken, especially with reference to the inclusion of different points of view in the multi-criteria framework. The sixth section critically discusses potential strengths, weaknesses and issues of these different approaches. Finally, the seventh section includes some concluding remarks.

MCA and participatory MCA methods: An overview
Appraisal and evaluation methods can be classified in several ways (e.g. Faludi and Voogd, 1985; Oliveira and Pinho, 2010; Rogers and Duffy, 2012; Söderbaum, 1998). One of the simplest classification schemes is based on the number of objectives and decision criteria considered in the analysis. From this point of view, it is possible to distinguish between two families of methods, although their boundaries are frequently blurred (Dean, 2020, 2022):

• Mono-criterion methods, which assess a given plan against a single and specific objective. This family includes, for instance, cost-benefit analysis (CBA), which assesses a plan primarily against the objective of economic efficiency (as shown by the benefit-cost ratio or the net present value of the plan), by translating all impacts into discounted monetary terms.
• Multi-criteria methods, which appraise or evaluate a plan by taking into account (more explicitly than mono-criterion methods) the various dimensions of interest, and the interplay between multiple, often contrasting, objectives and different decision criteria and metrics.
As illustrated in the previous section, there is a large array of different participatory approaches. The same is true for multi-criteria methods. Therefore, contrary to what is commonly believed, MCA does not constitute a single, specific method. Rather, it should be understood as an umbrella term for a number of different techniques and tools by which multiple objectives and decision criteria can be formally incorporated into the analysis of a problem. The works of Kuhn and Tucker (1951) and Charnes and Cooper (e.g. Charnes and Cooper, 1961; Charnes et al., 1955) on goal programming in the field of operational research are generally regarded as one of the major stimuli for the development of this discipline. However, as pointed out by Köksalan et al. (2011), the real roots of MCA are much more ancient, extending to studies of classical economists and mathematicians. Over the decades, the evolution of MCA has been directly or indirectly influenced by research in different areas of study (e.g. utility and value theories, social choice theory, revealed preference theory, game theory, and fuzzy and rough set theories), so that, presently, the realm of MCA comprises many subfields and different schools of thought (Bana e Costa et al., 1997; Figueira et al., 2005a; Köksalan et al., 2011). Since the late 20th century, MCA has progressively gained importance as an appraisal and evaluation approach in a number of fields, including ecology, sustainability and environmental science (Herath and Prato, 2006; Huang et al., 2011; Wang et al., 2009), health care decision-making (Thokala et al., 2016), banking and finance (Aruldoss et al., 2013), urban and regional planning (Nijkamp et al., 1990) and transport project appraisal and evaluation (Macharis and Bernardini, 2015).
This increased popularity can be primarily attributed to an ever-greater awareness of the fact that many sustainable development and other complex policy problems facing society have a multi-dimensional nature, imply difficult trade-offs and, therefore, require the careful examination of a variety of different, often conflicting perspectives and aspects (Giampietro, 2003; Munda, 1995, 2008). The MCA literature is rich with several dozens of different methods and techniques accounting for multiple objectives and decision criteria (for a brief overview of the different approaches, see Dean, 2020; more comprehensive examinations of the various MCA methods can be found in Belton and Stewart, 2002; Figueira et al., 2005b; Roy, 1996; and Triantaphyllou, 2000). Although these methods can differ even substantially from one another, many of them have certain aspects in common and exhibit a similar decision-support framework, which includes the following key elements:

• Option: an alternative course of action proposed to address a perceived problem and achieve an overarching end result.
• Objective: an intended and specific aim against which any proposed option is being assessed.
• Criterion: a specific measurable indicator of the performance of an option in relation to an objective, which allows measuring (in a quantitative or qualitative manner) the extent to which an option meets that objective. For instance, the objective of 'promoting economic growth' can be measured through the 'GDP growth rate' criterion.
• Performance Score: a pure number (with no physical meaning), belonging to a given scale (e.g. a 0–1 scale, a 1–100 scale or a −5 to +5 scale), that identifies the performance of an option against a specific objective/criterion. High-performing options are ascribed high scores, whereas low-performing options score lower on the scale.
• Criterion Weight: a coefficient which is commonly and simplistically intended to represent the level of importance of an objective and associated criterion relative to the other objectives and criteria under consideration. High-importance objectives (and criteria) are identified with high weights.
Typically, in a multi-criteria decision-making problem, one or more project options are assessed against a number of different objectives, for which a set of criteria have been identified. The performances of an option against the various objectives and criteria, which can be assigned different weights, are evaluated by means of scores. Overall, what formally defines an MCA method is the set of rules establishing the nature of options, objectives, criteria, scores and weights, as well as the way in which objectives/criteria, scores and weights are used to assess, compare, screen in/out or rank options. In this regard, MCA methodologies range from highly rigorous to very simple methods. Rigorous MCA methods, such as the Multi-Attribute Utility Theory approaches, the Analytic Hierarchy Process, ELECTRE, PROMETHEE and REGIME, are based on elaborate procedures and sometimes also on advanced mathematical models and computer algorithms. By comparison, simplistic methods tend to disregard formal MCA theory and often seek to produce a rather rough cardinal ranking of the options at hand by calculating the overall performance of each option as the weighted sum of its single performance scores against the different objectives and criteria considered in the analysis (Dean, 2020, 2022). The results of MCA are generally presented by means of graphs, charts or summary tables illustrating the performances of the different options under examination against the identified objectives and associated appraisal criteria. Figure 1 illustrates the basic steps of an MCA exercise aimed at assessing a set of M alternative options (a_1, a_2, a_3, . . ., a_M) against a set of N appraisal criteria (c_1, c_2, c_3, . . ., c_N), which have been assigned different weights (w_1, w_2, w_3, . . ., w_N).
This kind of decision problem can be represented in the form of an appraisal summary table whose generic element, x_j(a_i), represents the evaluation of the i-th alternative by means of the j-th criterion.
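The simplistic weighted-sum variant described above can be sketched in a few lines of code. The following Python fragment is purely illustrative: the option names, criteria, weights and scores are invented for the example, and it should not be read as a full MCA implementation.

```python
# Illustrative weighted-sum MCA sketch; all names and numbers are invented.
# scores[i][j] is the performance score x_j(a_i) of option i against
# criterion j, expressed on a common 0-100 scale; the weights sum to 1.

def weighted_sum_ranking(options, scores, weights):
    """Rank options by the weighted sum of their performance scores."""
    assert all(len(row) == len(weights) for row in scores)
    totals = {name: sum(s * w for s, w in zip(row, weights))
              for name, row in zip(options, scores)}
    # Highest overall weighted performance first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

options = ['Bypass road', 'Light rail', 'Bus priority']   # a_1 ... a_M
weights = [0.5, 0.2, 0.3]                                 # w_1 ... w_N
scores = [[80, 30, 50],   # Bypass road against c_1 ... c_N
          [60, 70, 80],   # Light rail
          [50, 60, 70]]   # Bus priority

for name, total in weighted_sum_ranking(options, scores, weights):
    print(f'{name}: {total:.1f}')
```

As the surrounding text notes, rigorous methods such as ELECTRE or PROMETHEE use far more elaborate aggregation rules than this simple weighted sum; the sketch only shows how scores and weights combine into an overall option ranking.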
Like many other appraisal and evaluation techniques, MCA methods can be undertaken either in a non-participatory (i.e. analyst-led) or participatory manner. In non-participatory assessments, the analysis is carried out independently by one analyst or a research team of analysts, according to a typical technocratic approach. The analysts gather, process and interpret data, taking (to the greatest extent possible) a general and independent view of the problem at hand, and ultimately present the results of the analysis to one or a few decision-makers (e.g. a Minister or a Government Department; a person, a group of individuals or a committee with responsibility for the decision).
By contrast, participatory MCA methods seek to adopt a more collaborative decision-making style and, besides analysts and decision-makers, involve some group decision-making participants in the analysis. These participants usually comprise problem stakeholders and, in some cases, the public at large. Sometimes, in an attempt to incorporate a more scientific perspective into the analysis, some academics and experts can also be involved. In operational terms, the steps of participatory MCA methods resemble those of analyst-led MCA techniques. Indeed, the process, led by the research team of analysts, typically encompasses the following key stages (which can also take place in a slightly different order): initial problem analysis and definition; development of alternative options to address the problem at hand; identification of objectives and criteria against which to test options; analysis of the impacts of the alternatives against the different objectives and criteria, and scoring of these impacts; distribution of weights to the objectives and associated criteria; and combination of scores and weights to obtain the final option ranking.
However, differently from analyst-led methods, in participatory MCA techniques group decision-making participants can contribute to the identification of the key elements of the multi-criteria framework (i.e. options, objectives, weights and scores). Methodological adaptations of MCA to group decision-making seem thus to have taken place primarily in three main domains (Dean, 2018, 2021):

• Identification and selection of potential group decision-making participants.
• Involvement of participants in the analysis, and management of the multi-actor multi-criteria process and group dynamics.
• Collection, processing and inclusion in the multi-criteria framework of the preferences of the different group decision-making participants.
Each domain, however, entails some critical dilemmas and methodological challenges, which will be examined in the remainder of this article through the use of a conceptual framework and classification scheme. First, however, the next section describes how data have been collected to investigate the above aspects.

Research methodology
The present study has been based on a comprehensive examination of the literature. The review has considered, in particular, two (partially overlapping) areas of research on participatory MCA:

• Theory and conceptual frameworks regarding the design and evaluation of participatory MCA exercises.
• Studies discussing the (theoretical or practical) applications of participatory MCA methods to different policy problems.
The data collection strategy has been mainly based on a computerised search. Articles have been searched through the digital e-catalogue of the University College London Library, scientific databases (e.g. ScienceDirect) and ordinary web-search engines (e.g. Google). The search has been undertaken by using the Boolean operator 'AND' with the search terms 'participatory' (or 'deliberative' or 'participatory processes' or other similar terms) and 'multi-criteria analysis' or other analogous terms (indeed, in the literature MCA is also known under the names of multiple-criteria decision-making, multiple-criteria decision analysis, multi-objective decision analysis, multiple-attribute decision-making or multidimensional decision-making). The articles thus obtained have been surveyed and further papers have been identified by tracking cited references. The articles selected using the above search strategy have been supplemented by papers recommended by colleagues.
With reference to the first area of research, only a few articles discussing the possible manners in which a multi-actor multi-criteria exercise can be run and the ways in which different viewpoints can be incorporated into the multi-criteria framework (e.g. Belton and Pictet, 1997; D'Este, 2009; Leyva-López and Fernández-González, 2003) have been identified.
As regards the second area of research, by comparison, more than 60 (academic and grey) literature publications, almost all of which were produced in the last two to three decades, have been examined, and over 35 different multi-actor multi-criteria approaches (and several variants of these methods), proposed in the fields of transport and infrastructure planning, energy, natural resource management, health and technology risk assessment, have been analysed and compared. While not fully exhaustive, this list of methods and the case studies investigated here can be seen as a fairly representative sample providing useful information on the different possible approaches to participatory MCA. All these articles have been summarised by using a standardised extraction sheet to elicit information about the manner in which group decision-making participants are typically identified, selected and involved in the multi-actor multi-criteria exercise, how their preferences are taken into account in the analysis and ultimately included in the multi-criteria framework, and other general features of the proposed methods (i.e. field of application, MCA technique adopted to combine scores and weights, expected duration of the process and cost). The analysis of the literature has also been complemented with insights from numerous discussions with experts and proponents of such methods. Table S1, included in the supplementary material, presents an overview of the main attributes of these methods, which will be presented and discussed in depth in the following sections.

Identification and selection of group decision-making participants and their involvement in the process
In participatory MCA, as well as in other participatory evaluation methods, the identification of group decision-making participants is obviously a fundamental step of the process and represents one of the most debated topics among scholars and practitioners (e.g. Abelson et al., 2003; Burton, 2009; Cass, 2006; Davies et al., 2005; Falconi and Palmer, 2017; Glucker et al., 2013; Gregory, 2000; Petts, 1999; Sewell and Coppock, 1977a; Weale, 2001). In general, participatory processes can span from small group decision-making procedures, focussing on specific and circumscribed problems and affecting a limited number of individuals, to negotiation processes on complex and uncertain social and policy problems, having implications for a very wide area and a large number of people (Kilgour et al., 2010). Most participatory MCA methods seem to have been expressly conceived to support resolutions on this latter category of problems. For example, the Multi-Criteria Mapping approach has been proposed as a novel method for the social appraisal of the potential risks associated with the use of genetically modified foods (Stirling and Mayer, 2001) and the implications of alternative energy future scenarios (McDowall and Eames, 2007); the Multi-Actor Multi-Criteria Analysis has been put forward as an effective approach for engaging multiple actors in large-scale transport infrastructure project appraisal (Cornet et al., 2018b; Macharis et al., 2009) and the resolution of sustainable development dilemmas (Cornet et al., 2018a; De Brucker et al., 2013; Macharis et al., 2012); the Participatory Multicriteria Evaluation technique has been envisaged by Stagl (2006) to open up discussions on energy policy in the United Kingdom; and the Deliberative Multicriteria Evaluation method has been developed by Proctor and Drechsler (2006) to structure complex decision-making processes involving environmental considerations.
It is evident that, in the case of controversial policy problems affecting society at large, the identification of potential group decision-making participants is rarely a straightforward process and can become quite challenging. There is, of course, no definitive prescription for identifying and engaging stakeholders. However, while some specific techniques are suggested in the literature (Banville et al., 1998; Bryson, 2004; Grimble and Wellard, 1997; Ward et al., 2016a), the great majority of the examined publications on participatory MCA are rather vague on how group decision-making participants should be identified and selected.
In the large majority of the methods reviewed, participants are involved in the multi-actor multi-criteria exercise as representatives of organised groups (e.g. local community groups, landowners, business groups, environmental experts), whereas a few methods (e.g. Proctor and Drechsler, 2006; Stagl, 2006) require participants to take part in the process as single individuals (e.g. citizen panels of different sizes). Whereas both approaches have their advantages and limitations (Kahane et al., 2013; Renn et al., 1993), the rationale for these choices and their implications are rarely explored by proponents of participatory MCA methods. For example, although the first approach seems quite sensible, it must be noted that, in sharp contrast to normative stakeholder theory, which assumes that problem stakeholders can be subdivided into rigorously defined and rigid classes, very few stakeholder groups are in reality internally homogeneous in terms of values, interests or priorities (Petts, 1999; Vanclay, 1999). A number of partially overlapping groups can thus be identified according to the parameters used to categorise potential decision-making participants (Booth and Richardson, 2001; Wolfe and Putler, 2002).
Explicit indications regarding aspects such as the optimal number of participants and groups and the ideal number of participants in each group are generally also not offered. Some authors (e.g. Macharis and Nijkamp, 2011) contend that, ideally, in a participatory MCA exercise all the interested and affected parties for the problem at hand should be involved or adequately represented, with no viewpoint excluded a priori. However, it is evident that, especially for major policy problems, there is a trade-off between the objectives of democracy and inclusiveness and the more practical need for creating a workable and efficient process through the involvement of a limited number of people. In this respect, the participatory MCA methods examined in this research involve, on average, 10-30 people, who participate in the process individually or as representatives of a few (i.e. 3-6) stakeholder groups. These figures have been further confirmed by the results of a survey carried out among experts and participatory MCA advocates who argued that working with more than 30 people would make it extremely difficult to manage the process effectively (Dean, 2018).
The comparative analysis undertaken as part of this study has also allowed extracting information about the level of involvement of group decision-making participants in the process. Overall, participatory MCA methods range from limited-participatory techniques, where participants take part only in some stages of the process and thus have the possibility of informing the multi-criteria framework only partially, to fully participatory exercises, in which problem stakeholders (and sometimes experts) are offered the opportunity to provide a critical input for all the elements of the framework. Hence, at one extreme of this ideal spectrum there are methods such as the Goal-Achievement Matrix (Hill, 1968), one of the first participatory MCA approaches proposed in the literature, conceived primarily to aid land use and transportation planning decisions, where participants' preferences are taken into account only during the determination of the weighting of objectives and associated criteria (Approach 'C' in Figure 2). At the opposite end of this ideal spectrum there is, for instance, the Multi-Criteria Mapping (Stirling and Mayer, 2001), where participants are directly involved in the determination of options, objectives, weights and scores (Approach 'P' in Figure 2). Other methods are characterised by a moderate degree of participation. This is, for example, the case of the Multi-Actor Multi-Criteria Analysis (Macharis et al., 2009, 2012), in which participants can determine the list of objectives and their respective weights (Approach 'K' in Figure 2).
To engage with stakeholders and experts, the various participatory MCA methods employ a wide range of different participatory techniques, spanning from simple interviews and structured questionnaires (e.g. Cornet et al., 2018b; Stirling and Mayer, 2001) to workshops, focus groups and in-depth group discussions (e.g. Barfod, 2018; Barfod and Salling, 2015). Whereas the majority of the examined methods tend to rely on only one participatory technique, some methods (e.g. Burgess et al., 2007; Renn et al., 1993) adopt and combine different participatory techniques during the different stages of the MCA exercise with a view to eliciting different types of valuation information from participants. In the attempt to facilitate group processes and offer a user-friendly interface to represent complex policy problems, some methods also make large use of specialised software (Baudry et al., 2018) and visualisation tools (e.g. Lami et al., 2011; Pensa et al., 2013).
On one hand, as highlighted in the literature on participatory and deliberative evaluation (e.g. Falconi and Palmer, 2017; Hare, 2011), it is sensible to assume that different problems require different levels of involvement of participants and different engagement techniques. On the other hand, also concerning these important aspects, thorough explanations and justifications for the adopted participatory approaches can be found in only a few of the examined articles (e.g. Burgess et al., 2007; Cornet et al., 2018b; Renn et al., 1993; Stirling and Mayer, 2001).

Treatment and inclusion of the individual preferences in the multi-criteria framework
One of the most critical aspects of participatory MCA methods, if not the most critical, is the manner in which the positions of the different group decision-making participants are combined and included in the multi-criteria framework that is used to increase understanding and support decisions and actions on the problem under study. This framework, as already discussed in the second section, consists of all the possible alternative options to address the problem, the list of objectives (and associated criteria), and the sets of performance scores and criterion weights. Especially in decision-making procedures where participants have very different or even diametrically opposed interests and priorities, the inclusion of multiple points of view in the multi-criteria framework can turn out to be a rather difficult and controversial task. This aspect, however, does not seem to have received much attention in the MCA literature. Indeed, only some authors (e.g. Macharis and Nijkamp, 2011; Munda, 2008; Stirling, 2006; and Stirling and Mayer, 2001) discuss in their papers, although quite briefly, the rationale behind the strategies adopted to handle data and information provided by the different group decision-making participants. Conceptual schemes identifying possible approaches for incorporating multiple viewpoints in the analysis have been proposed by Belton and Pictet (1997); Leyva-López and Fernández-González (2003) and D'Este (2009). However, although useful, these models are not capable of adequately capturing the full spectrum of participatory MCA methods available in the literature. Here, a more comprehensive framework, which expands on these previous works, is proposed. The framework, as presented below, envisages five different basic strategies for identifying options, objectives, weights and scores while dealing with different points of view (Figure 3).

• Exclusion: a specific component of the multi-criteria framework that is being defined (i.e. the catalogue of feasible options, the list of objectives, the set of weights or the set of scores) is established directly by the research team of analysts, irrespective of the standpoints of the various group decision-making participants.
• Filtration: the positions and preferences of the different group decision-making participants regarding a given element of the multi-criteria framework (i.e. options, objectives, weights or scores) are examined by the analysts of the research team. This information might then be combined with other data derived from other sources and used by the research team to define that particular component of the framework.
• Sharing: the various group decision-making participants engage in a process of negotiation, suitably mediated by the research team, with the view to reaching a consensus of opinion over a given element of the framework (i.e. an agreed list of alternative options to be assessed; a common list of objectives on which general consensus has been achieved; a shared set of weights or scores).
• Aggregation: the research team identifies a common value for a given element of the multi-criteria framework, through the construction of a representative value that minimises the differences between the positions of the various group decision-making participants.
• Disaggregation: the points of view of the different participants regarding a given element of the multi-criteria framework are kept apart from each other and included separately in the framework.
As can be seen from Figure 3, the exclusion, filtration, sharing and aggregation strategies are explicitly aimed at generating a common value for the element of the multi-criteria framework under examination (i.e. a common list of options, a common list of objectives and associated criteria, a common set of weights or a common set of scores). These approaches, however, all stand on contrasting assumptions and imply different procedures, which are likely to lead to different results. By contrast, with the disaggregation strategy, the viewpoints of the different group decision-making participants are made explicit in the analysis. With this approach, multiple values for the element of the multi-criteria framework under examination are obtained (i.e. multiple lists of objectives, and multiple sets of weights or scores, which reflect the different interests and priorities of participants). The exclusion strategy represents a non-participatory approach to the analysis and entirely disregards the opinions of the various group decision-making participants. With the filtration approach, by comparison, the viewpoints of participants are examined by the analysts of the research team, who then establish whether and how to include this information in the multi-criteria framework. Finally, with the sharing, aggregation and disaggregation strategies, the information provided by participants is directly processed and used to develop the multi-criteria framework, although some assistance from the research team may be needed to avoid inconsistencies (e.g. exclusion of unfeasible options from the list; rewording of some objectives to improve clarity; reconciliation of similar objectives and criteria to avoid double counting issues).
As displayed in Table 1 and described further below, the five above-mentioned strategies can be applied to (almost) all the elements of the multi-criteria framework. Some of these strategies can also be applied to the final option ranking obtained as a result of the multi-criteria assessment.

Determination of options
The identification of a set of alternative options typically represents one of the first steps of any MCA process. The participatory MCA methods examined in this study adopt different approaches for determining the options to be appraised. The practical need for ensuring an essential degree of comparability between the standpoints of group decision-making participants imposes, however, the use of a set of options common to all participants. Indeed, it is evident that a situation where the different parties involved in the MCA exercise consider different sets of alternatives would prevent the research team leading the multi-actor multi-criteria process from formulating any meaningful recommendations concerning the best course of action to address the problem at hand. Therefore, the disaggregation approach is not applicable to option definition, whereas all the other four strategies can in principle be adopted at this stage.

• Exclusion: the simplest and most straightforward approach sees options being defined in advance, before the formal commencement of the MCA process, or being determined by the research team, irrespective of the points of view of participants.
• Filtration: to determine the options to be assessed, the research team can also draw (more or less extensively) on the information gathered through a series of interviews and workshops with the different group decision-making participants.
• Sharing: the list of feasible options can also come directly from participants through a negotiation process mediated by the research team, where different interests and priorities are explored with the view to arriving at a shared list of plausible alternative courses of action to address the problem at hand.
• Aggregation: finally, the research team can also arrive at a final set of options by simply including in this list all the different (feasible and realistic) alternative proposals suggested by the different individuals and/or groups taking part in the process, without the need for participants to cooperate and achieve consensus on this list.

Determination of objectives (and criteria)
Unlike in the case of options, comparability between the positions taken by the different participants on objectives (and associated appraisal criteria) is not an essential requirement for the practical feasibility of the process. Therefore, all five basic strategies presented at the beginning of this section can be employed for the identification of objectives.

• Exclusion: a common set of objectives can be identified by the research team, totally independently of the preferences of participants. To do so, the research team can draw, for example, on studies, reports and policy documents concerning the given decision-making situation.
• Filtration: the research team can also use data concerning the preferred objectives of the various group decision-making participants as one of the sources of information for determining a set of objectives common to all the parties involved in the exercise.
• Sharing: a shared list of objectives and appraisal criteria can be obtained through a mediation and consensus-building process between the various group decision-making participants.
• Aggregation: a common list of objectives can be developed by the research team in a more mechanistic manner, by combining and integrating the single lists of objectives suggested by the different group decision-making participants. Group discussions and final consensus on this list are not required.
• Disaggregation: finally, participants can also be allowed to assess the options by employing exclusively their own objectives. In this situation, assuming that participants have different interests and aims, the multi-criteria framework will eventually incorporate as many distinct and contrasting lists of objectives as there are groups (for participatory MCA methods where participants take part in the process as representatives of organised groups) or individuals (for participatory MCA methods where participants take part in the process as single individuals).
Once objectives have been identified by employing one of the above-mentioned strategies, the next step is to select appropriate criteria (one for each objective) that can measure the extent to which the options under investigation meet the chosen objectives. In participatory MCA, this is generally considered a rather technical step and is typically carried out by the research team of analysts leading the analysis, based on the available data and information. It should not be forgotten, however, that the selection process of the indicators is not totally unbiased and immune from outside pressures (Gallopín, 1996;Lyytimäki et al., 2014;Niemeijer and de Groot, 2008).

Determination of scores and weights
Similarly to the list of objectives, neither the set of scores nor the set of weights needs to be common to all the various group decision-making participants for the multi-actor multi-criteria process to be workable. Hence, in principle, all five strategies can be used to define scores and weights.

• Exclusion: the research team of analysts can directly identify a set of scores and/or weights regardless of the actual positions of the various group decision-making participants on these elements of the multi-criteria framework.
• Filtration: a set of scores and/or weights common to all the various parties can also be identified by the research team, after an analysis of the priorities and opinions of the different group decision-making participants.
• Sharing: a common set of scores and/or weights can be derived from a negotiation exercise among the various group decision-making participants with the assistance and mediation of the research team.
• Aggregation: common scores and/or weights can also be determined as the arithmetic mean (or the geometric mean or a similar average value) of the individual performance scores (or criterion weights) provided by the different individuals and/or groups taking part in the process.
• Disaggregation: finally, the parties involved in the multi-actor multi-criteria exercise can also be given the possibility of using their own scores and/or weights. Hence, if the points of view of participants differ, different sets of scores and/or weights will be included in the overall multi-criteria framework.
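The aggregation strategy for weights is the most mechanical of the five and can be sketched in code. The following is an illustrative Python sketch, not taken from any of the cited methods; the criterion names and participant weights are entirely hypothetical, and the arithmetic mean is used (a geometric mean or similar average could be substituted).

```python
import statistics

def aggregate_weights(individual_weights):
    """Aggregate individual criterion weights into a single common set.

    individual_weights: one dict per participant, mapping criterion -> weight.
    Returns the arithmetic mean of the weights, renormalised to sum to one.
    """
    criteria = individual_weights[0].keys()
    mean_w = {c: statistics.mean(w[c] for w in individual_weights) for c in criteria}
    total = sum(mean_w.values())  # renormalise so the common weights sum to one
    return {c: w / total for c, w in mean_w.items()}

# Hypothetical weights from three participants over two criteria
participants = [
    {"cost": 0.7, "environment": 0.3},
    {"cost": 0.5, "environment": 0.5},
    {"cost": 0.3, "environment": 0.7},
]
common = aggregate_weights(participants)  # roughly {'cost': 0.5, 'environment': 0.5}
```

Renormalising after averaging keeps the common weights summing to one even when the individual sets are not perfectly normalised; note how the averaged value here satisfies neither the participant prioritising cost nor the one prioritising the environment, which anticipates the critique of aggregation discussed later in this section.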

Determination of the final rankings
Some participatory MCA methods rely exclusively on one strategy to develop the multi-criteria framework while dealing with multiple viewpoints. This is, for example, the case of the Deliberative Multicriteria Evaluation (Proctor and Drechsler, 2006), where the various parties involved in the process are required to negotiate and eventually reach an agreement over the list of options to be appraised, the list of objectives and their relative importance weights, and the performance scores of the options against the different objectives (i.e. sharing approach to options, objectives, scores and weights). In other participatory MCA methods, by comparison, options, objectives, weights and scores are identified by employing different strategies. In the Policy-Led Multi-Criteria Analysis (Ward et al., 2016a, 2016b), for instance, objectives and respective weights are determined by the research team independently of the positions of group decision-making participants (i.e. exclusion approach to the definition of objectives and weights). Participants are then asked to brainstorm and develop together the list of options to be assessed (i.e. sharing strategy for the identification of options). Finally, the different stakeholder groups taking part in the process are allowed to score, according to their point of view, the performance of each option against the various objectives and associated criteria (i.e. disaggregation approach to scoring). In the Multi-Actors Multi-Criteria Analysis (Macharis et al., 2009, 2012), stakeholder groups can instead adopt their own list of objectives and use their own weighting schemes (i.e. disaggregation approach to the definition of objectives and weights), whereas options and performance scores are typically defined by the research team (i.e. exclusion strategy for the identification of options and the elicitation of scores).
Once the multi-criteria framework, consisting of options, objectives and criteria, scores and weights, has been developed, a final ranking of the options can be derived so as to determine the best course of action to address the problem at hand. The large majority of the participatory MCA methods proposed in the literature tend to rely on rather simplistic and user-friendly MCA techniques (te Boveldt et al., 2021), where generally the overall performance of an option is computed as the weighted sum of the single performance scores of the option against the different objectives and associated criteria considered in the analysis. Typically, in a multi-actor multi-criteria exercise, all the group decision-making participants are required to adopt the same MCA technique to combine scores and weights. A departure from this assumption is represented by the method proposed by De Keyser and Springael (2002), in which group decision-making participants are also allowed to choose their own MCA techniques. However, differently from what these authors claim, it appears hardly possible for this method to open new perspectives and opportunities for group decision-making. Indeed, the various MCA methodologies are based on contrasting assumptions and rules, so that, when applied to the same problem, two different MCA methods are likely to produce different and hardly comparable results (Dean, 2020, 2022). Furthermore, since many MCA methods, and especially the ones involving elaborated procedures and advanced mathematical calculations, are not easily comprehensible for non-specialists, ideally, only real MCA experts would be able to take part in De Keyser and Springael's participatory MCA exercise. The final output of a participatory MCA process can be represented by either a single overall option ranking common to all the participants (i.e. a final multi-actor view), or different option rankings, one for each participant or group taking part in the process (i.e.
single-actor/group views). Clearly, the type of output depends on the specific strategies used to handle the preferences of the group decision-making participants when determining objectives, scores and weights. Specifically, if, in determining these parameters, the research team leading the analysis decides to adopt only the exclusion, filtration, sharing and/or aggregation strategies, namely approaches aimed at generating a common value for the element of the multi-criteria framework under examination, the output of the analysis will be represented by a single multi-actor view. This is, for example, the case of the Deliberative Multicriteria Evaluation method described above. This multi-actor view, representing a sort of synthesis of the viewpoints of the different group decision-making participants on the multi-criteria problem at hand (i.e. the possible feasible courses of action to address the problem, the key objectives that must be pursued and their relative importance, and the performances of the options against these objectives), is then supposed to be used by decision-makers as a basis for making decisions and taking action on this matter.
If, however, the research team opts for employing the disaggregation approach when determining objectives, weights and/or scores, different single-actor/group views will be produced, one for each group (for participatory MCA methods where participants take part in the process as representatives of organised groups) or individual (for participatory MCA methods where participants take part in the process as single individuals) involved in the process. The various single-actor/group views, illustrating the specific viewpoints of the different actors or groups taking part in the process, can differ in terms of number and types of objectives and associated criteria; weights ascribed to the various objectives and criteria; performance scores; any two of these three parameters; or even all of them, according to the stage of the process to which the disaggregation strategy has been applied (Figure 4). The charts or performance summary tables showing the different option rankings can then be used by decision-makers as a support for the final decision. The various single-actor/group views can also be combined together into a final multi-actor view in the attempt to provide more prescriptive recommendations and clearer support for decision-makers:
• Sharing: a multi-actor view can be obtained through a process of negotiation, where the various parties, suitably assisted by the research team, explore commonalities and differences between the single-actor/group views, in an effort to reach a consensus on a certain global ranking and a preferred option. This is, for instance, the approach adopted by the Policy-Led Multi-Criteria Analysis, where the different single-actor/group views obtained in the first place from the process differ in terms of performance scores.
• Aggregation: the research team can also produce a global final option ranking as the (arithmetic) mean of the different option rankings produced by the different group decision-making participants. This approach is adopted by the Multi-Actors Multi-Criteria Analysis, where the primary output is represented by multiple single-actor/group views, which differ in terms of objectives and weights.
• Filtration: the research team can also produce a global multi-actor view by simply examining and comparing the various single-actor/group views, without necessarily calculating the mean of the different option rankings.
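The two computations most often involved here, the weighted-sum scoring of options and the aggregation of several option rankings via mean rank position, can be sketched as follows. This is an illustrative Python sketch under stated assumptions: all option names, criteria, scores and weights are hypothetical, and real methods may use other aggregation rules.

```python
import statistics

def weighted_sum(scores, weights):
    """Overall performance of one option: sum of weight * score over the criteria."""
    return sum(weights[c] * scores[c] for c in weights)

def rank_options(score_table, weights):
    """Order options from best to worst overall (weighted-sum) performance."""
    totals = {opt: weighted_sum(s, weights) for opt, s in score_table.items()}
    return sorted(totals, key=totals.get, reverse=True)

def mean_rank_aggregation(rankings):
    """Combine several option rankings into a global one via mean rank position."""
    options = rankings[0]
    avg_pos = {o: statistics.mean(r.index(o) for r in rankings) for o in options}
    return sorted(options, key=avg_pos.get)

# Hypothetical single-group view: scores (0-100) against two criteria
scores = {
    "option A": {"cost": 80, "environment": 40},
    "option B": {"cost": 50, "environment": 90},
}
weights = {"cost": 0.6, "environment": 0.4}
single_view = rank_options(scores, weights)  # option B (66) ranks above option A (64)

# Three hypothetical single-group rankings combined into one multi-actor view
global_view = mean_rank_aggregation([
    ["option B", "option A"],
    ["option A", "option B"],
    ["option B", "option A"],
])
```

Averaging rank positions, like averaging scores or weights, inherits the theoretical weaknesses of the aggregation strategy discussed later in this section: the "global" ranking need not satisfy any individual participant.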

Issues with the inclusion of different perspectives in the analysis
As highlighted in the previous sections, proponents of participatory MCA methods rarely offer thorough explanations and justifications for the particular approach adopted to derive options, objectives, weights and scores from a wide variety of different, and often conflicting, viewpoints. It is evident, however, that the five strategies presented in this article rest on very different ontological or epistemological assumptions. Both the exclusion and filtration strategies attach great value to the discovery of the objectively 'best' solution to address the problem at hand, with the identification of this ideal solution placed in the hands of a few analysts and specialists. These two modi operandi widely reflect a typical analyst-led appraisal exercise undertaken in a technocratic manner, with the research team largely (i.e. filtration) or entirely (i.e. exclusion) responsible for the identification of the key elements of the multi-criteria framework, which all the participants are required to adopt, irrespective of their interests and priorities.
The assumption behind the aggregation approach is that there is no such thing as the 'perfect' solution, but rather different interests should be carefully aggregated and balanced. This approach, which is extensively used in mainstream economics (e.g. CBA) and mathematical decision theory, assumes that social preferences can be obtained by simply aggregating individual preferences (Just et al., 2004), without the need for explicitly discussing possible differences in values.
By comparison, participatory MCA methods adopting the sharing strategy are those that appear, more than others, to be particularly exposed to the ideas of communicative and collaborative planning theories and deliberative and discursive democracy (e.g. Forester, 1999;Healey, 1998, 2003;Innes, 1996;Innes and Booher, 2003). Similarly to the aggregation strategy, the sharing approach also rejects the concept of an objectively identifiable 'right' decision, but relies on communication and dialogue between stakeholders rather than on the aggregation of different interests. With these methods, appraisal and evaluation become an interactive learning process, where all the actors involved can explain their values and concerns and, at the same time, have the opportunity to better appreciate the social identities of the other parties, in the attempt to develop a shared understanding of the problem at hand and identify potential win-win solutions. This approach promises to identify common interests and shared values through the exchange of information and reflections, and to replace animosity with trust by reframing contentious policy controversies.
Finally, the disaggregation approach is grounded on the assumptions that information can be uncertain and subject to different interpretations, and that any attempt to obtain a global perspective on the problem under examination may entail the risk that the views and wishes of certain parties are discriminated against. Therefore, this approach attempts to map the point of view of each individual or group taking part in the process, while keeping these different perspectives strictly separated. The focus of this strategy is on 'opening up' the analysis to account, to the largest extent possible, for neglected perspectives, excluded possibilities and ignored aspects (Stirling, 1998, 2006, 2008).
Being based on different principles and assumptions, these different strategies obviously present some important implications and issues for the multi-actor multi-criteria process in which they are applied. These issues appear to be mainly related to three spheres:
• The practical feasibility of the process.
• The reliability and usefulness of the outcome of the participatory exercise.
• The resources (i.e. time, money and level of expertise) required to run the process.
All these aspects are examined further below.

Practical feasibility of the process
The feasibility of the process is an important aspect to consider when designing a participatory MCA exercise. Whereas the large majority of articles on participatory MCA tend to emphasise the clear and straightforward nature of these methods, multi-actor multi-criteria exercises may not always imply quick and smooth processes. From a practical feasibility perspective, the exclusion, filtration, aggregation and disaggregation strategies constitute rather unequivocal and clear-cut approaches for developing the multi-criteria framework of options, objectives, weights and scores. Indeed, with the exclusion and filtration strategies, the research team of analysts is ultimately responsible for the identification of the key elements of the framework. With the aggregation strategy, by comparison, the analysts are only required to identify (or calculate mathematically) the values that minimise the differences between the positions of the different group decision-making participants. Finally, under the disaggregation approach, the points of view of the different participants are simply included separately in the framework.
However, with the employment of the sharing approach to determine options, objectives, weights and/or scores, the captivating goal of developing a shared multi-criteria framework and ultimately arriving at a mutually convenient solution may require long and complicated negotiations between participants. In particular, the literature on conflict resolution and management (e.g. Ansell and Gash, 2008;More, 2003;Schenk et al., 2016;Susskind et al., 1999) stresses that the creation of a successful consensus-building process requires a number of conditions to be met:
• There should be strong incentives for all the parties to take part in the process.
• All the parties should be able to participate and express their view on the problem at hand.
• All the parties should have equal access to information and other resources and there should not be significant power imbalances.
• The different parties should be willing to cooperate and learn about each other's social identities.
These requirements, however, seem to be quite difficult (if not impossible) to achieve in real decision-making situations on major policy problems, particularly when stakes are high, facts are uncertain and ambiguous, and there are many stakeholders who have different or even totally opposite interests, with little room for compromise. In these circumstances, the hypothesis that clear consensus positions on which to base decisions will eventually emerge may prove excessively optimistic. Practical experiences with negotiation processes with multiple stakeholders have shown that the chances for a group of actors having different agendas to ultimately arrive at the formulation of a shared frame of analysis are extremely limited (Driscoll, 1996;Pasquero, 1991;Turcotte and Pasquero, 2001). These experiences have demonstrated that consensus can be achieved only over general and rather vague principles (e.g. the importance of pursuing a sustainable development path; the necessity to ensure an equitable distribution of resources), while there is little or no agreement over more specific parameters (e.g. how and at which scale to measure sustainable development; what actually constitutes a fair distribution of resources). Hence, in a participatory MCA process where the interests and priorities of decision-making participants and groups fundamentally clash, the use of the sharing strategy for defining objectives, weights and/or scores implies a high possibility of deadlock, as each party tries to impose its own logic over the others and each has a veto power. For example, as demonstrated by Dean (2018, 2021) and Dean et al. (2019), different parties having diverging belief and value systems would probably adopt different objectives and criteria to investigate the same matter. The identification of a common weighting scheme can also easily result in several conflicts between participants.
Chadwick (1978: 276) summarises the situation as follows: weighting 'is a process which is theoretically impossible [. . .]. How might interest groups agree to a weighting which placed their own weight lower than that of others?' Echoing Chadwick's opinions, Manheim et al. (1974: 161) argue that 'only a very naive group' would be willing to agree on a set of weights which ultimately undermines their interests. Clearly, then, there is also no reason to believe that the various participants, with different backgrounds and contrasting agendas, would be able to agree on the performance scores of the different options at hand (Dean, 2018, 2021;Dean et al., 2019). Obviously, the more people and groups are allowed to participate in the multi-actor multi-criteria process, the more difficult it becomes to find an agreement on objectives, criterion weights and performance scores.

Reliability and usefulness of the results
The selection and involvement of group decision-making participants in the process constitutes a significant barrier to the reliability and validity of multi-actor multi-criteria techniques and other participatory methods, especially when applied with the view to aiding the resolution of complex policy controversies affecting society at large. As already explained in the fourth section, given the practical impossibility of involving everyone in the process, a discretionary decision has to be made on which actors and groups to involve. Therefore, instead of reflecting (or at least closely resembling) an idealistic participatory evaluation process, where every person can have a say in decisions that affect them, multi-actor multi-criteria exercises very often involve only a few tens of problem stakeholders. This figure is substantially lower than the number of people typically involved in standard consultation procedures and public hearings commonly adopted as part of many planning and policy-making processes. Such participatory appraisal and evaluation exercises obviously do not satisfy the requirements of statistical representativeness (Gavelin et al., 2007;Kenyon et al., 2001) and, although useful to decision-makers to improve the knowledge of the problem at hand, can hardly be considered a way of deriving consistent conclusions on social preferences and arriving at comprehensive assessments (Dean, 2021). The selection of this small number of participants and groups is also fraught with difficulties and challenges and implies largely arbitrary considerations on a number of interdependent factors, including the nature of the problem at hand, its geographical boundaries, and the dimensions of the problem to prioritise (Kahane et al., 2013;Stirling, 2006). There is a persistent risk of missing stakeholder groups that should instead have been included, as well as the danger of reinforcing existing patterns of social and political disparities (Falconi and Palmer, 2017).
Indeed, in many cases, the choice of which stakeholders to involve may lean (intentionally or unintentionally) towards the most organised, and often most powerful, groups that have consolidated themselves as a public presence (Kahane et al., 2013). Recognising the fact that participatory evaluation methods are inevitably partial, Gregory (2000) considers it imperative to critically reflect on this lack of comprehensiveness and always provide valid arguments for the inclusion or exclusion of certain people and groups in the participatory process. However, such reflections and justifications are rarely offered in the literature on participatory MCA. Therefore, in light of the above issues and in the absence of thorough reflections on these intrinsic dilemmas in the design of the process, a multi-actor multi-criteria exercise may even risk representing a step backwards with regard to democracy and equity principles (Dean, 2018).
Furthermore, whereas according to its proponents MCA represents an effective framework for eliciting people's opinions, as its underlying principles are closely similar to the way humans have always made decisions (e.g. Figueira et al., 2005a), several studies seem to disprove this argument. For instance, according to Miller (1956) and Arrow and Raynaud (1986), the inherent limitations of short-term memory make an individual unable to consider too many factors simultaneously (e.g. objectives, criteria, weights and scores) when taking a decision. Practical applications of participatory MCA methods carried out by Dean (2018, 2021) and Dean et al. (2019) have shown that participants might have difficulties in formulating meaningful objectives or a complete and consistent set of weights to reflect the relative importance of these objectives, or even more simply in understanding the meaning of the basic elements of a multi-criteria framework (e.g. the actual difference between performance scores and criterion weights) and how these are ultimately combined (i.e. aggregation rules). Whereas several visual aid tools and different scoring and weighting techniques have been developed to facilitate and simplify the evaluation process (Dean, 2022), it seems advisable that group decision-making participants should at least take some crash courses on MCA before being involved in a multi-actor multi-criteria exercise, especially those exercises entailing the use of highly formal MCA methods.
The strategies adopted to handle data and information provided by the different group decision-making participants also unavoidably affect the reliability, validity and utility of the results of the multi-actor multi-criteria exercise. With the exclusion and filtration strategies, for example, there is the risk that the multi-criteria framework, developed rather independently and arbitrarily by the research team, may dismiss or misrepresent the viewpoints of the various group decision-making participants.
Similar problems are also implied by the aggregation strategy, where, in practice, the concepts of 'consensus' and/or 'compromising solutions' are confused with the mere calculation of the average of a wide spectrum of values. In this regard, Arrow (1951), in his 'Impossibility Theorem', has demonstrated in formal mathematical terms that, in a plural society, there exists no analytical procedure through which individual preferences can be aggregated in a democratic and consistent manner, irrespective of how much information is available and how much consultation and consideration are involved. To put it another way, a purely mathematical approach is unable either to address the conflicts of interests of different stakeholders or to reconcile the divergent frames of reference that these actors employ. It follows that the average values obtained through aggregation procedures are theoretically weak and extremely likely to lead to a compromise that is uncomfortable and unstable. Indeed, since no party is completely satisfied with the results of the process, conflicts between them are quite likely to re-emerge at a later stage (Dean, 2018). The aggregation approach is also severely exposed to the possibility of cognitive and motivational biases (Montibeller and von Winterfeldt, 2015). The former type of bias may occur when, for example, at the beginning of the process, a party already has a clear idea of the problem under examination, so that, throughout the multi-actor multi-criteria exercise, it unconsciously tends to select certain objectives supporting that particular position and assign to them very high weights, while discarding all the data and information potentially disproving that position (Macharis and Nijkamp, 2011).
Motivational biases, by comparison, involve people intentionally and strategically setting scores and weights to increase the probability that a particular outcome will occur, or to deliberately put other parties at a disadvantage (Sager, 2003; te Boveldt et al., 2021; Yearley, 2001). For example, some respondents may attach very low weights to criteria that they believe their opponents give high priority to, and/or ascribe very low scores to others' likely favoured options, in the hope of dragging down the aggregate statistical importance of those criteria and the overall performances of those options. Unconscious biases and strategic misrepresentations, whose boundaries often appear to be quite fuzzy, may also be favoured by the largely arbitrary nature of MCA, for which, given also the large variety of methods and techniques belonging to this family, there are no universally agreed rules and principles guiding the selection of criteria, weights and scores (Dean, 2020, 2022). The sharing approach, based on reflection, dialogue, mutual interaction, cooperation and group learning, can potentially prevent or at least reduce cognitive and motivational biases when eliciting options, objectives, scores and/or weights. However, the negotiation processes aimed at developing the shared multi-criteria framework can sometimes become unfair, with more skilled parties dominating the discussions and imposing their logic on other actors and groups unwilling or unable to speak in public (Petts, 1999; Saarikoski, 2000). In other cases, consensus may be implicitly or explicitly forced by facilitators and mediators attempting to arrive at a final answer within a reasonable time frame, so as to respect a given schedule (D'Este, 2009). Hence, these and other similar events are likely to cause doubts and dissatisfaction among group decision-making participants, who may feel that their points of view have not been adequately captured by the analysts.
According to several authors (e.g. Clark et al., 1998; Macharis and Nijkamp, 2011; Stirling, 2006, 2008; Stirling and Mayer, 2001; te Boveldt et al., 2021), the disaggregation approach might lead to a more open, transparent and democratic evaluation process. Indeed, this strategy explicitly tries to capture the belief and value systems of the various group decision-making participants taking part in the process. It ultimately provides decision-makers with a wide array of information concerning the way the different parties frame the problem, highlighting both differences and commonalities in the positions of the different actors (e.g. the most relevant objectives for each party; the objectives that seem to be important for all the parties; the objectives that imply the highest form of disagreement; the preferred option of each party; the options that are assigned relatively high performance scores by all the group decision-making participants; and the most controversial options). There is, however, the risk that the disaggregation approach may do little more than uncover this fundamental clash of frames. Indeed, while the need for deciding and acting, once the relevant information has been collected, remains firm, carefully examining all these multiple and diverse frames of thought in reaching a conclusion can be very problematic (Owens et al., 2004; Van Eeten, 1999; Yearley, 2001). Figure 5 illustrates this complexity in the case of a multi-actor multi-criteria exercise aimed at assessing a set of M alternative options (a_1, a_2, a_3, ..., a_M) and involving G group decision-making participants who are given the opportunity to identify their own list of objectives and associated criteria C, assign their own importance weights W to these objectives and criteria, and ascribe their own scores to evaluate the performances of the options under examination against the identified objectives and criteria.
The generic element of the performance summary tables, x_j(a_i)^k, represents the evaluation of the ith alternative by means of the jth criterion according to the perspective of the kth stakeholder group involved in the participatory MCA process. Since stakeholder groups typically present different interests and priorities, such a participatory MCA process is likely to lead to as many lists of criteria, weighting schemes and sets of scores as the number of groups involved. So far, no author has proposed clear indications on how to process these constellations of opinions systematically and comprehensively, or on how to derive clear outcomes from multiple and diverging multi-criteria appraisal summary tables. Hence, eventually, in the attempt to adopt a resolution on the problem under examination, analysts and decision-makers would find themselves needing to reconcile and synthesise these different arrays of objectives and criteria, weighting schemes and/or scores into a more manageable and understandable whole (McAllister, 1988; Stirling, 2006). As explained in the 'Determination of the final rankings' section, in principle, the various single-actor/group views can thus be combined into a global multi-actor view, most usefully oriented towards the aim of providing decision justification. However, the construction of a global multi-actor view may expose the process to the same issues discussed above for the other strategies (i.e. inconsistencies; loss of critical information; misrepresentation of the viewpoints of participants; cognitive and motivational biases), which the disaggregation approach was intended to avoid in the first place.
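The disaggregated structure just described can be sketched in code under simplifying assumptions: each group k holds its own weight vector and its own score table x_j(a_i)^k, and the weighted-sum model stands in for whichever MCA technique is actually used. The group names, weights and scores below are invented for illustration; the point is that the output is one ranking per group, not a single aggregate answer.

```python
# Sketch of the per-group structure of a disaggregated participatory MCA:
# each group has its own weights and scores, yielding one ranking per group.
# All figures are hypothetical; a simple weighted sum stands in for the
# particular MCA technique employed.

options = ["a1", "a2", "a3"]
groups = {
    "G1": {"weights": [0.7, 0.3],  # G1 prioritises criterion 1
           "scores": {"a1": [0.9, 0.2], "a2": [0.4, 0.8], "a3": [0.6, 0.6]}},
    "G2": {"weights": [0.2, 0.8],  # G2 prioritises criterion 2
           "scores": {"a1": [0.8, 0.1], "a2": [0.3, 0.9], "a3": [0.5, 0.7]}},
}

def ranking(group):
    """Rank options by weighted-sum score from one group's perspective."""
    total = {a: sum(w * x for w, x in zip(group["weights"], group["scores"][a]))
             for a in options}
    return sorted(options, key=total.get, reverse=True)

for name, group in groups.items():
    print(name, ranking(group))  # one ranking per stakeholder group
```

Here G1 and G2 produce different orderings, and a3 is ranked second by both: exactly the kind of commonality and disagreement the disaggregation approach is meant to surface, and exactly the constellation of results that still awaits a systematic synthesis procedure.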

Resources required to run the process
Only very few of the articles examined in this study offer precise estimates of the costs and time required to run the proposed participatory MCA methods. It is evident, however, that the various strategies employed to build the multi-criteria framework are extremely likely to imply major differences in the resources required to manage such participatory evaluation exercises. Among the various approaches, sharing is obviously the most demanding strategy. Indeed, developing a shared and commonly agreed multi-criteria framework represents an ambitious aim, which generally necessitates a very long time and highly skilled and experienced facilitators. The sharing approach also expressly requires the various parties to be in the same room. This requirement for a lot of face-to-face time can be expensive and can pose serious logistical problems with scheduling meetings (McAllister, 1988). Hence, besides the identification of areas of common ground between group decision-making participants, this approach also has to face the (almost equally problematic) challenge of finding a mutually convenient time for the actors to meet. Particularly when the evaluation process is conducted over a prolonged period of time, the sharing strategy may thus cause non-participation and dropout owing to participants' lack of free time (these issues have been documented, for example, in Kowalski et al., 2009; Renn et al., 1993; Ward et al., 2016b).
By comparison, the exclusion, filtration, aggregation and disaggregation strategies generally necessitate fewer resources than sharing, in terms of time, money and demand for facilitator expertise. Indeed, the exclusion strategy does not require the involvement of group decision-making participants at all. The aggregation, disaggregation and filtration approaches, while requiring critical input from group decision-making participants, do not necessarily involve group discussions and negotiations, so that each individual and/or group involved in the exercise can work simply with the research team, entirely independently of the other parties. Ideally, the process should be iterative, with actors and groups free to examine the implications of their choices and possibly modify them several times, before ultimately communicating their final views to the research team (e.g. Stirling and Mayer, 2001). However, in many cases, for practical reasons, the process tends to be much quicker, with the research team eliciting the opinions and preferences of participants through simple email and phone interviews and electronic surveys (e.g. Macharis et al., 2010).
In the attempt to further speed up the process, reduce its costs and encourage wider participation, some authors (e.g. Musso et al., 2007) have proposed the use of specific software and online questionnaire tools for criteria, weight and score elicitation. An online and predominantly asynchronous process, however, carries the risk of not providing participants with enough time and support to properly assimilate the key principles of the MCA exercise and the nature of the problem under investigation. Moreover, as a result of limited discussions and interactions between the research team and group decision-making participants, such a process might also exclude critical information that is essential for arriving at a rich representation of the policy problem at hand. According to McAllister (1988: 265), collecting only lists of desired objectives and numerical information regarding performance scores and criterion weights, without asking participants to justify and qualify their responses, 'would be like communicating in monosyllable words'.

Conclusion
Over the past few decades, several attempts have been made to enhance the participative character of conventional appraisal methods, especially MCA. Participatory MCA has come to be seen by many as a plausible and valuable methodology to tackle a great variety of complex and controversial policy problems. Despite the growing popularity of multi-actor multi-criteria methods, several critical aspects of these participatory evaluation exercises, above all the way in which different and often contrasting viewpoints should be processed and included in the multi-criteria framework, seem to have received little attention. In this article, a conceptual framework and classification scheme has been proposed in the attempt to fill this knowledge gap and pave the way for further studies in this area. The conceptual framework, developed on the basis of a comprehensive examination of the literature, envisages five different basic approaches for identifying options, objectives, weights and scores while dealing with different points of view (i.e. exclusion, filtration, sharing, aggregation and disaggregation). This article has argued that multi-actor multi-criteria exercises do not necessarily and automatically lead to more comprehensive, more democratic and more transparent decisions, and that any approach to participatory MCA presents some advantages, disadvantages and issues. The latter include the question of representation and the requirements of statistical representativeness, the potential loss of critical information when developing the multi-criteria framework, the possible misrepresentation of the viewpoints of participants, and cognitive and motivational biases.
More empirical research and more comprehensive and objective analyses are needed to test which approaches to participatory MCA might work best under specific conditions and why, and how a different framing of the process might affect its feasibility as well as the reliability and utility of the final results. However, what clearly emerges from this discussion, in line with what is also argued by Merino-Saum (2020), is that there are some key overarching factors that should orientate the selection of one specific approach to participatory MCA over the others (see also Dean, 2021). These factors include:

• Main features of the policy problem at hand (e.g. urgency of the problem, potential implications, number of people affected, and level of conflict).
• Aim of the analysis and type of output required (e.g. comprehensive assessments crossing the threshold of statistical significance versus small participatory evaluation processes aimed only at complementing other analyses; 'closing down' the analysis with the provision of unitary and prescriptive advice versus 'opening up' the analysis by highlighting similarities and differences in the positions of the different group decision-making participants).
• Main assumptions and ontological and epistemological positions of the research team and the final decision-makers (e.g. the discovery of the 'best' solution to address the problem at hand versus the construction of satisfactory solutions; public versus expert knowledge; the mathematical aggregation of individuals' preferences versus the need for deliberative democracy).
• Characteristics of group decision-making participants (e.g. level of familiarity with MCA and negotiation processes; knowledge of and attitude towards the problem at hand; willingness to participate in the process and time availability; power dynamics and mutual relationships).
• Resources available (e.g. time and budgetary constraints; availability of software and experienced facilitators).
These factors unavoidably entail some incompatibilities and difficult trade-offs. For example, as discussed in the article, the aspiration to create a mutual learning and deliberative problem-solving process might not be fulfilled if uncooperative attitudes and major power imbalances among stakeholders exist. Similarly, the aim of, say, carrying out a comprehensive participatory evaluation exercise might be hampered by strict time and budgetary constraints. All of the above calls for careful reflection from researchers and practitioners when designing the structure of a multi-actor multi-criteria exercise.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.

Supplemental material
Supplemental material for this article is available online.