Distinguishing between legitimate and illegitimate roles for values in transdisciplinary research

In this paper, we argue that the new demarcation problem does not need to be framed as the problem of defining a set of necessary and jointly sufficient criteria for distinguishing between acceptable and unacceptable roles that non-epistemic values can play in science. We introduce an alternative way of framing the problem and defend an open-ended list of criteria that can be used in demarcation. Applying such criteria requires context-specific work that clarifies which principles should be used, and possibly leads to the identification of new principles – which can then be added to the open-ended list. We illustrate our approach by examining a context where distinguishing between acceptable and unacceptable value influences in science is both needed and tricky: transdisciplinary research.


Introduction
In their key piece in this special issue, Holman and Wilholt (forthcoming) define "the new demarcation problem": how do we demarcate between acceptable and unacceptable value influences in science, when we agree that non-epistemic values play an important role in all stages of scientific research? They argue that none of the existing strategies (e.g., Anderson, 2004; Douglas, 2009; Longino, 2002) can offer necessary and sufficient criteria for acceptable value influences in science. Yet there is a need to distinguish between legitimate and illegitimate roles for values in science, as non-epistemic values can undermine the epistemic integrity of scientific research.
In this paper, we argue that the new demarcation problem does not need to be framed as the problem of defining a set of necessary and jointly sufficient criteria for distinguishing between acceptable and unacceptable roles that non-epistemic values can play in science. We introduce an alternative way of framing the new demarcation problem. Our strategy involves an open-ended list of criteria which can often be used in demarcation even though none of the criteria is a necessary condition of legitimate non-epistemic value influence in science. Applying this strategy requires context-specific work that clarifies which criteria should be used and how they are to be interpreted, and possibly leads to the identification of new criteria – which can then be added to the open-ended list.
In section 2, we revisit the old demarcation problem in order to better understand why many philosophers have given up the attempt to define a set of necessary and jointly sufficient conditions for science. The debate over the old demarcation problem provides us with a clue as to how the new demarcation problem can be framed. In section 3, we discuss principles that should be included in the open-ended list of demarcation criteria. In section 4, we illustrate our approach by examining a context where distinguishing between acceptable and unacceptable value influences in science is both needed and tricky: transdisciplinary research.

We do not need all-purpose necessary and sufficient criteria for demarcation
To provide a background for our strategy, let us turn to the old problem of demarcation – the one where the aim was and is to demarcate between science and pseudoscience. This used to be seen as a central task in philosophy of science. For decades, however, it has been treated largely as a problem belonging to the history of the field. It is only relatively recently that it has gained some renewed attention (e.g., Resnik, 2000; Mahner, 2007; Pigliucci & Boudry, 2013; Hansson, 2009, 2013). These recent attempts to address the old problem of demarcation differ significantly from the earlier, more familiar ones (e.g., Lakatos, 1970; Popper, 1934). In this section, we explain how the old problem of demarcation has been reframed recently and what the emerging debate over the new demarcation problem can learn from this.
The reason for the diminished interest in the old problem of demarcation, as well as the differences between the recent takes compared to the earlier ones, can be traced to Laudan's (1983) article "The demise of the demarcation problem". He argued that so far, all attempts to come up with criteria that would reliably demarcate between science and pseudoscience had failed, and understandably so. Science simply isn't so uniform as to make the existence of such criteria at all plausible. Many philosophers of science found the argument convincing, leading to a decline in interest in the old problem of demarcation.
The force of Laudan's argument, however, rested on the rigorous requirements he set for a satisfactory solution to the old problem of demarcation. As many critics (e.g., Mahner, 2013;Pigliucci, 2013;Thagard, 1988) have noted, they were too stringent.
Laudan demanded that a satisfactory solution would have to (1) "identify the epistemic or methodological features which mark off scientific beliefs from unscientific ones", (2) "be an adequate explication of our ordinary ways of partitioning science from non-science", and (3) "specify a set of individually necessary and jointly sufficient conditions for deciding whether an activity or set of statements is scientific or unscientific" (Laudan, 1983, p. 118). He then quite reasonably concluded that the "evident epistemic heterogeneity of the activities and beliefs customarily regarded as scientific should alert us to the probable futility of seeking an epistemic version of a demarcation criterion" (Laudan, 1983, p. 124). There is no criterion or demarcation strategy that would work at all times and in all academic fields.
But if we question some of these requirements, the problem of distinguishing between science and pseudoscience appears in a new light. The critics claim that to reliably demarcate between science and pseudoscience, we do not need such an all-purpose solution. What is of interest for us here are two general approaches that can be used to address the problem of demarcation. Both of them loosen the requirements Laudan suggested, but in different ways. These two approaches can provide us with fruitful ideas when attempting to address the new problem of demarcation.
The first approach starts with the observation that science is a "family resemblance" concept (Dupré, 1993; John, 2021; Pigliucci, 2013): it has copious identifying features, but no single feature needs to be instantiated for the concept to apply, and none of the features is sufficient by itself. This is in line with Laudan's observation that sciences are epistemically heterogeneous, but does not need to lead to his pessimistic conclusion. One can give up Laudan's third criterion, according to which a satisfactory solution to the problem of demarcation should specify a set of individually necessary and jointly sufficient conditions for deciding whether something is scientific or not. This move has led several philosophers of science to suggest open-ended lists of the typical features of science. Satisfying just one feature in an open-ended list would not yet be sufficient for an instance to qualify as science, but satisfying many of them would lead to the conclusion that the instance qualifies as science. Philosophers do not claim that any list would or even could be comprehensive, but they argue that with such lists we can quite reliably demarcate between science and pseudoscience (e.g., Thagard, 1988; Pigliucci, 2013; Mahner, 2013; see also Resnik, 2000). On such lists we find many familiar criteria, such as Popper's observation that scientists, unlike pseudoscientists, typically attempt to refute their hypotheses (Popper, 1934). But we also find questions such as "Is there an extensive mutual exchange of information, or is there just an authority figure passing on his doctrines to his followers?" (Mahner, 2013, p. 38).

The second approach starts with the observation that a merely descriptive definition of pseudoscience can suffice to clarify how it differs from science. In line with this observation, Sven Ove Hansson (2009, 2013) defines pseudoscience in the following way: a statement is pseudoscientific if and only if it satisfies the following three criteria:

1. It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).
2. It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).
3. It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of deviant doctrine) (Hansson, 2013, pp. 70-71).
Hansson's definition is descriptive because it does not specify in what ways pseudoscience is unreliable. Many pseudoscientific statements are likely to be (and we know them to be) unreliable in similar ways: for instance, pseudoscientists do not attempt to test and refute their hypotheses. However, it is possible that some of them are unreliable in unique and surprising ways. Therefore, a general definition of pseudoscience can only be descriptive. The component with normative force needs to be identified separately in each context; in each case, we must specify precisely what it is that makes pseudoscientific statements unreliable. On a closer analysis, if we follow Hansson's definition, pseudoscience may turn out to be a "cluster concept": there are many features which are sufficient by themselves to identify an instance of pseudoscience as unreliable, but no necessary criterion that all unreliable instances have in common.
By proposing a descriptive definition of pseudoscience, Hansson gives up Laudan's first criterion, according to which a satisfactory solution to the problem of demarcation must identify the epistemic or methodological features that discriminate between science and pseudoscience. As the epistemic or methodological failures of pseudoscience can come in a variety of forms, it is futile to attempt to capture them in a general normative definition of pseudoscience.
In sum, Laudan's way of framing the old demarcation problem has been challenged in two ways. There are good reasons to reject both Laudan's first and third requirement for an adequate solution to the old problem of demarcation. This leaves us with the second requirement. The solution should "be an adequate explication of our ordinary ways of partitioning science from non-science" (Laudan, 1983, p. 118). Such "adequate explications" are likely to be different in different contexts.
Like the critics of Laudan's view, we also believe that these two approaches – one, compiling an open-ended list of the typical features of science, and two, clarifying case by case why pseudoscience is unreliable – give us efficient strategies for distinguishing between science and pseudoscience. There are many typical features that can be compellingly cited when we want to argue that something is science. For instance, in science hypotheses are typically tested rigorously (Popper, 1934), and regressive research programmes are eventually abandoned (Lakatos, 1970). And when the task is to demonstrate that something is pseudoscience, a detailed study of the case should clarify why it suffers from such a severe lack of reliability that it cannot be trusted.
Can we adopt a similar approach when attempting to distinguish between acceptable and unacceptable roles non-epistemic values play in science? Can we use something like either of the approaches described above – or perhaps a combination of the two – when addressing the new problem of demarcation? We believe the answer to be affirmative. The old and the new demarcation problems differ in many ways, but there are enough similarities for the comparison to be fruitful. To tackle the new demarcation problem, we can combine the strategy of compiling an open-ended list of criteria with the case-by-case strategy of identifying problematic instances of value influence.
Thus, we suggest that the new problem of demarcation should be framed as the problem of (1) preparing an open-ended list of principles for distinguishing between legitimate and illegitimate roles for non-epistemic values in science, and (2) developing context-sensitive ways of solving unclear cases. Often it is relatively easy to distinguish between legitimate and illegitimate roles by using well-known principles. In some cases it can be tricky, and we leave open the possibility that sometimes it can even require the identification of new principles – principles which can then be added to the open-ended list.
We will proceed by first examining existing strategies for demarcating between acceptable and unacceptable roles that non-epistemic values can play in science. Then we will focus on certain contexts where the task of demarcating between legitimate and illegitimate value influences is not easy: transdisciplinary collaborations across the boundaries of science, where extra-academic research partners are meant to bring their knowledge, interests, and values into the research process. We will argue that while in such contexts demarcating between legitimate and illegitimate value influences is not straightforward, it is possible to develop practices that help in the identification and application of the principles that should be used in the demarcation.

An open-ended strategy for the legitimate/illegitimate distinction
In the previous section, we introduced a modest and feasible alternative to the demarcation strategy that aims to define a set of necessary and jointly sufficient conditions for legitimate roles that non-epistemic values can play in scientific research. Drawing on the lessons we have learned from the literature on the old demarcation problem, we suggest a dual strategy. The first part of the strategy is to compile an open-ended list of principles that can be used to distinguish legitimate from illegitimate roles for values in scientific inquiry. Such a list does not need to provide a solution to every case; it is enough for it to offer solutions for most cases in which we wish to draw the line between acceptable and unacceptable value influences. The second part of the strategy, which is needed only when the first part is not helpful, tells us what we should do when we run into a hard case of telling the difference between appropriate and inappropriate roles that values can play in scientific inquiry. It aims to provide a deeper understanding of the context, in order to find out whether some of the principles already on the open-ended list can be usefully applied, and if not, to articulate and defend a new principle that can be added to the list. In our open-ended approach, the two parts of the strategy work together: the first provides an open-ended list of demarcation criteria, and the second tells us how the list can be used in a specific context, and possibly revised by adding a new criterion.
In this section, we clarify the first part of the strategy by providing an open-ended list of principles. Our focus will be on the proper roles of non-epistemic values in science rather than on the question of which non-epistemic values should play these roles and how scientists are to identify such values (for discussion of the latter question, see Intemann, 2015; Kitcher, 2011; Kourany, 2010; Rolin, 2020). We argue that an open-ended list should include at least three principles that are often used to identify legitimate and illegitimate roles for values in science. The principles are based on three influential arguments against the ideal of value-free science: an argument from the plurality of epistemic values; an argument from inductive risk; and an argument from value-laden background assumptions. We analyze how these arguments challenge the value-free ideal, the view that non-epistemic values are not allowed to play any role in the practices whereby scientific knowledge claims are evaluated and justified, and ultimately accepted and communicated to others. We aim to understand the positive outcomes philosophers wish to promote and the harms they want to avoid by distinguishing between legitimate and illegitimate roles for non-epistemic values in science. We are open to the possibility that both the desirable outcomes and the harms to be eliminated or minimized have an epistemic as well as a moral and social dimension. We also discuss the question of what the cost of failing to distinguish between legitimate and illegitimate roles might be in each case.
Before we start the analysis, let us clarify how we understand the distinction between epistemic and non-epistemic values. As Steel (2010) defines epistemic values, they are values that promote the attainment of truth, either intrinsically or extrinsically. An epistemic value is intrinsic when manifesting that value constitutes an attainment of or is necessary for truth, and it is extrinsic when it promotes the attainment of truth without itself being an indicator or a requirement of truth (Steel, 2010, p. 15). This definition of epistemic values is consistent with the view that there is a borderland area between epistemic and non-epistemic values (Rooney, 2017). The borderland area arises because it is not always easy to determine whether some values are extrinsic epistemic values, the kind of values that lead scientists towards truth under current circumstances. The difficulty of deciding whether a value is an extrinsic epistemic value follows from the difficulty of foreseeing the consequences of applying that value in scientific practices. Even if the distinction between epistemic and non-epistemic values is not sharp, we can nevertheless adopt the distinction for a specific purpose. We can draw the line between epistemic and non-epistemic values in a somewhat similar way as most societies draw the line between children and adults (e.g., for the purpose of voting in elections), even when they agree that there is a borderland area between children and adults (Rooney, 2017, p. 32). In this paper, the purpose of making the distinction between epistemic and non-epistemic values is to gain a better understanding of the roles that moral, social, and political values can play in scientific inquiry, as much of the controversy over values in science focuses on these types of values.

Principle I: Non-epistemic values guiding the use of epistemic values
The first principle says that it is legitimate for non-epistemic values to guide the use of epistemic values, while it is illegitimate for them to override epistemic values when scientists evaluate and justify hypotheses and theories. The first part of the principle says that non-epistemic values can legitimately guide scientists' use of epistemic values in two ways, either by providing them with a reason to give more weight to some epistemic values than to some others, or by leading them to interpret an epistemic value in one way rather than another. The second part of the principle says that scientific theories or hypotheses should not be evaluated and justified on the basis of non-epistemic values alone. While epistemic and non-epistemic values do not necessarily conflict, in some situations they may do so, and if non-epistemic values replaced epistemic ones, this would corrupt scientific research (Steel, 2017).
The first principle is supported by the argument from the plurality of epistemic values. The argument starts with the observation that epistemic values comprise a diverse set of criteria and desiderata for scientific hypotheses and theories. While scientific theories are expected to include true statements, such statements should also be significant, form a coherent whole, and offer explanation and understanding by answering a variety of why or how questions. Some criteria are thought to be necessary conditions of a theory being epistemically good (e.g., empirical adequacy, consistency), while some others are virtues a theory can realize to various degrees (e.g., accuracy, broad scope, explanatory power, fruitfulness, and simplicity). As Kuhn (1977, pp. 320-339) has argued, not all theoretical virtues can be maximized simultaneously. For example, scientists may have to strike a balance between accuracy and broad scope, between emphasizing the depth of empirical evidence or its breadth. Even necessary criteria such as empirical adequacy require further interpretation before they can be used to evaluate and justify theories. Scientists must agree on the question of which body of evidence is relevant when they assess the empirical adequacy of a theory. Similarly, they must agree on the question of which body of knowledge is relevant for judging whether a theory is consistent with that body of knowledge in addition to being internally consistent. The plurality of epistemic values has given many philosophers a reason to believe that non-epistemic values can play legitimate roles in decisions concerning which epistemic values are emphasized and how they are interpreted when scientists evaluate and justify hypotheses and theories (e.g., Elliott, 2013; Elliott & McKaughan, 2014; Kitcher, 1993; Kuhn, 1977, pp. 320-339; Longino, 1995, 2002; Solomon, 2001). For this reason, they argue, the value-free ideal is not feasible.

I. Koskinen, K. Rolin, Studies in History and Philosophy of Science 91 (2022) 191-198

We argue that the first principle serves three purposes. The first purpose is to protect the epistemic integrity of scientific research by suggesting that the acceptability of scientific theories or hypotheses should always be evaluated and justified on the basis of epistemic values. The principle does not imply that non-epistemic values cannot play any role in practices whereby scientists evaluate and justify hypotheses and theories. It says merely that non-epistemic values should not take over from epistemic values the work of evaluating the acceptability of hypotheses and theories. The second purpose of the distinction between legitimate and illegitimate roles is to ensure that scientific research can be expected to serve many different but nevertheless equally justified moral and social goals. To serve this purpose, non-epistemic values are allowed to guide the use of epistemic values. For example, in many areas of social scientific research and humanities scholarship, the object of inquiry is a morally and politically value-laden phenomenon independently of scientists' judgment (e.g., discrimination, poverty, well-being). In such cases, scientists can hardly avoid making moral value judgments when they decide how to conceptualize the object of inquiry. Consequently, moral and social values are necessary to guide the interpretation of epistemic values such as empirical adequacy. The third purpose of the distinction between legitimate and illegitimate roles is to ensure that there is a distribution of research efforts in scientific communities among those theories that have some empirical successes. Such a distribution is arguably epistemically beneficial for the long-term development of science (Kitcher, 1993; Solomon, 2001).
When non-epistemic values guide the use of epistemic values without replacing them, they are thought to be epistemically harmless or even beneficial, especially when they contribute to a distribution of research efforts in scientific communities.
In sum, the distinction between legitimate and illegitimate roles for non-epistemic values is introduced to protect the epistemic integrity of scientific research, while ensuring at the same time that scientific research serves valued moral and social goals and that there is epistemically beneficial theoretical diversity in scientific communities. The first principle suggests that science should promote valued moral and social aims by advancing scientific knowledge, not by other means (Steel, 2017, p. 50). The cost of failing to make the distinction would be that the epistemic integrity of scientific research is undermined, its social relevance is compromised, and resources are wasted on research projects that do not benefit society as a whole or some part of it.

Principle II: Non-epistemic values playing an indirect role in scientific reasoning
The second principle says that it is legitimate for non-epistemic values to play an indirect role in scientific reasoning, while it is illegitimate for them to play a direct one. Non-epistemic values play an indirect role when they act as reasons to accept a certain level of uncertainty in scientific reasoning. They would play a direct role if they acted as reasons themselves to accept a hypothesis (Douglas, 2009, p. 96).
The second principle is supported by the argument from inductive risk. The argument starts with the observation that in most cases of empirical research, some degree of uncertainty is involved in a decision to accept or reject a hypothesis. This is partly because interesting hypotheses typically extend our knowledge beyond what evidence shows. Given that acceptance involves uncertainty, a scientist must decide whether the evidence at hand is sufficiently strong to warrant acceptance. Several philosophers have argued that this decision depends on the risks involved in accepting a false hypothesis or rejecting a true one (Biddle, 2013; Brown, 2013; Douglas, 2000, 2009; Steel, 2010; Wilholt, 2009). In many cases, identifying and assessing the severity of the risks requires a moral value judgment. Scientists are expected to ask who will be impacted by a mistake in their research and whether the consequences will be so harmful that they need to be avoided. For this reason, it is important to hold on to the principle that non-epistemic values, especially moral ones, have a legitimate role to play in the evaluation of risks involved in accepting or rejecting hypotheses and in communicating these judgments to others. The value-free ideal is hardly feasible when scientists are expected to make a moral assessment of inductive risks.
The argument from inductive risk explains why it is legitimate for non-epistemic values to play an indirect role in scientific reasoning. This is the first part of the second principle. We still need to understand why it is illegitimate for non-epistemic values to play a direct role. This is the second part of the second principle. As Douglas argues, a direct role is illegitimate because if non-epistemic values played such a role, they would basically play the same role as evidence, thereby threatening the epistemic integrity of scientific research (2009, p. 156). Thus, the direct role is forbidden for the same reason it is forbidden to fabricate or falsify data.
We argue that the second principle serves two purposes. As in the case of the first principle, one purpose is to protect the epistemic integrity of scientific research. While the first principle affirms the central role of epistemic values in science, the second principle emphasizes the importance of protecting the integrity of empirical evidence. Another purpose of the second principle is to eliminate or minimize harms that are caused by scientists making mistakes, either the mistake of accepting a false hypothesis or that of rejecting a true one. The second principle does so by proposing that moral and social values can legitimately inform scientists' decisions concerning an acceptable level of uncertainty. When scientists cannot avoid making moral value judgments, making morally justifiable value judgments is the responsible thing to do (Rolin, 2016, p. 136). The cost of failing to anticipate harms and take action to avoid them can be severe from a moral point of view.

Principle III: Non-epistemic values informing background assumptions in evidential reasoning
The third principle says that it is legitimate for non-epistemic values to play a role in determining which background assumptions are used in evidential reasoning, while it is illegitimate for them to play the same role as evidence does in evidential reasoning. As the second part of the third principle is the same as the second part of the second principle, the justification for it is also the same. Non-epistemic values should not play the same role as evidence because in that role they would undermine the epistemic integrity of scientific research (Douglas, 2009, p. 156).
The first part of the third principle derives its justification from the argument from value-laden background assumptions. The argument starts with the observation that evidential reasoning takes place in a context of background assumptions that are necessary to establish the relevance of empirical evidence to a hypothesis or a theory (Longino, 1990, pp. 43-44; see also Longino, 2002, p. 127). When evidence does not point towards a hypothesis or a theory, scientists need to appeal to a background assumption (or a background theory) to explain why a particular body of evidence is relevant to the hypothesis or the theory. Moreover, the established body of scientific knowledge may provide scientists with several potentially plausible background assumptions or theories, and hence scientists can make choices concerning which background assumptions or theories they rely on in their own evidential reasoning. While such choices may be morally and socially value-laden, value-laden scientific research should not be judged to be necessarily "bad" as science, because it is difficult to see how evidential reasoning could proceed without background assumptions or theories (Longino, 1990, pp. 128, 216). For this reason, it is reasonable to accept the first part of the third principle, which states that non-epistemic values can legitimately play a role in determining which background assumptions or theories are used in evidential reasoning. This view also makes it impossible to hold on to the value-free ideal.
We argue that the third principle serves two purposes. One purpose is to protect the epistemic integrity of scientific research, especially the integrity of empirical evidence. Another purpose is to advance scientific objectivity. As Longino explains, whether value-laden choices about background assumptions or theories are acceptable should be decided by a scientific community where the background assumptions and theories are critically evaluated and either defended, modified, or rejected in response to criticism (Longino, 1990, pp. 73-74). The objectivity of scientific knowledge is increased by critically debating value-laden choices in scientific communities, not by denying that scientists make such choices. Along the same lines, Anderson argues that the value-laden nature of choices concerning background assumptions or theories is not a problem in and by itself; it becomes a problem when it gives rise to dogmatism (2004, p. 3). For this reason, we need to ensure that such choices are justified. As Anderson states, it is illegitimate for non-epistemic values to drive inquiry to a predetermined conclusion (2004, p. 1). The cost of failing to make the distinction between legitimate and illegitimate roles is that the epistemic integrity of scientific research is undermined, dogmatism prevails, and the progress of science towards greater objectivity is impeded.

Longino (1990, 2002) introduces yet another interesting dimension to the debate on the distinction between legitimate and illegitimate roles for non-epistemic values in science. When individual scientists fail to draw the line in a non-controversial way, the judgment will have to be made collectively in a scientific community. The collective judgment should not be arbitrary. Instead, it should be an outcome of a discussion complying with the norms of epistemically well-functioning communities. According to Longino, such norms require that scientific communities maintain public arenas for critical exchanges, ensure that there is uptake of criticism and shared standards that make appropriate criticism and responses to criticism possible, and respect equality of intellectual authority (2002, pp. 129-134).
To summarize, we have analyzed three principles that should be included in the open-ended list of demarcation criteria. The three principles aim to promote the epistemic integrity of scientific research and the objectivity of scientific knowledge, while at the same time guaranteeing that scientific research serves valued moral and social ends and that risk assessment is based on sound moral judgments. It is important to keep in mind that the three principles do not comprise a set of necessary and sufficient conditions for non-epistemic values playing either a legitimate or an illegitimate role in science. They are better thought of as a toolkit that helps one draw the line between legitimate and illegitimate roles in some cases.¹ Just one tool may be sufficient to tell the difference between a legitimate and an illegitimate instance of non-epistemic value influence in science. If any one of the principles on the list does not suffice to distinguish between legitimate and illegitimate roles for non-epistemic values in science, then the toolkit provides yet another resource. As Longino (1990, 2002) suggests, questions about the proper role of values in science can be resolved in epistemically well-functioning scientific communities.
In the next section, we turn to the question of how exactly the hard cases of distinguishing between legitimate and illegitimate roles should be solved in scientific communities. We argue that the second part of the open-ended strategy is needed to tell us how the above-mentioned list can be applied and possibly extended.

Transdisciplinary research: from the new demarcation problem to best practices
We have suggested, in section 2, that demarcation between legitimate and illegitimate value influences in science can be done by (1) preparing an open-ended list of principles for distinguishing between legitimate and illegitimate roles for non-epistemic values in science, and (2) developing context-sensitive ways to solve unclear cases. The latter may lead to the development of some new criteria that are then to be added to the open-ended list.
In section 3, we have described what the list currently looks like. Next, let us turn to the second, "grounded" part of our open-ended strategy, the development of context-sensitive practices that help determine whether the well-known principles suffice when attempting to demarcate between legitimate and illegitimate value influences, or whether some new, context-specific principles might be needed. To do this, we will now examine a context where demarcating between acceptable and unacceptable value influences is particularly difficult: transdisciplinary collaborations across the boundaries of science.
The term "transdisciplinarity" has many partly overlapping meanings, and several other contemporary approaches, such as participatory action research or co-research, have aims that resemble the ones in which we are interested here. Let us therefore roughly characterize what we mean by transdisciplinarity.
Pohl (2011; see also Carew & Wickson, 2010) has identified four central features of transdisciplinarity: the search for a unity of knowledge, a focus on socially relevant issues, transcending and integrating disciplinary paradigms, and the inclusion of extra-academic partners in the research process. In other words, transdisciplinarity is solution-oriented, typically project-based research where (1) the problems to be solved are not framed in disciplinary terms, but rather in cross-disciplinary, even extra-academic terms, and (2) finding a satisfactory solution is taken to require the contributions of both researchers, possibly from many fields, and extra-academic partners (Brown et al., 2010; Hirsch Hadorn et al., 2008; Koskinen & Mäki, 2016; Pohl, 2008). The core idea is that finding solutions to pressing societal or environmental problems requires integrating knowledge from different sources. Even an adequate description of such problems may not be possible without taking into account the viewpoints of all relevant disciplines and all relevant stakeholder groups. An ideal transdisciplinary project would therefore reach high levels of "integration, assimilation, incorporation, unification and harmony of disciplines, views and approaches" (Choi & Pak, 2006, p. 356). While transdisciplinary projects are solution-oriented and focus on societally relevant, urgent problems, they cannot be defined as applied research, as the latter uses already existing scientific knowledge to solve practical problems. Transdisciplinary projects, by contrast, are supposed to produce new knowledge (Adler et al., 2018; Hirsch Hadorn et al., 2008; Leavy, 2011; Pohl & Hirsch Hadorn, 2007).
We argue that transdisciplinary research collaborations give rise to a risk of non-epistemic values playing inappropriate roles in research. It is not only the active participation of stakeholder representatives that may lead to unacceptable value influence. A potential source of problems is also that extra-academic partners typically play a dual role in transdisciplinary projects: they are both experts and stakeholders.
A core idea of transdisciplinarity is to combine scientific expertise with other kinds of expertise, such as expertise based on life experience (e.g., lay expertise) or professional experience (e.g., clinical expertise). Thus, extra-academic partners are expected to bring their knowledge and perspectives into the research collaboration. In some cases, transdisciplinary collaborations across the boundaries of science can even resemble interdisciplinary collaboration; extra-academic experts are expected to have knowledge about some of the practical aspects of the problem being studied – their expertise usually lies in the use of the produced knowledge rather than its production (Hirsch Hadorn et al., 2008; Hirsch Hadorn et al., 2010).
But in addition to bringing their knowledge to the research collaboration, extra-academic partners are often also representatives of a stakeholder group with distinct interests and values at stake. In transdisciplinary research projects, such interests and values are both welcomed and expected to inform the research, as this will increase the likelihood of the project producing solutions that can actually be implemented (Mobjörk, 2010; Solomon, 2009).
The latter role is in line with an argument which has been presented many times – in slightly varying forms – in the recent philosophical discussions about non-epistemic values in science. The starting point of this argument is the rejection of the value-free ideal, the view that non-epistemic values are not allowed to play any role in the practices whereby scientific knowledge claims are evaluated and justified, and ultimately accepted and communicated to others. The ideal is impossible to reach (Dupré, 2007; Longino, 1990, 2002), and, as many philosophers have argued, researchers must in fact make assessments throughout the research process that necessitate value judgments (Rudner, 1953; Douglas, 2000, 2009). This, however, creates a problem: it is undemocratic for scientists to make such value judgments by themselves. Abandoning the value-free ideal has therefore led many philosophers to suggest that extra-academic partners should be engaged in scientific knowledge production, and that they should actively take part in the value decisions that cannot be avoided (Alexandrova, 2017; Brown, 2009; Douglas, 2005, 2009; Elliott, 2011; Havstad & Brown, 2017; Kitcher, 2001, 2011). While we welcome the recent debate on the role of stakeholders in scientific research, we argue that the dual role of extra-academic partners as both experts and stakeholder representatives is a risk. In transdisciplinary research projects, there is a clear risk of illegitimate value influences.

¹ As we have noted, the list is open-ended. The toolkit we offer here is helpful when we focus on the roles moral, social, and political values can play in science. Criteria that are currently missing from the list but might prove necessary in some contexts include, for instance, ones that would help us distinguish between acceptable and unacceptable roles aesthetic values can play in science.
This is because it is not self-evident that the extra-academic partners – or the scientists with whom they are collaborating – are able to distinguish between the different roles. Is it clear when the extra-academic partners are offering knowledge and expertise, and when they are expressing their interests or value-laden views (Solomon, 2009)? If it is not clear, stakeholder values might end up in a role that could threaten the epistemic integrity of scientific research in ways described in the previous section: the acceptability of theories or hypotheses might end up being evaluated and justified on the basis of non-epistemic values alone, or non-epistemic values may end up playing the role of evidence, driving inquiry to a predetermined conclusion.
Obviously, the difficulty of discerning whether the extra-academic partners' dual role leads to such problems is not the only challenge related to the role of non-epistemic values in transdisciplinary collaborations. Extra-academic partners may in some situations be unwilling to act in epistemically responsible ways – for instance, as has been observed in industry-funded research, they may wish to prevent the publication of results that go against their interests (Resnik, 2007). However, the difficulty of distinguishing between the roles of extra-academic partners as experts and as stakeholders is a particularly tricky one and cannot be solved by simple measures such as agreements entered into at the beginning of the project. This is because while the two roles are conceptually distinct, in practice they are often indistinguishable: the expertise is so tightly interwoven with the values and interests that it would be hopelessly artificial to attempt to separate the two (Collins & Evans, 2002; Solomon, 2009; Mobjörk, 2010).
To give an example, we will now examine a funding programme that endorses transdisciplinary research, and problems encountered in two research projects funded by the programme.
Transdisciplinary research is much sought-after today, and many funding instruments are designed in ways that demand a transdisciplinary approach. In Finland, important funding for transdisciplinary research projects is channeled through the Strategic Research Council (SRC). The SRC defines its agenda as funding high-quality research with great societal relevance and impact. It aims to support research projects that seek concrete solutions to "grand challenges" that require multi- and transdisciplinary approaches. To meet the requirement of multidisciplinarity, research consortia must represent at least three different disciplines. To meet the requirement of transdisciplinarity, they should involve collaboration between those who produce scientific knowledge and those who use it throughout the project.
The Finnish Government decides on the "grand challenges" by defining thematic areas and priorities for each call for applications on the basis of a proposal from the SRC. For example, the thematic areas for 2021 are "Demographic changes – causes, consequences and solutions," "Environmental and social links to biodiversity loss," and "Pandemics as a challenge for society"; and for 2022, "Children and young people – healthy, thriving and capable makers of the future" and "Security and trust in the age of algorithms".² While the government decides the thematic areas and priorities, it is up to the research consortia to decide how they will address the themes in their research projects. In the application, they need to specify the challenges they will focus on, how they will increase understanding of the challenges, and how they will contribute to finding solutions to them. Thus, the SRC funding instrument is an attempt to combine a top-down model with a bottom-up one – thereby recognizing the need for solutions to certain pressing problems, and simultaneously agreeing with the core idea of transdisciplinary research: before the problem can be solved, it needs to be understood in a way that requires researchers to combine diverse perspectives from both inside and outside academia. Reaching such an understanding requires transdisciplinary research collaboration and cannot be accomplished by researchers on their own.
To meet the requirement of transdisciplinarity, an SRC consortium must identify relevant stakeholders and contact them already when planning the research project. The interaction plan of an SRC project should specify with whom, why, and how the consortium will collaborate and co-create to seek out solutions, and how the interaction partners will benefit from the collaboration. Co-creation is usually understood as a research approach whereby stakeholder representatives are actively involved in all steps of the research process, not just planning the research but possibly also participating in gathering, interpreting, and analyzing (or reanalyzing) research materials, as well as in the dissemination of research results. This, too, is in line with the core idea of transdisciplinarity: the extra-academic partners in a transdisciplinary project are not mere end users or recipients of knowledge, but active partners in its production.
The final reports of already completed SRC-funded projects provide us with examples of challenges that researchers can face in such collaborations. To illustrate our argument, we will now briefly focus on two such cases. Let us call them projects A and B.
In project A, the research consortium studied a model of action that steers land use, housing, transport, services, and economic development on different levels and sectors of government in Finnish city regions. The aim was to gain a better understanding of interdependencies within the model, and to examine possible pathways towards more sustainable solutions. When discussing collaboration with stakeholders, the researchers note that the processes they studied were, in practice, in the hands of a small number of experts familiar with the practices of regional planning. While such a situation may be unavoidable, as the amount of knowledge needed for effective participation in the planning processes is relatively high, the researchers worry about the situation: "This scarcity of actors, when combined with the exclusivity of the knowledge practices, may mean that the knowledge base of strategic city regional planning is severely diminished. Hence, e.g. the knowledges of the citizens living in the region might not be utilized." (SRC 2016-2019).
This worry is relevant from more than one viewpoint. The researchers were clearly concerned about the possibility that some important citizen needs and interests would unwittingly end up being ignored in the planning processes, resulting in suboptimal solutions for citizens. From our perspective, a closely related problem is quite clear: if only a very small group of experts can collaborate with the researchers in a transdisciplinary project, and if this group is relatively homogeneous, it can be particularly difficult to separate the values shared by expert group members from their expertise. If there is not enough knowledge about the values and interests of other stakeholder groups, it can be hard, for everyone involved, to evaluate to what extent some preferences expressed by the participating stakeholders are based on their interests – say, ease of implementation – and to what extent on their expertise.
In project B, the research consortium analyzed new security challenges to the resilience of Finnish society, focusing particularly on the interwoven nature of external and internal security, and on borders as an interface between domestic concerns and wider global considerations. The research team's collaboration with one of the partners, the Finnish Border Guard, did not go as expected – or rather, the researchers' expectations apparently differed from those of the Border Guard. According to the research plan, which all participants had agreed on, the researchers were to interview some border guards. However, "the Border Guard not only required the […] team to apply for permissions but demanded that interview protocols be checked by them beforehand." (SRC 2016-2019). The research team found this unacceptable, and in the end, this part of the project was carried out in a different way.
The researchers decided to prevent a situation where the Finnish Border Guard could have threatened the epistemic integrity of the project. Had it decided to ban or alter some of the questions the researchers wanted to ask, it might have endangered the integrity of the collected evidence – either by actually letting its interests and values shape the evidence and thus take the role of evidence, or by giving a reason to doubt that such a breach had happened. Naturally, it is difficult to tell what the motivations behind the demand that the interview protocols be checked beforehand were. But the researchers apparently felt that they could not have vouched for the integrity of the evidence. As we have emphasized, researchers in a transdisciplinary project are not in a position where they can easily discern whether a partner is acting in the role of an extra-academic expert, or in the role of a stakeholder with their own interests at heart. Therefore, abandoning the interview plans was not a surprising decision.
What, then, would our open-ended strategy suggest? While all three principles in the open-ended list of demarcation criteria are potentially relevant in transdisciplinary research, the challenge is to understand how they can be implemented in transdisciplinary projects. If the extra-academic participants are all specialized experts who quite possibly share many interests (as in project A), is there a risk that they will unwittingly let their interests influence their contributions in unacceptable ways? Or if the extra-academic experts wish to modify an interview protocol (as in project B), can the researchers know whether this is due to legitimate disagreement about, for instance, which epistemic values are important, or whether it is an illegitimate attempt to distort evidence?
The open-ended strategy would suggest that we take a closer look at the research design. Transdisciplinary collaborations should be organized – and in fact, they often are organized – in ways that enable researchers to apply one or more of the principles from the open-ended list described in the previous section. As transdisciplinary methods and research practices evolve, so do the practices that ensure the monitoring of various value influences.
As noted in project A, transdisciplinary collaborations would benefit from having diverse stakeholders as partners. Implementing this recommendation can be challenging, but where possible, transdisciplinary research projects should involve extra-academic participants from different stakeholder groups with different interests, because comparing their contributions can help distinguish the role that values and interests play in extra-academic expert knowledge. While the Strategic Research Council (SRC) we have used as an example in this section does not explicitly require this, it is strongly recommended in the literature discussing transdisciplinary research methods and practices (Pohl & Hirsch Hadorn, 2007; Hirsch Hadorn et al., 2008; Leavy, 2011).
One of the things the SRC does require is that, whenever possible, the data produced in transdisciplinary projects should be openly available. This, too, is a good way to alleviate the worry we have raised, as it increases the possibility that someone will recognize situations where non-epistemic value influences threaten to undermine the epistemic integrity of research. Also, to ensure that the results and the data are presented to larger scientific communities and have the chance of meeting criticism, transdisciplinary collaborations should be based on written agreements that guarantee the dissemination of the data and the publication of results. Unless there are ethical reasons for doing otherwise, the extra-academic partners should not be able to prevent scientists from carrying on with the research, making the data openly available, and publishing their results. While all these principles are endorsed by the SRC, the researchers in project B found the existing practices and guidelines insufficient. As there is much emphasis in the funding instrument on extra-academic cooperation, the researchers suggested that the SRC provide researchers with guidelines for good practices and codes of conduct for problematic situations.
In sum, we have argued that our open-ended strategy is helpful when philosophers are faced with hard cases of distinguishing legitimate from illegitimate non-epistemic value influence in science. Transdisciplinary research collaborations are hard cases precisely because they are meant to integrate scientific knowledge from more than one discipline with experience-based or professional knowledge from various stakeholders. While researchers may be well-rehearsed in applying the demarcation principles in the context of their own disciplines, they are likely to find it difficult in the context of other disciplines or in the context of experience-based or professional expertise provided by the stakeholders. The first part of the open-ended strategy advises researchers to seek help from well-established demarcation principles. The second part of the open-ended strategy identifies the social organization of the collaboration as the crucial issue. The social organization should be such that scientists are capable of identifying problematic instances of non-epistemic value influence in science. Our tentative suggestion is that transdisciplinary collaborations should aim for diverse stakeholder participation, open data, and written agreements about data sharing and publication plans.

Conclusion
It is misguided to look for necessary and sufficient criteria for demarcating between legitimate and illegitimate value influences in science. While values play many distinct legitimate roles in science, these roles are just that, distinct, and they are legitimate for different reasons. And the presence of just one of the many possible illegitimate roles is enough for us to condemn the role of values in a scientific study. Therefore, an attempt to find a strategy that could offer necessary and sufficient criteria is unlikely to be successful. This, however, does not prevent us from solving "the new demarcation problem" – the solution just needs to be identified separately in each context where it is needed.
We have suggested an open-ended approach to the new demarcation problem. It consists of an open-ended list of principles that can often be used when demarcating between legitimate and illegitimate value influences in science, and a "grounded" part, where one must find context-sensitive ways to determine whether the already listed principles suffice in that context, or whether some new, context-specific principles might be needed. The latter part of the strategy is needed only in unclear cases: Why is the case unclear? What can be done to amend the situation? When the situation is amended, do the existing criteria suffice for demarcating between acceptable and unacceptable value influences, or do we need to identify new criteria to be added to the open-ended list?
To illustrate our open-ended strategy, we have discussed a context where the distinction between legitimate and illegitimate non-epistemic value influences is both important and challenging: transdisciplinary research collaborations. We have argued that in this context the challenge is not so much to find new criteria for demarcation; rather, the challenge is to understand how such collaborations are best organized so that the already known criteria may do their work.

Funding
This work has been supported by the Academy of Finland grant no. 316695.

I. Koskinen, K. Rolin, Studies in History and Philosophy of Science 91 (2022) 191-198