1 Introduction

Recent years have witnessed a renewed interest in the concept of scientific objectivity (Alexandrova 2018; Eigi 2017; Koskinen 2017, 2018; Kusch 2017; Ludwig 2017; Padovani et al. 2015). According to a widespread view, scientific objectivity is the kind of feature that gives us permission to trust scientific knowledge claims. As Heather Douglas explains, when we say that a scientific knowledge claim is objective, we endorse the claim, stating that the claim can be trusted by ourselves and by others (2009, p. 117; see also Grasswick 2010; Scheman 2001).

I examine ramifications of the view that scientific objectivity can ground trust in scientific knowledge claims. When we trust a knowledge claim that p, we rely on p, for instance, by taking p as a premise in our reasoning or by acting on the assumption that p is likely to be true. As Koskinen (2018) argues, scientific objectivity involves a specific kind of reliance. When we claim that something is objective, we suggest that it is reliable in the sense that epistemic risks arising from the imperfections of epistemic agents have been averted effectively. For example, biases due to sexist and racist ideologies are epistemic risks arising from the imperfections of epistemic agents. Many accounts of scientific objectivity identify epistemic risks of this type and strategies for eliminating such risks. One such account is Helen Longino’s critical contextual empiricism, which recommends a collective strategy for managing epistemic risks. According to Longino, scientific knowledge claims are objective to the degree that a relevant scientific community satisfies the four criteria of “public venues,” “uptake of criticism,” “public standards,” and “tempered equality of intellectual authority” (2002, pp. 129–131; see also 1990, pp. 76–81). The four criteria, she argues, increase objectivity because together they facilitate “transformative criticism” (1990, p. 76), which is instrumental in eliminating errors.

While I do not object to this analysis of objectivity, I argue that there is more to objectivity than this. If trust in scientific knowledge claims involves not merely reliance on the claims but also trust in scientists who present the claims (Douglas 2009; Grasswick 2010; Longino 1990; Scheman 2001), then we need to take a closer look at the notions of epistemic trust and trustworthiness. Trust is epistemic when it provides a reason to believe or accept a view, and epistemic trust is rational when it is based on evidence of the trustworthiness of the person whom we trust. In a relation of epistemic trust, a person A trusts another person B to have good reasons to believe that p, and A’s trust in B is a reason for A to believe that p (Hardwig 1991, p. 697). As Wilholt explains, “To invest epistemic trust in someone is to trust her in her capacity as provider of information” (2013, p. 233). Moreover, trusting is the kind of relying that makes us vulnerable and dependent on the goodwill of the trusted person towards us (Baier 1986, p. 251). This means that “The trusting can be betrayed, or at least let down, and not just disappointed” (1986, p. 235). As Annette Baier explains, trusting involves relying on the trusted person’s “discretionary powers” in deciding what needs to be done (1986, p. 237). For example, when a person A places her epistemic trust in B, A relies on B to use her judgment to decide what it takes to provide reliable, relevant, and significant information (see also Grasswick 2010).

The notion of epistemic trust is relevant when a person finds herself in a relation of epistemic dependence. Such a relation can hold between scientists and non-scientists but also between scientists. The latter case occurs, for example, in scientific collaborations when a scientist depends on other members of her team to produce crucial pieces for their joint research paper (Wagenknecht 2015). The former case takes place, for example, when a member of the public depends on scientists to provide her with knowledge that is vital for her to know. In general, a person A is epistemically dependent on another person B when B possesses knowledge that A is dependent on and it is more rational for A to rely on B than to rely on herself or to spend a significant amount of time and effort acquiring and understanding the evidence B has (Hardwig 1985). Unless A wishes to stay ignorant, she can try to manage the relation of epistemic dependence by considering whether epistemic trust in B is a rational way of grounding her belief. However, if A has access to the evidence that B possesses and the expertise necessary for analyzing the evidence on her own, she does not need to trust B. Epistemic trust is redundant when A can do the epistemic assessment on her own without depending on B.

When we find ourselves in a relation of epistemic dependence, trust in scientific knowledge claims involves not merely reliance on the claims but also trust in scientists who present the claims. While philosophers have considered various accounts of trustworthiness (Simon forthcoming), hardly anyone denies that scientists’ trustworthiness depends on their expertise, honesty, and social responsibility (Anderson 2011; Goldman 2001; Hardwig 1991; Kourany 2010). The requirement of expertise means that when we trust scientists as providers of information, we believe them to possess a relatively high level of knowledge in a particular domain, an ability to deploy this knowledge in answering questions, and an ability to generate new knowledge (Goldman 2001, p. 91). The requirement of honesty means that we believe them to be honest in communicating their views to us (Anderson 2011; Hardwig 1991). While there is disagreement over what the requirement of social responsibility means, in Sect. 2 I will defend the view that social responsibility requires scientists to follow “sound” moral and social values in different stages of scientific inquiry (Kourany 2010, p. 106). The trusted person needs to meet the requirements of honesty and social responsibility to demonstrate goodwill towards those who are epistemically dependent on her (Grasswick 2010, p. 406).

Especially those philosophers who are critical of the value-free ideal of science have embraced the view that social responsibility requires scientists to follow “sound” moral and social values (Kourany 2010, p. 106; see also Alexandrova 2018; Brown 2013; Intemann 2015; Rolin 2012). They believe that epistemic trust in scientists involves trust in their capacity to make sound moral and social value judgments in research (Wilholt 2013, p. 248; see also Brown 2013; Frost-Arnold 2013; Hardwig 1991; Kourany 2010; Rolin 2016a). The critics of the value-free ideal argue that moral and social value judgments are unavoidable when scientists use morally or socially value-laden concepts (Alexandrova 2018), or when they make judgments concerning morally and socially acceptable inductive risk (Douglas 2009). Insofar as scientific knowledge is not free from moral and social values, to trust a scientific knowledge claim is in part to trust that it is based on appropriate moral and social value judgments (Alexandrova 2018, p. 436). Trust in scientists’ capacity to make such value judgments is especially important when scientists function as experts in society. Members of the public and policymakers do not want to rely on scientific research shaped by moral and social values they have good reasons to reject.

It is not my aim to pursue further the question of how trustworthiness is to be understood. The above analysis suffices for identifying an understudied aspect of trustworthiness: social responsibility. Insofar as objectivity plays a role in legitimizing epistemic trust, we have a reason to believe that an account of objectivity should also involve an account of socially responsible science.

In line with the special issue topic “Objectivity in Social Research,” I examine what socially responsible science amounts to especially in social research. The challenge is to understand what it means for social scientists to follow appropriate moral and social values in different stages of scientific inquiry. The underlying assumption is that social scientists have a reasonable degree of autonomy to make decisions concerning, for example, what they study and how they study it. By a reasonable degree of autonomy, I mean autonomy that is constrained by the requirements of research ethics and available research funding (see also Hicks 2011). Within these constraints, it is up to social scientists to decide which moral and social values guide their research. In order to earn and maintain trustworthiness, social scientists should make these decisions in a socially responsible way. Autonomy and social responsibility do not cancel each other out. Quite the contrary, autonomy is a precondition for social responsibility, and social responsibility helps social scientists protect their autonomy.

At least two questions await further exploration. One question is what roles moral and social values can legitimately play in different stages of scientific inquiry. I call this the Proper Roles Question. Another question is how social scientists can identify appropriate moral and social values, the values that should play the proper roles in scientific inquiry. I call this the Proper Values Question. The Proper Values Question is raised within the framework of liberal pluralism, assuming that there is room for reasonable disagreement over moral and social values as long as the values do not jeopardize liberal democracy (see also Van Bouwel 2009).

In response to the Proper Values Question, I argue that procedural accounts of social responsibility cannot guarantee that scientists receive adequate information about appropriate moral and social values. Philosophers’ attempts to define and defend an ideal procedure of pooling information about citizens’ value perspectives miss an important aspect of well-functioning liberal democratic societies: the on-going struggle to make visible the social experiences of subordinate or marginal social groups. Social experiences play an important role in deliberative democracy because they are sources of moral and social value judgments that can enter into the process of deliberation about policy issues (Bohman 2006). Purely procedural accounts of social responsibility will be incomplete as long as the available pool of social experiences and value perspectives is incomplete.

I start my inquiry into socially responsible science by reviewing different answers to the Proper Roles Question in Sect. 2. The review is necessary in order to understand the challenges we face when we address the Proper Values Question. In Sect. 3, I discuss three different approaches to the Proper Values Question, an expert-driven, a market-driven, and a government-driven approach. I explain why they are unsatisfactory. In Sect. 4, I discuss two procedural accounts of social responsibility, well-ordered science (Kitcher 2001, 2011) and deliberative polling (Alexandrova 2018; Fishkin 2009). I show that the two accounts have limitations and need to be supplemented with other strategies. In Sect. 5, I propose one strategy to increase the social responsibility of scientific research. I argue that scientific/intellectual movements have an important role to play in this strategy.

2 The proper roles question

In response to the Proper Roles Question, many philosophers would argue that moral and social values justify the imposition of constraints on scientific inquiry by ordering scientists not to harm human subjects. Scientists have an obligation to protect the privacy of human subjects, and the latter should be free to decide whether they give their informed consent to any specific research project. Moreover, moral and social values play an irreducible role in decisions concerning what topics are worthy of research and for what practical purposes scientific knowledge is sought. For example, Philip Kitcher (1993) argues that moral and social values play a role in decisions concerning significance in scientific research. According to Kitcher, the goal of scientific inquiry is significant truth rather than plain truth, and consequently, the goals of scientific inquiry are as many as there are views about significance in science (1993, p. 94). In his view, significance reflects the non-epistemic concerns of the age even when such concerns appear to be internal to science (Kitcher 2001, p. 82).

In this section, I argue that besides playing a role in agenda-setting, moral and social values can enter into the core of scientific inquiry, thereby undermining the ideal of value-free science. The value-free ideal is the view that moral and social values should not play any roles in the core of scientific inquiry, in which scientific knowledge claims are epistemically justified or criticized, and ultimately, either accepted or rejected. I review four arguments against the value-free ideal: an argument from inductive risk, an argument from pluralism (with respect to epistemic values), an argument from normative background assumptions, and an argument from qualitative data collection. Each argument helps us understand how moral and social value judgments can be part of social research and why such value judgments should be made in a socially responsible way. Most importantly, the arguments show that socially responsible science should not be equated with value-free science.

2.1 Argument from inductive risk

According to the inductive risk argument, the value-free ideal is not feasible because moral and social values have a legitimate role to play in the evaluation of risks involved in accepting scientific knowledge claims (Douglas 2009). Since accepting knowledge claims involves uncertainty, scientists have to decide when the evidence at hand is sufficiently strong to warrant acceptance. This decision depends on the consequences of accepting knowledge claims. If a scientist accepts a false hypothesis, there may be a cost associated with this type of error. If she rejects a true hypothesis, there may be another cost associated with the other type of error. The assessment of the costs involved in these two mistakes is a matter of moral value judgment.

According to Douglas (2009), the problem with the value-free ideal is that the ideal encourages scientists to ignore their moral responsibility. Like other human beings, scientists are morally responsible for their actions and the foreseeable consequences of their actions (2009, p. 67). This means that we can praise or blame scientists for their actions. Blame is an appropriate attitude when scientists are morally responsible for harms caused by their making overly strong knowledge claims and downplaying the risk of error (2009, p. 87). When scientists do not make moral value judgments concerning acceptable risk, they can be blamed for being reckless or negligent (2009, p. 81).

In response to the Proper Roles Question, Douglas introduces a distinction between a direct and indirect role. Values play a direct role when they act as reasons in themselves to accept a knowledge claim and an indirect role when they act as reasons to accept a certain level of uncertainty (2009, p. 96). While moral and social values should not play a direct role, they can legitimately play an indirect one. A direct role is not acceptable because it means that non-epistemic values would play the same role as evidence does. As Douglas explains, “values are not evidence, and should not take the place of evidence in our reasoning” (2009, p. 156). An indirect role is acceptable because scientists are morally responsible for their knowledge claims and the predictable consequences of making such claims (2009, p. 106). While some philosophers argue that the distinction between direct and indirect roles for values in science stands in need of further clarification (Elliott 2011), Douglas’s arguments are often taken to show that moral and social values should play a role in decisions concerning when evidence is sufficient.

In sum, the inductive risk argument gives rise to the question of which moral and social values should guide the evaluation of risks involved in error. As de Melo-Martín and Intemann (2016) argue, it does not follow from the inductive risk argument that scientists should evaluate risks on their own. Nor does it follow that scientists should evaluate risks on the basis of their own moral and social values (see also Elliott 2017). Scientists, like other human beings, can cause harm without intending to do so because they do not understand what others perceive as harmful. This is why it is an open question, within what Kusch (2007) calls “political philosophy of risk,” which groups should participate in the assessment of potential harms and which moral and social values should guide judgments about the acceptable level of uncertainty.

2.2 Argument from pluralism

The argument from pluralism is concerned with the pluralism of epistemic values rather than with the pluralism of moral and social values. The argument holds that the value-free ideal is not attainable because moral and social values can legitimately play a role in determining which epistemic values scientists emphasize when they evaluate theories (Longino 1995, 2002). By definition, an epistemic value promotes the attainment of truth, either intrinsically or extrinsically (Steel 2010). As Daniel Steel explains, an epistemic value is intrinsic when manifesting that value constitutes an attainment of or is necessary for truth, and it is extrinsic when it promotes the attainment of truth without itself being an indicator or a requirement of truth (2010, p. 18). Given this definition, epistemic values are a diverse set of criteria and desiderata, including theoretical virtues such as empirical adequacy, internal and external consistency, and explanatory power. It follows that there may be trade-offs between epistemic values (e.g., between empirical adequacy and simplicity), such that scientists cannot maximize two values at the same time. The argument from pluralism claims that scientists are allowed to appeal to moral and social values when they decide how to weigh different epistemic values. Thus, the argument gives rise to the question of which moral and social values should guide scientists’ decision to emphasize some epistemic values at the expense of others.

2.3 Argument from normative background assumptions

According to the argument from normative background assumptions, the value-free ideal is not feasible because moral and social values can legitimately influence the choice of background assumptions, which play a role in scientists’ decisions to accept knowledge claims (Longino 1990, 2002). As Longino explains, the acceptance of scientific knowledge claims takes place in a context of background assumptions, which are needed to establish the relevance of empirical evidence to a hypothesis or a theory (1990, pp. 43–44; see also 2002, p. 127). While background assumptions may not always “encode” moral and social values, they sometimes do so (1990, p. 216). Normative background assumptions figure especially in research on value-laden phenomena, such as human behavior. In response to the Proper Roles Question, Longino argues that moral and social values can have an impact on background assumptions “without violating constitutive rules of science” (1990, p. 83).

What Alexandrova calls “mixed claims” also presuppose normative background assumptions as such claims “mix the normative and the empirical in a way that ordinary scientific claims do not” (2018, p. 422). For example, causal hypotheses about factors that either increase or decrease human well-being are typically mixed claims because they presuppose a normative conception of human well-being. In Alexandrova’s view, we need an account of objectivity that “ensures that values have undergone an appropriate social control, giving a community reasons to trust this knowledge” (2018, p. 436). She emphasizes that “Such a control may not warrant blanket trust in a research project overall, but it would at least warrant trust in the project’s value presuppositions, at least by the community that exercised control over these values” (2018, p. 436).

The argument from normative background assumptions gives rise to the question of which moral and social values should guide choices between alternative background assumptions.

2.4 Argument from qualitative data collection

According to the argument from qualitative data collection, the value-free ideal is not feasible because moral and social values play legitimate roles in participant observation and semi- and unstructured interviews (Zahle 2018). In qualitative social scientific research, research questions are open to revisions during the process of data collection. This is in part because researchers want to have input from the participants of research, that is, the human subjects they interview, observe, or interact with in some other way. Julie Zahle (2018) argues that because of the open and interactive nature of qualitative data collection, moral and social values can play a role in decisions about how, if at all, to change research questions as well as in decisions about what data to collect. For this reason, the value-free ideal should not be applied to qualitative data collection in the social sciences. In her view, qualitative data collection should be guided by other ideals, such as relevance and balance (Zahle 2018, p. 149).

In this section, we have seen four ways in which moral and social values can legitimately enter into the core of scientific inquiry. They can guide choices concerning inductive risk, epistemic values, normative background assumptions, research questions and what data to collect. This is not, of course, to suggest that moral and social values cannot have harmful effects on scientific inquiry. They are harmful, for example, when they lead to dogmatism. As Elizabeth Anderson explains, we need to ensure that “value judgments do not operate to drive inquiry to a predetermined conclusion” (2004, p. 11).

The four arguments against the value-free ideal also suggest that we give up the conception of social responsibility that goes together with the value-free ideal. As Kourany argues, the value-free ideal is not an apolitical ideal of science. It is informed by a certain view of the proper relation between science and society, namely that society is best served by value-free scientific knowledge (2010, p. 57). The proponents of the value-free ideal assume that science and democracy are reconciled when scientists refrain from making any moral or social value judgments in their research and leave the task of making such judgments to citizens or democratically elected representatives (Betz 2013). Given this view of the proper relation between science and society, socially responsible science amounts to value-free science. However, Kourany (2010), like many other philosophers, rejects this view on the grounds that the value-free ideal is not attainable. She proposes an alternative conception of social responsibility. According to this conception, scientific research is socially responsible when scientists follow “sound” moral and social values in different stages of scientific inquiry (Kourany 2010, p. 106).

In the remaining sections, I seek answers to the question of how scientists are to receive information about “sound” moral and social values before they undertake a research project. That moral and social values can play legitimate roles in various stages of scientific research does not necessarily undermine democratic principles. To reconcile science with democracy, we need to understand how scientists’ value judgments can be integrated with democratic principles. How do we ensure that scientists do not impose their personal value judgments on citizens? How do we ensure that scientists are not forced to respond to value perspectives they have good reasons to reject?

3 The proper values question

The Proper Values Question is the question of how scientists can identify the moral and social values that should guide their value judgments when they are engaged in scientific research. In this section, I discuss three approaches to this question, an expert-driven, a market-driven, and a government-driven approach. Each of the three approaches is an ideal type, which can be realized to some degree and mixed with other types.

3.1 Expert-driven approach

According to an expert-driven approach, scientists do not need to consult citizens or democratically elected representatives to acquire information about appropriate moral and social values for science. For example, Michael Polanyi, who thought that democratic control of science could only be harmful to science (Jarvie 2001, p. 552), advocated this approach. Putting Polanyi’s argument aside, a contemporary version of the expert-driven approach could be based on the assumption that there are universal moral and social values and scientists can have knowledge of them. Insofar as scientists conform to universal moral and social values in different stages of scientific inquiry, their research is socially responsible, and there is no need to negotiate with other parties. Given this approach, social responsibility does not require scientists to be responsive to citizens’ value perspectives, some of which may diverge from the “right” universal moral and social values. For example, one candidate for universal moral and social values is the list of central human capabilities. The list includes ten values: life; bodily health; bodily integrity; sense, imagination, and thought; emotions; practical reasoning; affiliation; other species; play; and control over one’s environment (Nussbaum 2000, pp. 78–80).

It is not my intention to discuss the possibility of universal moral and social values here. My argument is rather that even if there are universal moral and social values and scientists can have knowledge of them, such values will not eliminate the need for scientists to consult citizens or their democratically elected representatives. The reason for this is that universal moral and social values make room for a diversity of conceptions of the good life, and in any particular research project, scientists face the problem of choosing between alternative conceptions. For example, the list of human capabilities excludes only those alternatives that do not involve respect for human rights or concern for the environment and other species. It can accommodate a pluralism of moral and social value perspectives within these constraints. Pluralism of value perspectives follows partly from the value of liberty, which urges us to respect different individual choices, and partly from the value of national sovereignty, which urges us to respect different political choices aiming to realize the central human capabilities. I maintain that scientists do not have any special expertise or authority to select values from the pool of acceptable value perspectives, and therefore, they should consult citizens. Even if a scientific community were diverse in terms of scientists’ gender, race, class background, and sexual orientation, it would not be clear why scientists, who have an interest in promoting their own research agendas, should make value judgments on behalf of the whole society.

For these reasons, I do not consider an expert-driven approach viable. I do not claim that experts should not have any say in decisions concerning moral and social values in science. My claim is rather that they should not have disproportionate power in such decisions (see also Reiss 2019). As Turner (2006) argues, granting experts too much power undermines democratic principles in two ways. Excessive expert power poses a threat to the equality of citizens. It also threatens the ideal of neutrality, that is, the view that in liberal democratic societies, the state should aim to be neutral with respect to specific conceptions of the good life and treat these conceptions even-handedly (see also Lacey 2013).

If we reject an expert-driven approach, the next step is to ask which mechanisms can be used to pool information about citizens’ value perspectives. In the remaining parts of the section, I consider both a market and a political mechanism.

3.2 Market-driven approach

According to a market-driven approach, the market is the best place to find out about the value perspectives of citizens. This approach is often associated with American Cold War doctrines concerning the proper relations between science and society. The underlying assumption is that citizens’ choices in the market reflect their actual values better than, for example, majority-voting procedures (Mirowski 2004, p. 307). Given a market-driven approach, socially responsible scientists should aim to respond to the demand in the market. Scientists may do this, for instance, by collaborating with entrepreneurs who specialize in the formation of startups, or by working for companies that have information about consumer preferences. A market-driven approach is consistent with numerous efforts to commercialize scientific knowledge, for instance, via patenting, consultancy, and academic entrepreneurship.

The major flaw in a market-driven approach is the assumption that the market is the best place to find information about the value perspectives of citizens. This assumption is false. The demand in the market reflects the values and preferences of the wealthy segments of society better than the values and preferences of the poor because the poor have limited resources to express their values and preferences by means of consumer choices. For this reason, a market-driven approach needs to be supplemented with a political approach of some sort to acquire information about the value perspectives of all citizens in the society, including members of marginal or subordinate social groups. What exactly should the political approach be like? Next, I discuss one sort of political approach, a government-driven approach.

3.3 Government-driven approach

According to a government-driven approach, elected politicians and the policymakers they have nominated are in the best position to express the value perspectives of citizens (at least in liberal democratic societies). Historically, this approach has been attractive, for example, to those science policy actors (e.g., James B. Conant) who wished to ensure that scientists contribute to national defense and security not just in times of war but also in times of peace (Biddle 2011). Given a government-driven approach, governments, or special committees nominated by governments, should set agendas for scientific research. They can do this by channeling funds directly to research and development projects (e.g., the Manhattan Project), or by introducing funding instruments that target specific themes and topics that are in line with government policy.

I argue that even though a government-driven approach avoids the problem of excessive expert power and excessive commercial interests in science, it has its own shortcomings. Elected politicians and the policymakers they have nominated often have only a partial picture of the needs of society, and they are not immune to pressures from powerful interest groups. Moreover, there is a conflict between the rhythm of politics and that of scientific research. Whereas government policies tend to change with every election period, scientific research demands stability over a longer period to deliver results. There is a danger that a government-driven approach prevents scientists from addressing complex social, environmental, and technological problems by pushing scientists towards superficial and fast research projects. In the worst-case scenario, an authoritarian government supersedes a liberal democratic order and abolishes the autonomy of scientific institutions.

In sum, none of the three approaches—an expert-driven, a market-driven, or a government-driven—provides a satisfactory answer to the Proper Values Question. Yet, it would be premature to give up the attempt to reconcile scientists’ moral and social value judgments with democratic principles. In the next section, I discuss two models for socially responsible science, well-ordered science and deliberative polling. The two models aim to improve on an expert-driven, a market-driven, and a government-driven approach by inviting input from a variety of stakeholders, from experts and policymakers to citizens whose lives scientific research influences, either directly or indirectly.

4 Procedural accounts of social responsibility

The two models for socially responsible science aim to solve a dilemma between public participation and informed decision-making. On the one hand, the ideal of public participation holds that scientists should consult as many citizens as practically possible to ensure that democratically endorsed moral and social values guide scientific research. On the other hand, the ideal of informed decision-making holds that citizens should have adequate information about the current state of science, the potential risks and benefits of various lines of inquiry, and the cost of scientific research to be able to make informed decisions. There is a tension between these two ideals. Large-scale participation tends to undermine informed decision-making and vice versa.

One solution to the dilemma is a mini-public that is representative of the larger public and capable of deliberation and informed decision-making. Ideally, a deliberation in a mini-public provides scientists with information about the public’s counterfactual informed democratic decisions concerning appropriate moral and social values in science. In other words, a deliberating mini-public tells scientists what decisions citizens would make if they had an opportunity to participate in a deliberation and if they had sufficient information about relevant alternatives and their benefits, costs, and risks.

It is important to notice that a deliberation in a mini-public does not have the political authority or the legitimizing force that actual democratic decisions have (Keren 2015). While political authority lies in well-functioning liberal democratic institutions, the goal of the mini-public is to provide scientists with information about the many needs, interests, and value perspectives of citizens. In this section, I argue that purely procedural accounts of socially responsible science do not succeed in meeting this goal.

4.1 Well-ordered science

According to Kitcher, the need for well-ordered science grows out of the insight that scientific inquiry aims for significant truth, and not just plain truth (2011, p. 111). As significance is a morally and socially value-laden concept, the challenge is to understand how scientists’ judgments of significance are integrated with democratic principles (2011, p. 106). As Kitcher argues, scientific knowledge is rarely an “all-purpose instrument” for society, and hence, science needs democratic guidance (2011, p. 108).

Kitcher proposes the view that “science is well ordered when its specification of the problems to be pursued would be endorsed by an ideal conversation, embodying all human points of view, under conditions of mutual engagement” (2011, p. 106; see also 2001, pp. 122–123). He argues that well-ordered science is a better option than science subjected to actual democratic decisions because the great majority of citizens do not have sufficient understanding of science to be able to make informed decisions (2011, p. 113; see also 2001, p. 117). A majority vote is likely to lead to “the tyranny of ignorance” (2011, p. 118).

In well-ordered science, scientists’ moral and social value judgments conform to the democratic decisions of ideal deliberators who are representatives of different social groups (Kitcher 2001, p. 123). Ideal deliberators are “tutored” by “disinterested” experts (2001, p. 120; see also 2011, p. 114), and in case of expert disagreement, they have a chance to hear both sides of the controversy (2011, p. 115). Besides being familiar with the current state of scientific research and “the atlas of scientific significance” (2011, p. 127), deliberators are sensitive to one another’s needs (2011, p. 129; see also 2001, p. 119). Kitcher even suggests that “The atlas of scientific significance could be supplemented with an index of human needs” (2011, p. 129). Moreover, the group of deliberators could include representatives for those who cannot represent themselves, such as future generations, animals, children, and severely disabled people (2001, p. 126; see also 2011, p. 116).

According to Kitcher, a deliberation can end in two ways. The deliberators may reach an agreement on an ordered list of research agendas, or in case they do not reach an agreement, they may take a vote on some alternatives (2001, p. 119). In well-ordered science, the deliberators do not have the authority to make decisions about research funding; instead, their role is to inform funding agencies, universities, and research institutions about the moral and social values that should guide such decisions (2001, p. 122). Funding agencies, universities, and research institutions can use this information to create opportunities and incentives for certain research agendas. Similarly, scientists can use the information to make sound moral and social value judgments in their research. Thus, well-ordered science does not impose strict constraints on the freedom of scientific inquiry (understood as freedom constrained by research ethics).

Kitcher’s well-ordered science strikes many philosophers as a highly idealized procedure of pooling information about citizens’ value perspectives. As Kitcher himself stresses, his intention is to describe an ideal at which scientific institutions should aim (2001, p. 123; see also 2011, p. 125). He says that “any actual conversation of this type is impossible” (2011, p. 115). Yet, he suggests that scientific institutions can move towards the ideal by identifying those aspects of present-day science that depart in striking ways from well-ordered science. He identifies four aspects. First, contemporary science is constrained by historical contingencies that no longer reflect human needs (2011, pp. 125–126). Second, politicians, interest groups, and social movements practice democratic control of science in an adversarial rather than deliberative way (2011, p. 126). Third, the privatization of scientific research moves some areas of science away from democratic control of research agendas (2011, p. 126). Fourth, current scientific research neglects the interests of a vast number of people (2011, p. 127). As the fourth aspect indicates, Kitcher is concerned with the problem that scientific agenda-setting does not take into account the needs of women, children, members of minorities, and people in developing countries (2001, p. 127).

Given that Kitcher intends well-ordered science to be inclusive of the value perspectives of marginal or subordinate social groups, it is fair to ask how well-ordered science should be developed or refined so that it accomplishes this task. The ideal of well-ordered science raises many questions, such as how deliberators and experts are chosen, and how their interactions are structured (Leuschner 2012, p. 194). According to Kitcher, it is a matter of empirical inquiry to find out which procedures and structures best approximate well-ordered science (2011, p. 225). He suggests that James Fishkin’s (2009) deliberative polling is one way to run an experiment on such procedures and structures (2011, p. 225). In the next section, I discuss deliberative polling as a model for socially responsible science.

4.2 Deliberative polling

While Kitcher’s well-ordered science is an ideal that scientific institutions cannot fully realize, deliberative polling is a procedure that can be implemented in practice. Another difference between well-ordered science and deliberative polling concerns the question of what representation means. While Kitcher does not specify in what sense deliberators “represent” different groups in the society, deliberative polling requires that deliberators be a random sample selected from a larger population. According to Fishkin, random sampling is one way to realize the democratic ideal of equality even though randomly selected deliberators are not “representatives” in the same way as democratically elected persons are representatives of their constituents (2009, p. 39).

Like Kitcher, Fishkin (2009) is concerned with the inclusiveness of deliberating mini-publics. He is worried about the “internal exclusion” of marginal or subordinate social groups, which occurs when “some people may be in the room but without having their views taken seriously” (2009, p. 160). As a remedy to the problem of “internal exclusion,” Fishkin proposes oversampling marginal or subordinate social groups (e.g., the Aboriginals in Australia) to ensure that minority representatives are heard by representatives from larger social groups (2009, p. 162). While both Kitcher and Fishkin are concerned with inclusiveness, their approaches diverge at a crucial point. By emphasizing the ideal nature of deliberation, Kitcher wishes to include not merely the viewpoints of marginal or subordinate social groups but also the viewpoints of those who cannot represent themselves (e.g., future generations, animals, children, and severely disabled people).

Deliberative polling occurs when a random sample of people selected from relevant social groups participate in a small group discussion with a moderator. To facilitate deliberation and informed decision-making, experts on various topics relevant to the policy issue at hand brief the group members. As in well-ordered science, a deliberative process may culminate in an agreement or a majority vote. Ideally, the outcome reflects the counterfactual informed democratic decision of relevant social groups, that is, the decision they would make if they had an opportunity to deliberate and to make an informed democratic decision.

Like Kitcher, Alexandrova (2018) proposes deliberative polling as a model for socially responsible science. She argues that deliberative polling is appropriate when there is a disagreement over the question of which moral and social values should guide scientific research. To ensure that values have undergone an appropriate social control, scientists should follow three rules. The first rule tells scientists to “unearth the value presuppositions in methods and measures” (2018, p. 437). The second rule advises scientists to “check if value presuppositions are invariant to disagreements” (2018, p. 438). In case value judgments are invariant to several moral and political outlooks, scientific research is socially responsible, and no further steps are necessary. However, in case value judgments vary significantly across different moral and political outlooks, scientists should proceed to the third rule that recommends deliberative polling. As Alexandrova explains, “When the choice of a measure of well-being is a choice between conflicting sets of values, the only way to practice trustworthy science is to make this choice in a deliberative public setting that includes the relevant parties” (2018, p. 439).

The crucial question in deliberative polling is who the relevant parties are. The answer to this question depends on the subject matter of inquiry. In the case of human well-being, Alexandrova (2018) argues, representatives from three groups of people should participate in deliberations. One group consists of scientists and scholars who study human well-being, another group of the policy users of scientific research, and yet another group of citizens who are likely to be affected when research results are put into practice through policies, therapies, and other interventions (2018, p. 440).

Thus far, we have seen two models aiming to integrate scientists’ moral and social value judgments with counterfactual informed democratic decisions. The two models have advantages over expert-driven, market-driven, and government-driven approaches. Unlike expert-driven and market-driven approaches, they avoid the problem of excessive expert power and excessive commercial interests, both of which undermine the democratic principles of equality and neutrality. Unlike government-driven approaches, they invite input directly from citizens whose lives scientific research is likely to influence. They also grant a reasonable degree of autonomy to scientists by emphasizing that a deliberation in a mini-public is a means to inform scientists about citizens’ value perspectives, and not a means to make funding decisions. Despite these advantages, I wish to draw attention to their limitations.

4.3 Criticism

I argue that as tools of information processing, the two models for socially responsible science are only as inclusive as the pool of alternative value perspectives they engage. The two models do not offer any means to extend this pool. Yet, the pool of alternative value perspectives is limited due to relations of power that prevent some social groups from communicating their social experiences and value perspectives to others.

Let me explain how relations of power can prevent people from communicating their social experiences and value perspectives to others. For example, relations of power can intimidate potential testifiers. Some people are not willing to share their social experiences and value perspectives because they are afraid of the consequences. Besides invoking fear, relations of power can suppress testimonies by giving rise to painful emotions such as shame. A sense of shame may prompt a potential testifier to hide or cover up her social experiences, thereby silencing her. Relations of power can also suppress or distort testimonies by means of “testimonial smothering” (Dotson 2011). As Kristie Dotson defines it, testimonial smothering means “the truncating of one’s own testimony in order to insure that the testimony contains only content for which one’s audience demonstrates testimonial competence” (2011, p. 244). Moreover, relations of power can suppress or distort testimonies by inflicting “hermeneutical injustice” on potential testifiers (Fricker 2007). Hermeneutical injustice is the injustice of having one’s social experience obscured from collective understanding owing to a lacuna in collective hermeneutical resources (Fricker 2007, p. 158). Insofar as social experiences and value perspectives remain private, they cannot enter into public deliberation. Thus, relations of power can limit not only who participates in a mini-public but also the pool of alternative value perspectives that deliberators bring into the mini-public, thereby undermining the democratic values of equality and inclusiveness.

More could be said about the ways in which relations of power hamper communication. My brief review is meant to show that relations of power are potentially a problem for well-ordered science and deliberative polling. Inclusion of marginal or subordinate social groups does not automatically lead to inclusion of their social experiences and value perspectives. The former inclusion is merely a step towards the latter inclusion. This is why random sampling—or oversampling marginal or subordinate social groups—is not enough to guarantee the diversity of social experiences. Nor is it sufficient to appeal to the index of human needs, as Kitcher does (2011, p. 129). Even if there were such an index, relations of power could prevent deliberators from seeing that the needs of some social groups are not satisfied. The index would have to be supplemented with a process that helps scientists understand why the needs of some social groups are not satisfied and how they could be met.

As James Bohman (2006) argues, the success of deliberation depends on the diversity of social experiences participants bring into the deliberation. This is because social experiences embody important information about society. Social experiences are social in two senses: they arise in particular social locations and they are shared with other people inhabiting similar social locations. Social experiences may also involve local knowledge, that is, knowledge about particular cultural, economic, or social practices and their circumstances. Given this understanding of social experience, important information about society is distributed across different social locations. The more complex societies are in terms of the division of labor and the more unequal they are in terms of economic resources and opportunities, education and health, the more radically different the social experiences of the citizens are likely to be. Moreover, the more multicultural societies are in terms of ethnic identities and languages, the more likely it is that the social experiences of the citizens diverge.

To summarize, as tools of information pooling, well-ordered science and deliberative polling are limited by the pool of value perspectives that have already been made public. This is because they are purely procedural accounts of social responsibility. They define a procedure that would ideally tell scientists which moral and social values citizens would prefer if they made informed decisions. In order to understand how the existing pool of value perspectives could be made inclusive of the value perspectives of subordinate and marginal social groups, we need to supplement procedural accounts of socially responsible science with other strategies.

In the next section, I propose one strategy to broaden the pool of value perspectives. The strategy advises us to take seriously the view that relations of power can prevent people from communicating their social experiences and value perspectives to others. The task is to understand how we can challenge relations of power and the barriers they raise for communication.

5 Scientific/intellectual movements

I argue that scientific/intellectual movements (SIMs) can play an important role in increasing the social responsibility of scientific research. As Scott Frickel and Neil Gross define them, “SIMs are collective efforts to pursue research programs or projects for thought in the face of resistance from others in the scientific or intellectual community” (2005, p. 206). The social sciences are especially apt for the creation of SIMs because they include “advocacy scholarship,” that is, research that aims to articulate the standpoint of a social group in a way that is intelligible and instructive to the members of the group (Turner 2009, p. 175). Advocacy scholarship often involves collaboration between social scientists and members of the social group. A SIM is an apt ally for advocacy scholarship since it facilitates a collaboration whereby social scientists invite some members of the social group to contribute to the research design or to participate in the research process in some other way. While advocacy scholarship is hardly disinterested, its legitimacy lies in the view that it serves the purpose of improving democratic discussions (2009, p. 176). As Turner explains, “The political meaning of sociology is thus to contribute to the diversity of political discussion by helping to give voice and support to particular movements and groups” (2009, p. 177). Feminist social science is an example of a SIM that aims to make visible the social experiences and value perspectives of marginal or subordinate social groups. It has been successful in pursuing a program for scientific change (e.g., correcting gender biases and creating new research topics out of women’s experiences) and challenging normative views about scientific research (e.g., the ideal of value-free science) (Crasnow 2007).

I argue that—unlike experts, markets, governments, and deliberative mini-publics—SIMs have the capacity to extend the pool of alternative value perspectives. Those social scientists who create or take part in SIMs are often moved by dissatisfaction with the value relevance of research, or lack of it. They create a SIM in order to develop a general value orientation (e.g., gender equality) into a concrete research project (e.g., a research project on violence against women and domestic violence). Without such work, it is difficult to understand what a general value orientation, such as gender equality, means in terms of research problems, theoretical concepts, methods, and data collection. Most importantly, it is difficult to know what it takes to pursue a moral and social goal, such as equality, unless one understands how actual inequalities are generated and maintained. By converting a general value orientation into a research project or program, a SIM can communicate otherwise invisible social experiences and value perspectives to the larger scientific community and the society as a whole.

Moreover, a SIM can do this despite—or against—relations of power, which tend to suppress or distort testimonies. Due to its collective nature, a SIM can do at least four things that individual social scientists cannot do on their own. First, a SIM can empower potential testifiers by shifting the balance of power relations (Rolin 2016b, p. 17). Empowerment is the ability of an individual or a group to act in spite of or in response to the power wielded over the individual or the group by others. For example, when social scientists encounter the difficulty of gathering interview data against the forces of fear and shame, they can move from the task of articulating individual social experiences to the task of articulating shared social experiences. For many informants, the realization that their social experiences are widely shared may be a step towards empowerment. Second, a SIM can empower testifiers by transforming their self-perception (Rolin 2016b, p. 17). It can provide both researchers and the participants of research with novel opportunities for interaction and self-perception, and these changes may lead to novel ways of speaking. Third, a SIM can empower testifiers by providing them with a sense of moral and political justification for speaking in novel ways (2016b, p. 17). A SIM can restore hermeneutical justice by conceptualizing and theorizing social problems that have lacked a widely recognized name (Koskinen and Rolin 2019). Fourth, a SIM can provide inquirers with a novel epistemic community (Rolin 2016b, pp. 17–18). In such a community, social scientists can receive fruitful criticism of research that the larger scientific community has ignored. A potential obstacle to any scientific research is that its development is hindered by a lack of constructive criticism: it is met either with silence or with unnecessarily harsh criticism, which is unlikely to be helpful for researchers. Under such circumstances, a SIM can be epistemically empowering by providing researchers with an epistemic community where they can receive valuable criticism and uptake for their ideas.

In sum, due to its collective nature, a SIM can make social experiences and value perspectives visible under conditions where relations of power tend to suppress or distort testimonies. It is not my intention to suggest that all social scientists should take part in a SIM aiming to do justice to the social experiences of marginal or subordinate social groups. My proposal is rather that at least some social scientists should do so because it would increase the social responsibility of research. Nor do I suggest that a SIM will ensure that all social experiences and value perspectives have a hearing in scientific communities. Some social experiences and value perspectives may go unnoticed even by a SIM that is committed to valorizing marginal or subordinate viewpoints.

In this section, I have criticized purely procedural accounts of socially responsible science by arguing that no single procedure can guarantee that mini-publics are inclusive of the social experiences and value perspectives of subordinate or marginal social groups. Building a SIM is a strategy to bring marginal or subordinate social experiences and value perspectives into the sphere of public deliberation. The term “strategy” is meant to suggest here that it is not a straightforward procedure that scientific communities can implement. As SIMs are collective intellectual movements aiming for scientific change, creating a SIM is akin to building a social movement in the larger society.

6 Conclusions

I started my inquiry into scientific objectivity by examining the idea that objectivity gives us a permission to trust scientific knowledge claims, and I arrived at the conclusion that SIMs can play an important role in increasing trust in social research by promoting socially responsible science. Let me rehearse my steps.

I have argued that when we find ourselves in a relation of epistemic dependence, trust in scientific knowledge claims involves reliance on the knowledge claims and trust in scientists who present the claims. According to a widespread view, trustworthiness requires expertise, honesty, and social responsibility. Given this view, objectivity turns out to be a hybrid concept with both an epistemic and a moral-political dimension. The epistemic dimension tells us when scientific knowledge claims are reliable and the moral-political dimension when we can trust scientists to be socially responsible, that is, to follow “sound” moral and social values in different stages of scientific inquiry (Kourany 2010, p. 106). While few philosophers would deny that social responsibility is an important component of trustworthiness, their views diverge on the question of how scientists can receive information about appropriate moral and social values. I have called this the Proper Values Question.

In response to the Proper Values Question, I have argued that procedural accounts of social responsibility, such as well-ordered science and deliberative polling, have advantages over expert, market, and government-driven approaches. Yet, these two accounts of social responsibility are limited because they can process information merely from a pool of value perspectives that have already found their way to the public sphere. To overcome this limitation, we need to consider strategies that could extend the existing pool of value perspectives.

I have proposed one such strategy. The strategy involves SIMs that are in a position to challenge relations of power. I have argued that SIMs have the capacity to extend the pool of value perspectives by giving voice to marginal or subordinate social groups. By making the pool of value perspectives more inclusive, SIMs provide scientific communities with better information about citizens’ moral and social value perspectives and about research projects that correspond to them. To take such perspectives into account in scientific research is to be socially responsible, and to be socially responsible is one aspect of the more complex notion of being a trustworthy source of information. Insofar as scientific objectivity is about authorizing trust in scientific knowledge claims, social responsibility is one dimension of objectivity.