Naturalized Knowledge‐First and the Epistemology of Groups

This paper commences by making a case for a naturalized approach to knowledge-first epistemology. On this basis it then goes on to describe and defend a naturalized, functionalist account of group knowledge. It then contrasts this with Jennifer Lackey's (2021) account of the epistemological status of groups.


Introduction
The knowledge-first approach to epistemology has been pursued hitherto almost entirely with a focus on the epistemology of individual subjects. I argue here that knowledge-first epistemology has something distinctive to say about the epistemology of groups. Discussion of the epistemological status of groups has always been centred on the details of how a group's epistemic and doxastic states are constructed out of the mental states of the individuals in the group. Accounts of group mental states are thus typically reductive, or at the very least take group states to supervene on the states of individuals.
In this paper I first argue that we should see knowledge-first epistemology as an instance of naturalized epistemology. This leads to a distinctive functionalist account of what kind of state knowledge is. This in turn allows for a radically anti-reductive, top-down, functionalist account of group knowledge. A group can know in virtue of realising the right kind of functional structure. That conclusion does not immediately tell us what mental states the individual members of the group have to be in, but does put constraints on how they should be organised.
In her recent book, Jennifer Lackey (2021) provides a very different account of group knowledge, one that is reductive and belief-first. I contrast her account with this knowledge-first, anti-reductive account and argue that there are reasons to prefer the latter.

Knowledge-first epistemology naturalized

I argue that knowledge-first epistemology is best understood from the perspective of naturalized epistemology. The central claim of knowledge-first epistemology is that the following view of knowing is wrong:

Belief-first epistemology: The mental state kind of central interest to epistemologists is belief. States of knowing are just a subset of the set of states of believing. To know is to be in a state of belief while meeting certain other conditions X that include being true. (It is a key task of epistemology to uncover what those other conditions are.)

and that the following is correct:

Knowledge-first epistemology: The mental state kind of central interest to epistemologists is knowing. Knowing is a distinct mental state kind, different from the state of belief. (It is a key task of epistemology to show how the other subject matters of epistemology and related fields, e.g. belief, evidence, justification, and assertion, relate to knowing.)

Williamson (1995, 2000) develops the central claim by noting that knowing and certain other mental states, such as perceiving and remembering, are factive: one can perceive that p only if it is true that p. Furthermore, these other states entail knowing: if one perceives that p then one knows that p. So knowing can be characterised as the most general factive mental state: the factive mental state that one is in when one is in any factive mental state.
In articulating the central claim I have used the term 'kind', although Williamson does not. I shall explain why I do. Williamson puts some effort into arguing that the proposition that knowing is a mental state is an interesting and indeed contentious proposition. He says that while believing is a mental state, believing truly is not. And neither is knowing, according to the belief-first view, since that takes knowing to be a matter of believing truly plus some other condition (X′, i.e. X minus the truth component). On that view, 'knowing is a metaphysical hybrid, a mixture of mental states together with mind-independent conditions on the external world'.
But one might resist concluding that believing truly is not a mental state (ditto for believing truly plus X′, or believing plus X). The term 'mixture' implies something like the composition of entities. The composition of a P and a Q might well not be a P (though it might be): for example, the fusion or mixture of a turkey and a trout is not a turkey. So we cannot conclude that the composition of a mental state and a non-mental condition is itself a mental state. That seems to be the argument implicit in what Williamson says. Yet a condition such as the truth of a proposition is not entity-like at all. So it is unclear why the 'mixture' of a mental state and such a condition is not itself a mental state. Consider the mixture of an animal, a turkey, and a non-animal condition, being owned by me. My pet turkey, Tamas, satisfies this fusion. And Tamas is still an animal. When we consider a mixture of some type T and a condition X on tokens of that type, the tokens still are members of that type: instances of T∩X are still Ts. Someone who is in the state of believing truly, or believing X-ly, is still in a mental state, the state of believing.
The best and clearest way to understand what I take Williamson to be driving at is to think in terms of kinds. The notion of kind tends to be taken in a sparse way: willow warblers are a kind of bird, but the willow warblers that have visited my garden do not constitute a kind of bird; SUVs are a kind of car, but SUVs whose owners were born on a Tuesday are not a kind. While one could use a looser, abundant notion of kind, when we talk of natural and artifactual kinds, we typically use 'kind' in the sparse way just illustrated.
So I suggest that the right way to interpret the central knowledge-first claim, 'knowing is a mental state', is as asserting that knowing is a kind of mental state. In this respect knowing is like belief, which is also a kind of mental state. But beliefs that are true (and meet condition X′) do not constitute a kind any more than willow warblers that have visited my garden form a kind.
It is highly plausible that the kinds in question are natural kinds. The mental states of knowing and believing, along with other mental states, are studied by the natural sciences, psychology and cognitive science in particular. Furthermore, some of these states we attribute also to some non-human animals.
Indeed we can study the evolution of the cognitive capacities of humans and other animals that issue in belief and knowledge. Not only does this provide confirmation for the proposal that knowing and believing are natural kinds of mental state, it also provides support for the knowledge-first view over the belief-first view. It is undeniable that we have evolved cognitive capacities that function to make sure that our relevant mental states are true. An intuitively plausible belief-first story says that we have a capacity for believing, and the beliefs that we have combine with desire and similar conative states to produce behaviour. Cognitive capacities evolve to ensure that the beliefs that this capacity entertains are true or likely to be true, for true beliefs are more likely than false beliefs to lead to behaviours that are successful in achieving fulfilment of our evolutionarily advantageous desires (and evolution ensures that we tend to have strong fitness-enhancing desires: for nutrition, to secure a mate, to avoid predators, and so on).
This story provides a plausible belief-first account of the evolution of cognitive capacities: we have a capacity for forming beliefs, and in addition we have evolved distinct capacities for ensuring that the beliefs so formed are true, because such capacities are fitness-enhancing. The problem with this story is that it fails to explain the evolutionary origin of the capacity for belief itself. For what would explain the evolution of the capacity for belief independently of the cognitive capacities? Believing per se provides no evolutionary benefit. It is only believing truly that is beneficial. So there is no reason to suppose that a capacity would evolve whose function is believing per se.
A much more plausible story is that the evolving cognitive capacities are themselves behaviour-influencing, factive representational capacities, at first highly primitive, later more sophisticated. There is no distinction between these representational capacities and the mechanisms for ensuring that these representations are true.
Take, for example, the evolution of vision. The first step was the development of photoreceptors in unicellular organisms, such as the Euglenophyceae algae. In organisms employing photosynthesis, stimulation of the photoreceptors would cause the organism to move towards the light using a flagellum for locomotion. The next development was a cup-shaped depression in the outer membrane of the organism, lined with photoreceptors (an 'eyespot').
The cup shape meant that the stimulation of the photoreceptors would not be uniform: if a cup is illuminated from the right, then the left of the interior of the cup will be more brightly illuminated than the right. This allows the organism to respond to the direction of the light-source: it will be able to move more precisely towards that source than those equipped only with simple, non-directional photoreceptors.
Consider a particular individual organism with a cup eyespot, which is illuminated by a light-source on the right. The asymmetric pattern of stimulation of the eyespot causes the organism to move towards the right. According to two prominent theories of representation, the organism represents the light-source as being on the right. This is because there is a law-like correlation between the light-source being on the right and the state, S, of the organism (the asymmetric pattern of eyespot stimulation in particular). That correlation is not itself sufficient for representation. For there is also a correlation between a light-source being on the left of the organism and reflected in a mirror and the very same state S. According to teleological theories of representation (Millikan 1984), S represents the light-source as being on the right, because it is light-sources being on the right (and not light-sources being on the left and reflected in a mirror) that are responsible for the evolution of the organism's ability to be in the S kind of state. Asymmetric dependence theories (Fodor 1990) similarly claim that this is a representation of the light-source being on the right because, although light-sources on the left and reflected in a mirror also cause S, the latter correlation exists only because the former one exists, not the other way around.
So these theories of representation tell us that the organism is in a kind of state, the S kind, whose nature is to represent that p when it is the case that p. For the very existence of the S kind of state is a consequence of its ability to represent that p. If the conditions for representation had not typically held in the evolutionary past, then the organism would not have evolved the capacity to be in state S. So S is a kind of state whose nature is to be a factive representational state. Thus the evolution of cognitive capacities means that those capacities will typically issue in factive representational kinds of state.
This goes a long way towards vindicating the naturalistic claim on behalf of knowledge-first epistemology. For we have seen that evolution produces kinds of representational state that are factive by nature. It does not produce a capacity for representation and a distinct capacity for making those representations true. We have not quite fully vindicated (naturalized) knowledge-first epistemology, since the factive representational state kinds are not yet mental states. Plausibly the attribution of mental states to animals requires them to have more sophistication than our single-celled organism. Nonetheless, the argument remains the same. For the very same reasons, as vision evolves to be more complex, and as other forms of cognition evolve, including other senses, memory, and reason, and as systems evolve to integrate these, these capacities will be capacities that function to produce factive representational states. When the degree or kind of sophistication reaches a level sufficient for the attribution of 'mental', then these will be factive mental (representational) states.
What is the degree or kind of sophistication that is sufficient? It won't be necessary for what follows to give an accurate answer to that question. Nonetheless, I shall sketch a possible answer that at least shows the sort of issue that is relevant. Mental states differ from the representational states just considered in several related respects. The simple representational states just considered are more-or-less directly connected with behaviour: they start evolving as stimulus-response systems. Mental states, by contrast, can serve as inputs to reasoning. Seeing my turkey, I may reason: 'Tamas is skinny. Christmas is only a month away. I had better increase the quantity of grain I feed to Tamas'. In that reasoning it is not my visual representation of Tamas that itself plays a role. My thought 'Tamas is skinny' is not that visual representation, but is a mental state with some of the same content derived from the visual representation. The preceding discussion concluded that organisms have evolved capacities for producing factive mental states. Let us call a sub-organismal structure a '(mental) cognitive system' if it is just such a capacity with the features just described, viz. it has the function of producing factive representational states that are disposed (i) in conjunction with other mental states to produce behaviour (in some appropriate way), (ii) to be the inputs into processes of reasoning (practical or theoretical), or (iii) to transfer those representations to memory. Some organisms (including you and me) possess cognitive systems. When those cognitive systems are functioning properly, they produce in us factive mental states.
We should understand the foregoing as saying the following about knowledge. Evolutionary theory and what we observe in the evolution of specific cognitive capacities both show that we did not evolve a capacity for belief independently of those cognitive capacities. Rather, those cognitive capacities evolved with the function of producing factive states. In sophisticated animals with mentality, these are factive mental states, states produced by a properly functioning (mental) cognitive system. S is in a state of knowing when S is in a state that is the product of a properly functioning (mental) cognitive system. (The role of 'mental' in parentheses is to indicate that the cognitive structure in question has the sophistication described in the preceding paragraph; henceforth that shall be taken as given.)

The naturalistic approach just articulated is functionalist in a strong sense. Standard (e.g. Turing machine) functionalism regards mental state types as dispositions whose stimuli are other mental states and other causal factors such as external stimuli, and whose manifestations are other mental states and/or behaviour. The approach taken here takes mental states not to be mere dispositions, but functions in a biological sense: the heart is disposed to make a faint beating sound, but its function is to pump blood. As Elliott Sober (1990) points out, functionalism looks very different (and better able to resist criticism) when one puts the 'function back into functionalism'. Imagine that we were to come across some hitherto unknown, perhaps alien life-form. How would we decide whether it was appropriate to ascribe states of knowing to individuals of this kind? We should try to discover whether the life-form possesses cognitive structures that function (in this strong sense) to produce factive states. If together these cognitive structures form a cognitive system with the right kind of sophistication, then the organisms in question are capable of forming factive mental states, i.e. they can know. I shall call this stronger version of functionalism, as advocated by Sober, 'naturalized functionalism'. Note that parallel points can be made with respect to artifactual functions: my car is disposed to make an alarming rattling sound above 70mph, but its function is to convey people from place to place. Naturalized functionalism should embrace these artifactual functions also. We can regard these artifactual functions as functions identified by social science, hence it is still naturalism. It is not necessary here to give a detailed and fully general account of function that is strong in the sense discussed and which encompasses both biological and artifactual functions. It is sufficient to point out that functional properties are ones that have been selected for, where selection can be natural selection or selection by design (Kitcher 1993).
This section has argued that the best way to understand and support the claims of knowledge-first epistemology is via naturalized functionalism. What we know of evolution endorses knowledge-first rather than belief-first epistemology. Knowing is a natural kind of mental state, and not a subclass of some other mental state (belief), because animals have evolved unitary cognitive capacities for producing factive mental states. In the next section I shall use this conclusion to develop an approach to understanding the epistemic status of groups.

Groups that can know
An increasingly important strand of social epistemology investigates what it is for a group to know. In this context the term 'group' has tended to be used inclusively to encompass a wide range of social entities that have individuals as (in some sense) parts, from institutions to scientific communities to courts to groups of friends. Bird (2014) promotes what he calls the 'analogy' approach to thinking about group states: a group is in a state X when it is in a state that is analogous to the human state X. But he does not spell out the conditions of analogy in any detail, save to say that 'analogy' is intended in a strong sense: the two analogues are to be thought of as instances of the same more general type. In this section I show that the naturalized, functionalist knowledge-first epistemology of the preceding section allows us to add the required detail to Bird's analogy approach. In short, we have seen how organisms have evolved cognitive systems, systems whose function is to produce factive mental states. Functionalism allows us to extend what counts as a cognitive system from organisms to other entities, including groups. Any entity, including a group, will be in a state of knowing if it has a properly functioning cognitive system. A cognitive system is defined as a system whose function (in the stronger sense) is to produce factive representational states. I will call this the cognitive system approach to group knowledge.
Two key features of standard functionalism will be particularly relevant in what follows: (i) relations between states and between states and behaviour, and (ii) multiple realizability.
Functionalism tells us that the nature of a mental state is given by its relationship to other mental states (as causes and effects) and to behaviour. So the nature of belief is paradigmatically explained by its being brought about, e.g., by sensory experience and by its being disposed to cause behaviour (in conjunction with desire), as well as changes in other mental states (a new belief will cause other beliefs to change). While naturalized functionalism applied to knowledge-first might have additional things to say about belief (see below), the general picture remains. States of knowing are brought about by the stimulation of particular cognitive capacities (from vision to reasoning), whose function is to bring about factive representational states. States of knowing are disposed to bring about behaviour, for it is by bringing about behaviour that is likely to be successful that a cognitive system is adaptive. The relationship to behaviour might be direct or it might be indirect. Via reasoning, states of knowledge may bring about further such states which themselves will cause behaviour. They are also disposed to cause the faculty of memory to store what is known, so that facts that come to be known now may influence behaviour in the future. So we should expect a social group that knows not simply to be in a factive representational state, but for that state to be linked, dispositionally, to group action, to a group analogue of reasoning, possibly some analogue of memory, and so on.
Functionalism also tells us that mental states are multiply realizable. The physical character of a Martian brain might be very different from that of a human brain, but both may be in the same state (e.g. of desiring that p) if those states both occupy the same functional role. Functionalism allows us to contemplate the possibility that a sufficiently sophisticated robot might also be in that same mental state, in virtue of its silicon and copper structure occupying that very same role. The key proposal of this paper is that this naturally extends to complex social structures. If a social structure fulfils that role also, it too can be in that mental state.
While multiple realizability is a strength of functionalism, it has also been claimed to be a weakness. Block's (1978) 'China brain' objection to functionalism considers the possibility that the very large population of China could be organised in such a way that its information flow and coordinated activity might match that of a human brain in functional respects. Functionalism is committed to allowing that the population of China thereby would, collectively, be having thoughts and other mental states. Block takes this to be absurd.
What Block regards as a bug in functionalism, I hold to be a feature. It neatly explains how it is possible for groups to have genuine 'mental' states. To borrow Bird's expression, the analogy between human state and group state is not a 'mere' analogy; it is the more significant analogy that any realization of a functional state type bears to any other realization of that type: analogy here is functional isomorphism.
The stronger, naturalized functionalism that underlies the cognitive system approach adds some nuance to, and constraints on, standard functionalism. Imagine that by some extraordinary coincidence the people of China found themselves in a set of relations to one another that happen to realize a relevant functional state. Call that case Accident. Now imagine a contrasting case where President Xi organises the people to be in that state in order to carry out that very function. Call that case Design. Naturalized functionalism should deny that Accident instantiates a mental state, while affirming that Design does. For the functional state in Accident is merely a disposition, not a function in the strong sense, whereas the coordination of the Chinese people, structured by the Chinese Communist Party, in Design is an artifactual function.
It is important to note that the multiple realizability of functionalism (both standard and naturalized) is not simply a question of the matter that realizes the function, but also of the manner in which the function is realized. An insect's compound eye and a mammalian eye are two structurally very different ways of achieving the same functional goal, vision. They are both eyes nonetheless. So the naturalized functionalism regarding group knowledge that is proposed here tells us little about how a group should be organized to be in a state of knowing. It is compatible with there being a variety of ways in which a group is organised in order to know.
In contrast, accounts of group belief and knowledge have hitherto focussed on the individuals in a group and how they should be organised, or on what conditions they should meet, for it to be true that the group has a belief or knowledge. The model of group belief presented by Margaret Gilbert (1987; 2013) involves a group collectively committing to let the relevant proposition stand as the view of the group. Her discussion makes it clear that in good cases (those that lead to justified group belief or knowledge) such commitments are the outcomes of collective discussion and deliberation. Bird presents a very different model of the structure of group belief and knowledge when considering the possession of scientific knowledge by scientific teams and communities. Drawing on Hutchins's (1995a; 1995b) discussion of 'distributed cognition', Bird argues that such cases are characterised by the division of labour: different cognitive tasks are distributed among the group (e.g. community) and, in virtue of the accessibility to the whole group of knowledge generated by individuals or sub-groups, the group as a whole is in a group state of knowing.
The naturalized functionalist approach to group knowledge can acknowledge that both models are correct models of certain important instances of group knowledge. In both cases it is possible for groups so organised to constitute cognitive systems. The different organisations might be appropriate for different cognitive tasks. The model of a committee whose members all receive the same information and then collectively discuss the propositions in question to reach a shared conclusion is appropriate when the evidential relation between the information and the propositions is not straightforward and it is not easy for an individual to draw the right conclusion. Multiple minds looking at the same information may reduce the probability of drawing an erroneous conclusion. Students of organisation science look at how the constitution of such groups can improve their decision-making reliability (e.g. concluding that this is aided by diversity and inclusion; Rock and Grant 2016). Such cases, and cases where the 'wisdom of crowds' is operative, are cases where Gilbert's model describes a cognitive system in the sense used above: the system operates with the function of producing factive representational states (it aims to get to the truth about the matters it discusses; its design is intended to facilitate this) and those states can be used as the causes of behaviour (such as a new policy, a decision to invest, etc.), or as the inputs into further processes of reasoning (a recommendation is made to a higher committee), or transferred to 'memory' (an investigation's conclusions are recorded in a report).
But for some purposes we might not think that a committee with shared deliberation is the best form of organisation. Imagine that the following is true: (i) a large amount of data needs to be gathered and analysed in a short period of time, (ii) much of that data is not easily analysed by non-specialists, (iii) the analysis of the data by a relevant specialist is not contentious (all specialists would agree on the correct analysis), (iv) the group in question contains specialists for all the areas of interest, but there is no-one who is a specialist for all areas. In such a case a natural way to organise the cognitive work is to allocate different data gathering and analysis tasks to the different specialists, and then to have a system of transmitting the results to those other members of the team who need them. Under such conditions the distributed cognition model of Hutchins and Bird will describe a cognitive system. Hutchins describes the cognitive practices of a warship's crew (in navigating the ship) and of the cockpit of a commercial airliner (in coordinating speed, altitude, and flap configuration when landing). Bird proposes that scientific groups and communities are also such cases. All these instances are characterised by the division of cognitive labour, with contributions to the overall cognitive task made by specialists undertaking distinct subtasks. These and the preceding committee-style cases are all cognitive systems because at the coarse-grained level they are functionally the same: organised with the function of producing true representations. They differ at the fine-grained level in the manner in which this function is achieved (just as insect and mammalian eyes differ).
We have just seen that naturalized functionalism says that a group (or other entity) can know if it possesses a cognitive system. Different models of group knowledge can be accommodated so long as they are models of a cognitive system. But this also means that those models are mistaken insofar as they are presented as accounts of what group knowledge is (as opposed to examples of how group knowledge can be achieved). We will now turn to those accounts that purport to tell us what the nature of group belief and knowledge is, in order to see how the present approach differs and why it is preferable.
These other approaches to group knowledge and belief tend to focus on the individual as a knower/believer (or agent), and ask how it is that combining individuals and their states can preserve what is true of the individuals at the group level. The simplest view is what Lackey (2021: 21) calls the 'conservative' summative account:

(CSA): A group G believes that p if and only if all or most of the members of G believe that p.
Even if not summative, such approaches tend to take summativism as their starting point. For example, as we shall see shortly, Lackey (2021) regards summativism as close to being correct, but holds that a further condition is needed to exclude cases where the proposition in question is believed for different and incompatible reasons among the members of the group. Gilbert (1987; 2013) is slightly different. She recognizes the importance of the question: what makes a group a group (what I would call the special composition question for groups)? So commitment is important for her (also Tuomela 2004). But commitment is still a matter of individual attitudes. List and Pettit (2011) are concerned with the case where most members of the group but not all believe that p, since this can lead to the discursive dilemma, which arises when consistent sets of beliefs among group members lead to an inconsistent set of propositions when aggregated at the group level. So they require some mechanism for avoiding this.
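The discursive dilemma can be made concrete with a toy sketch. The members, propositions, and simple majority rule below are illustrative assumptions in the spirit of List and Pettit's doctrinal paradox, not their formal framework: three members each hold an internally consistent view on p, q, and their conjunction, yet proposition-wise majority voting yields an inconsistent group 'belief' set.

```python
# Doctrinal-paradox illustration: each member's view is internally
# consistent, but aggregating proposition-by-proposition by majority
# produces an inconsistent set of 'beliefs' at the group level.

members = {
    "A": {"p": True,  "q": True,  "p_and_q": True},
    "B": {"p": True,  "q": False, "p_and_q": False},
    "C": {"p": False, "q": True,  "p_and_q": False},
}

def consistent(view):
    # A view is consistent iff its verdict on the conjunction matches
    # the conjunction of its verdicts on p and q.
    return view["p_and_q"] == (view["p"] and view["q"])

def majority(prop):
    # Proposition-wise majority aggregation, in the spirit of (CSA).
    votes = [view[prop] for view in members.values()]
    return votes.count(True) > len(votes) / 2

assert all(consistent(view) for view in members.values())

group_view = {prop: majority(prop) for prop in ("p", "q", "p_and_q")}
print(group_view)              # {'p': True, 'q': True, 'p_and_q': False}
print(consistent(group_view))  # False: the group 'beliefs' are inconsistent
```

On the cognitive system approach sketched above, a group that aggregates in this way cannot reliably achieve truth as its function, which is one way of seeing why such a structure fails to count as a cognitive system.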
The contrasting approaches are bottom-up. In effect they all ask how it is that group attitudes supervene on relevant attitudes of individuals. My approach is top-down. It takes knowing to be the resulting state of a properly functioning cognitive system and places some general conditions on what it is to be a cognitive system. It says (so far) nothing about what the attitudes of individual members of the groups may be. If, somehow, the individuals can together form a supra-individual cognitive system, then there is social (non-individual) knowing. This approach can nonetheless say what these other models are getting right, insofar as we read them as telling us what a system of aggregating individual beliefs would have to be like if it is to form a cognitive system. To be a cognitive system, a structure must have the function of producing factive representational states, i.e. outputting true propositions. A system that falls foul of the discursive dilemma is one that ends up with inconsistent 'beliefs'; it therefore cannot be a cognitive system, since its design cannot achieve the aim of truth. Likewise a system that allows its 'beliefs' to be determined by inconsistent sets of reasons may not be reliable in producing true output states. A set of individuals that lacks unity will not be a system with a function. (I have put 'belief' in scare-quotes since these are putative beliefs. They are not in fact beliefs, since the systems in question are not cognitive systems. A belief is the state a cognitive system is in either when it is functioning properly or when it fails to function properly, 'botched knowing' in Williamson's terms (2000: 47).) Gilbert is right that in some cases (though far from all) that unity can be achieved by a shared set of commitments. But, as Bird has shown, that is not necessary, since the unity of a cognitive system can be achieved by the integration of parts (persons) in a distributed cognitive system. So the bottom-up approaches all correctly describe ways of being a cognitive system, or constraints on particular ways of being a cognitive system. But they are all wrong in thinking that there is some particular way of organising persons so that they are a knowing group.

Lackey's epistemology of groups
In this section I introduce Lackey's (2021) epistemology of groups. As we will see, Lackey's approach contrasts with the one I am promoting in this paper in two key respects: it is bottom-up whereas my view is top-down, and it is belief-first whereas my view is knowledge-first. The contrast with Lackey's view is therefore helpful, not only because Lackey's account is an important, well-argued position, the best option if one is to take a bottom-up, belief-first approach, but also because the contrast allows us to see some of the key differences between the approaches and, I argue, the advantages of the top-down, knowledge-first approach.
Lackey starts by giving her Group Agent Account of group belief:

(GAA): A group, G, believes that p if and only if: (1) there is a significant percentage of G's operative members who believe that p, and (2) are such that adding together the bases of their beliefs that p yields a belief set that is not substantively incoherent.
This is clearly very similar to (CSA). The important difference is clause (2) of (GAA). The idea here starts from consideration of a group all of whose members believe that p, and who therefore, by (CSA), believe as a group that p. Imagine not only that the members of G have different reasons for believing p but also that these reasons are incompatible with one another. For example, patriotic Pete and revolutionary Rachel both believe that a Trump victory is best for the future of America, Pete because he thinks Trump will make America great again, and Rachel because she thinks that Trump's election will precipitate the proletarian revolution. Lackey holds that if S has a belief, then it must be possible to evaluate that belief as rational or irrational, justified or unjustified, undefeated or defeated, and so on. She says that an incoherent set of bases for the individuals' beliefs is fragile. In this case, this 'base fragility' means that it is not possible to identify the group's reason for belief. If we were to say that the group Pete+Rachel believes that a Trump victory is best for the future of America, we should be able to ask what the group's reasons for this belief are. But this we cannot do. In addition, the fragility of the belief base means that group deliberation is undermined. Pete's belief will motivate him to campaign to persuade blue-collar American voters of the correctness of Trump's policies. But Rachel will not be so motivated. Since coherent deliberation and action on the basis of the proposition is undermined, we cannot attribute belief with that content to the group. It is for these reasons that Lackey rejects (CSA). The (GAA) avoids the problems caused by base fragility by requiring its absence as a condition of group belief. This is what is achieved by clause (2).
Lackey's Group Epistemic Agent Account of justified group belief builds on (GAA):

(GEAA): A group, G, justifiedly believes that p if and only if: (1) A significant percentage of the operative members of G (a) justifiedly believe that p, and (b) are such that adding together the bases of their justified beliefs that p yields a belief set that is coherent. (2) Full disclosure of the evidence relevant to the proposition that p, accompanied by rational deliberation about that evidence among the members of G in accordance with their individual and group epistemic normative requirements, would not result in further evidence that, when added to the bases of G's members' beliefs that p, yields a total belief set that fails to make sufficiently probable that p.
The details of (GEAA) will not be important to this paper, except to note that it clearly belongs to belief-first epistemology. (GEAA) says that a group has a justified belief that p when (GAA) is satisfied, i.e. it believes that p, plus two other conditions: (i) its relevant members are justified in their individual beliefs that p, and (ii) the strengthened anti-fragility condition, (2), is satisfied. And again, Lackey's view is very much bottom-up. The bulk of the work in justified group belief is done by the individual beliefs of group members and the justifiedness of those individual beliefs. Lackey does not give an account of group knowledge. And that is quite reasonable. For the problems in getting from justified belief to knowledge are well known and there is no reason to suppose that they are any different at the group level. Lackey has focused on the questions that are distinctive at the group level: what is group belief? what is group justification? It is a clear inference from what Lackey says that she holds that group knowledge entails true, justified group belief.
To see one immediate difference between Lackey's view and mine, consider the following scenario. A team of police officers is investigating a criminal suspect. The team's chief gives each of her detectives a specific task: investigating the suspect's associates; looking into his finances; going through his social media and electronic communications; liaising with other agencies for any information they may have. The team meets each day to share the information gathered. The chief asks 'What do we know about the suspect?' On Lackey's view, this will usually be an inappropriate question (strictly speaking) in advance of the information being shared. At the time of asking the question, each individual knows only what she or he has discovered. So the team has no group beliefs about the suspect and so no group knowledge. Regarding group knowledge, strictly she should be asking 'what will we know (once you have shared what you individually know)?' On the cognitive system view of group knowledge, the police chief's question is entirely appropriate. Thanks to the distributed nature of cognition in this case, the group can know in virtue of each individual knowing, so long as that individual knowledge is connected to the rest of the group as part of a properly functioning cognitive system. Here that is exactly the case. The daily meetings, as well as other mechanisms, mean that information gathered by one member of the team is circulated to other relevant members of the team. A daily pooling of information is one way of doing this. But even that process could be more distributed. If each member has a good idea of what the other members are working on, they can share information on a one-to-one basis when they realise that what they know might be needed by some other member of the team. Now imagine that it is discovered that the suspect has a connection with an overseas criminal. A foreign police officer might be invited to join the meeting to share information. In such a scenario it would usually not be appropriate for the team's chief to include the guest within the 'we' of 'what do we know?'; the question would more suitably be 'what do you know?' That is because the foreign police officer is not a member of the team, and so is not a part of that cognitive system.

Which groups have beliefs?
Lackey draws a three-way distinction between deliberative groups, non-deliberative groups, and mere collections. Mere collections are arbitrary sets of individuals: all the people born on a Tuesday in a leap year, for example. Deliberative groups are those that can engage in group deliberation. Non-deliberative groups do not engage in group deliberation. They are distinct from mere collections because they share some property of significance, either to the members of the group or to someone outside of the group. Non-deliberative groups, says Lackey, 'can simply be brought into existence through internal or external interest'. Both deliberative groups and non-deliberative groups can have beliefs.
Lackey gives this example of left-handed students at Northwestern University:

I send out a survey for all left-handed students to fill out and, after receiving the results, I aggregate their judgments via a supermajority procedure. I then report on this basis that left-handed Northwestern students believe that the campus is not suitably sensitive to their particular needs. It is not uncommon to think that there is nothing strained or mistaken about this belief attribution: as a group, left-handed Northwestern students do hold this belief, just not in the same way that a group capable of collective reasoning might do so.
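The supermajority procedure Lackey describes can be sketched as a simple aggregation rule. The two-thirds threshold and the sample responses below are my own illustrative assumptions; Lackey does not specify a particular threshold:

```python
def supermajority_believes(judgments, threshold=2/3):
    """Attribute the belief to the group iff the fraction of
    'yes' judgments meets the supermajority threshold."""
    return sum(judgments) / len(judgments) >= threshold

# Hypothetical survey responses: True means the student judges that
# the campus is not suitably sensitive to left-handed students' needs.
responses = [True, True, True, True, True, False]
print(supermajority_believes(responses))
```

On Lackey's picture, a procedure of this kind suffices to fix what the non-deliberative group believes; the argument that follows is that such a purely aggregative procedure is too thin a basis for genuine group belief.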
In my view this is not an example of group belief. Indeed Lackey's supposed attribution of group belief is: 'left-handed Northwestern students do hold this belief'. But this is a plural claim. If Lackey were right that this is a belief attributed to a group entity, one would expect the singular. Instead what we have is a statement of a generic.
Mere external interest is surely insufficient to transform a mere collection into a group that can have beliefs. It makes it too easy to define a group. For example, the collection consisting of all people who believe that p should be a mere collection, for most p. So long as I can concoct some reason for being interested in such a collection, it will become a group. So, on Lackey's view, merely being interested in people who believe that p is enough to make it the case that there is a group that believes that p. That is clearly false. Not all non-deliberative groups (as defined by Lackey) can have group beliefs.
One might think that the capacity for deliberation is a better place, within Lackey's framework, to draw the line between the groups that genuinely exist and those that are not truly groups but mere collections. Recall that Lackey's reason for including clause (2) in her Group Agent Account (GAA) of group belief, the clause that excludes fragile bases of belief, is that fragile bases present problems for the possibility of group deliberation. That motivation could not apply in the case of non-deliberative groups. So there is good reason within Lackey's own framework for denying that non-deliberative groups can form beliefs. Nonetheless, I do not think that proposal leads to a satisfactory account of which groups are genuine groups that possess beliefs. For some groups involved in a cognitive task might be organised on a non-deliberative basis. The police team considered in the preceding section is an example of this. Imagine that we are setting up a team that has a cognitive task, possibly among other tasks. For some purposes it might be optimal for all members of the team to consider and debate all the propositions in which the team has an interest, so that the group view is some aggregation of the individual views, for example as Lackey's (GAA) proposes. However, as we saw above in Section 3, there can be conditions under which it is preferable to organise the group in a distributed way, with experts or specialists taking on distinct cognitive subtasks. Under the conditions mentioned, the group neither needs to engage in group deliberation (a single specialist can establish the truth of relevant propositions for the whole group), nor could it do so (only the specialists can have a rational opinion on the relevant propositions). Note that even if the latter were not the case, so that everyone could have a view on any proposition, it will still be effective and efficient to divide the cognitive labour so that in fact only a few individuals have enough information to form an opinion on any proposition. The value of the division of cognitive labour works against group deliberation.
The cognitive system view of group belief and knowledge advocated in this paper fares better than Lackey's view in accounting for which groups can be possessors of group belief or knowledge. Note first that the cognitive system view can agree with Lackey that deliberative groups can be groups with beliefs. It can also agree that a fragile basis for the members' beliefs can undermine the possibility of group belief. For group deliberation is one way for a group to be organised as a cognitive system, and for some kinds of cognitive problem it can be a very effective structure. (The relevant conditions obtain when forming a correct opinion on the basis of the available evidence is not easy and when many members of the team are able to have an informed opinion on the relevant propositions.) Base fragility is a problem for group belief for the reasons Lackey gives. If a team is intended to function as a cognitive system by group deliberation, then that team's ability to get to the truth can be compromised when its base is fragile. First, as Lackey describes, deliberation will be undermined. Secondly, base fragility implies that the bases of some individuals' beliefs are false. If somehow deliberation does proceed, this falsity threatens the ability of the group to reach knowledge. The deliberating group may reach a false conclusion if some of the individuals' bases are false. Even if the conclusion reached by the group is true, it may not be knowledge, since the false individual basing beliefs may act as defeaters or false lemmas. According to the cognitive system view, for a group to have a belief it must possess a cognitive structure of a type that is capable of reaching knowledge. If a group is quite likely to have a fragile basis for its members' beliefs because it has no filter for promoting coherence of base beliefs, then even if its deliberations do end in agreement on a proposition that is in fact true, that agreement will not amount to knowledge. If the structure is incapable of reaching knowledge for that reason, then the structure is not a cognitive structure in the sense being employed here. In that case the agreement of the group does not amount to group belief. So I concur with Lackey, to an extent, that groups with fragile bases may not be group believers or knowers, though for different, albeit related, reasons.
Note that the cognitive system view does not imply that deliberative groups with fragile bases can never have group beliefs or knowledge. Group deliberation may be robust enough to permit some inconsistency in the bases of individuals, and if so that deliberation can lead to knowledge. In general, a cognitive system does not have to be perfectly reliable. It can have limitations and it can have one-off failures, yet still be a cognitive system that produces beliefs and, in the right circumstances, knowledge. Imagine the following: the internal intelligence agency, MI5, has a practice of briefing all members of a team with the same information, in order to prevent base fragility. One such team is briefed with the assertion that Oleg Gordievsky is a Soviet spy. But one member of the MI5 team is also briefed by the external secret intelligence service, MI6, that Oleg Gordievsky is a double agent: he is a British agent, not a Soviet spy. This information cannot be shared with the rest of the MI5 team for operational reasons. So the MI5 team has a fragile base for propositions such as 'Gordievsky is a threat to British security'. Nonetheless, the MI5 team can believe that proposition. It is the proposition that informs its decisions and so forth. So a structure can be a cognitive system yet fail, on occasion, to have a coherent set of bases of belief. Yet if the structure had no mechanism for reducing the probability of base fragility, was riven by internal factions and disagreements, and had members who got their information from different and inconsistent sources, then such a structure could not achieve knowledge and so would not be a cognitive structure. It would thus not be capable of belief either. If by coincidence all the members of such a group shared some belief, that would not be a belief of the group.
To conclude: I have argued that Lackey's view has difficulty in determining correctly which groups can have beliefs. Her official view allows any collection to be a group that can have a belief so long as someone has some reason for regarding that collection as a group. That is far too liberal. A restriction to those groups that can engage in deliberation would be better motivated by Lackey's reasons for including the anti-base-fragility clause in (GAA). But that restriction will exclude those groups that do not engage in group deliberation because they are structured by the division of cognitive labour, such as the police team. The cognitive system account fares better. A group can have a belief if it has a cognitive system. The police team is structured so as to fulfil a cognitive function. The left-handed Northwestern students are not. So the former can have beliefs but the latter cannot (as a group). The cognitive system view gives partial endorsement to Lackey's anti-fragility requirement in the case of groups organised to engage in deliberation. That is because if a deliberative group is to be a cognitive system, it will need to exclude (highly) fragile bases, especially if its deliberative processes are not robust. But the anti-fragility requirement is not a general requirement on group belief, since not all social cognitive systems are organised as deliberative groups.

Group knowledge and action
Since it adopts a top-down approach, the cognitive system view does not directly say anything about the mental states of the individuals in a group that knows. In a group whose cognition is achieved by deliberation, it is likely that there must be a close connection between the states of knowledge and belief of the individuals and those of the group. But in groups whose cognition is achieved by division of labour there need not be such a close connection. Indeed Bird (2010, 2014) argues that it is possible for a group to know without any individual member of the group knowing. In this section I discuss Lackey's objection to Bird's claim, and conclude in Bird's favour.
Bird shows that science uses extensive division of labour. His example concerns the group that is a particular scientific community, though he also argues that the point holds for the scientific community as a whole (Bird 2010: 32):

Dr. N. is working in mainstream science, but in a field that currently attracts only a little interest. He makes a discovery, writes it up and sends his paper to the Journal of X-ology, which publishes the paper after the normal peer-review process. A few years later, at time t, Dr. N. has died. All the referees of the paper for the journal and its editor have also died or forgotten all about the paper. The same is true of the small handful of people who read the paper when it appeared. A few years later yet, Professor O. is engaged in research that needs to draw on results in Dr. N.'s field. She carries out a search in the indexes and comes across Dr. N.'s discovery in the Journal of X-ology. She cites Dr. N.'s work in her own widely-read research and, because of its importance to the new field, Dr. N.'s paper is now read and cited by many more scientists.
Dr N.'s discovery is a contribution to scientific knowledge when he publishes it, and also when Prof. O. later cites it. What is the state of the scientific community's knowledge at t, during the intervening period, when Dr N. and the others who read the original paper are dead or have forgotten it? It is wrong to think that just because certain people have died or forgotten something, the state of scientific knowledge of the community has changed. For that piece of information was available to Prof. O. exactly when she needed it. (It would not have made any difference had Dr N. still been alive.) The information did not become part of scientific knowledge for a second time when Prof. O. read the journal. It remained part of science throughout. That being so, we have, at t, a case of the scientific community having knowledge that no individual has.
Jennifer Lackey (2014, 2021) and Jonathan Birch (2023) deny that this conclusion can be correct, thereby undermining Bird's social knowing view that produces it. By extension their criticism undermines the cognitive system view developed here.
The objection proceeds thus. Let G be a group that knows that p where the individuals in G do not know that p. Let the individuals in G act as if it is the case that p, e.g. they all irrationally believe that p and act, collectively, accordingly. Such actions would be irrational actions performed by G. Yet how can an action performed by G on the basis that p be irrational if G knows that p? The circumstances just described conflict with principles relating knowledge to rational action, such as:

Knowledge/Action Principle: S knows that p only if S is epistemically rational to act as if p, or, equivalently, S is epistemically rational to act as if p if S knows that p. (Lackey 2021)

and:

Reason-Knowledge Principle: Where one's choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting iff you know that p. (Hawthorne and Stanley 2008)

Since G's collective action is clearly irrational, we should conclude that G does not know that p. Hence Bird's claims about social knowing are mistaken.
Here is an example to illustrate the point, lightly adapted from Lackey. Let us take the story of Dr N.'s discovery as given. Consider the intervening period when no individual knows of the discovery. Bird claims that the relevant scientific community nonetheless does have knowledge. Let the discovery be the claim that enzyme E plays an important role, R, in the development of cancer cells. Now imagine that an acknowledged world-leading expert in cancer biology, Dr P., is asked, at the foremost conference in this field, a question about the relationship between E and cancer. Dr P. responds, on the basis of only the flimsiest hunch and no evidence whatsoever (that he has), that E plays role R in cancer cell development. Dr P.'s reputation is such that he is regarded as the spokesperson for this scientific community. That position, his reputation, and the conviction with which he made the assertion mean that it soon becomes widely accepted that E plays role R, still with no evidence. Actions are performed on this basis: research teams perform experiments predicated on this proposition and pharmaceutical companies start developing drugs on the assumption of its truth.
Clearly, say Lackey and Birch, community actions of this sort would be irrational. And I agree. But if Bird's claim about the case of Dr N. is correct, then the two principles above lead to the conclusion that the community is rational and that the fact that E does play role R is a reason for the community to act as it does. So if the principles are correct, we should conclude, by modus tollens, that Bird's claim about this case is in fact wrong, and that the community does not know that E plays role R.
Something like these principles must be true. Indeed it is built into the functionalism of the cognitive system view that there is a close connection between being a system capable of knowing and the ability to use the propositions that are known as the basis of action. Nonetheless, these principles as stated are false, regarding both individuals and groups.
It is possible to act irrationally for the reason that p while also knowing that p, when one's knowledge that p plays no role in motivating that action. For one can know something, but fail to act on that knowledge, even when one ought to. This is illustrated, in the individual case, by the following example. S has fallen under the influence of a (fake) nutrition guru, so that S acts without further reflection on what the guru says. What the guru tells S is made up at random and so is mostly false, e.g. 'eating 100g of grass a day will prevent tennis elbow' or 'injecting disinfectant will cure covid'. But by chance the guru says something that is true: 'drinking beetroot juice can lower your blood pressure'. It so happens that S was told this some time ago by someone else who is entirely reliable and whom S still trusts. So S knows this proposition. Had S, before listening to the guru about beetroot, been asked 'is drinking beetroot juice beneficial?', S would have reflected for a moment or two and then recalled the fact he knows. But as it is, S simply acts on what the guru says and engages in no reflection at all, and so does not engage in the reflection that would have told him that he already knew this. So we have a case where S treats p (beetroot juice can lower your blood pressure) as a reason for action and S knows that p, but S is irrational and does not appropriately treat the proposition that p as a reason for acting. In this case S's knowledge is not accessed and so plays no role. We should conclude that the knowledge-action principles given above are not strictly correct and need amendment. The following is closer to the truth:

Correct Knowledge-Reason-Action Principle: If S knows that p and acts on the basis of this knowledge, then S's action is an epistemically rational action and S appropriately uses p as a reason for this action.
The Correct Knowledge-Reason-Action Principle raises no problem in the case of Dr P. and his community's irrational actions, even assuming, as Bird claims, that the community does know (in virtue of Dr. N.'s publication) that E plays role R. For in that example, the community's knowledge played no role in its actions. That case is exactly like the case of S and the guru. S's actions were irrational despite his having relevant knowledge, since that knowledge played no role in causing or motivating S's action. So Bird's claim, and consequently the present cognitive system view, can be defended against this criticism that employed the connection between knowledge and rational action. In fact, I think, the boot is on the other foot. That is, Lackey's own (GAA) is in tension with these principles. Consider this example. It is early 1961 in East Germany. West Germany's Wirtschaftswunder (economic miracle) has caused many East Germans to feel left behind economically, while the East German government restricts political freedoms and uses the Stasi to spy on its population. It is rumoured that a wall is being built to prevent East Germans leaving for the West. The members of the work brigade at the railway carriage factory all individually know the following propositions: 'Life would be much better in West Germany', 'We have poor pay and working conditions', and 'Our political freedoms are restricted'. However, they all suspect that one or more members of the brigade might be Stasi informants (although, unusually, this is in fact not the case in this instance). So they never share their views on these propositions; indeed, out of fear of the Stasi, each says things to the others inconsistent with these propositions. And so collective action on the basis of these 'mutually known' propositions is impossible.
Note that (GAA) tells us that the work brigade believes the three propositions. They all believe those propositions and the bases for these beliefs are not fragile: they all have the same reasons for believing them. Yet this seems wrong; the work brigade does not believe them as a group. And a key reason is that what they all believe individually cannot be used as a reason for group action. While this set of circumstances is not strictly inconsistent with the three knowledge-reason-action principles, because there is no actual group action related to p, and a fortiori no irrational action, there is nonetheless a tension. For if action is impossible, in what sense is knowledge even potentially a rational guide to action?
It is important to be clear about the kind of 'impossibility' at issue here. A paralysed individual may be incapable of physical action. But such a person may still be capable of mental actions and may still be disposed to act. Our work brigade is not like this. It does not engage in mental actions, nor is it disposed to act. Consider a different case, a group of POWs in Colditz Castle. They consider, collectively, possible plans for escape; there is group deliberation, perhaps some division of labour to explore whether particular guards can be bribed, and so on. In the end they conclude that Colditz is too well guarded and that no escape is possible without being killed. This group is unable to engage in physical action (escaping), but it is able to engage in mental action (deliberating on various ideas for escape) and is disposed to act (had some less risky possibility for escape been available, they would have acted on it). The work brigade is clearly not like this: it cannot even engage in group deliberation or any other kind of group cognition. It is not like the paralysed person; it is more like a person whose brain has been divided into multiple fully independent parts that do not even communicate with one another. That is not a description of a person who can have beliefs.
To conclude: the cognitive system approach to group knowledge taken here endorses Bird's model of 'social' (group) knowing in science as one possible way for group cognition to be organised. If Lackey's and Birch's criticism of that model were correct, it would be a strike against the current approach. Bird's model can be defended because the principles appealed to by Lackey and Birch are not quite correct, in a way that is crucial to their argument. Both an individual and a group can know that p and yet be irrational in acting on p if their knowledge is causally quiescent with respect to that action. Both Lackey's version of the Dr N. story and my story of the individual influenced by a guru are cases of genuine knowing where that knowing is causally isolated and inactive, and so does not undermine the claim that the group or individual acts irrationally.
The approach I take here does allow for an important connection between knowledge and rational action: its account of a cognitive system says that a cognitive system is one whose products can be used as the inputs to action or practical reasoning. I noted that, in contrast, Lackey's account of group belief is consistent with there being no co-ordination among members of a group. So a group could have a belief yet be incapable of action because co-ordination on action is impossible. While not inconsistent with the three principles discussed above, this consequence does show that the link between belief and action is very weak indeed on Lackey's view: too weak to be plausible.

Conclusion
This paper has argued for a new approach to thinking about the epistemological status of groups. Groups are potential knowers if they are, or have, what I have called a cognitive system. The characterization of a cognitive system puts constraints on what a group has to be like in order to know, but does not mandate any particular structure. Both deliberative, committee-like groups and groups with distributed cognition can be cognitive systems (if organised in the right way).
This approach could be married to a standard functionalist account of mental states. For it is functionalism that allows that a mental state does not depend on what the stuff is that realizes that state. It also supports the analogy between states of humans and the states attributed to groups: they are functionally isomorphic.
However, I have argued that the cognitive system approach to group knowing is better supported by a stronger, naturalized functionalism. According to this view the 'functions' of functionalism are also functions in the stronger sense employed in evolutionary biology. This permits us to see cognitive systems as structures that have the function of producing factive representational states. This also supports a knowledge-first approach. Both individuals and groups can be knowers in virtue of possessing cognitive systems that are functioning correctly.