A universal preference for animate agents in hominids

Summary When conversing, humans instantaneously predict meaning from fragmentary and ambiguous speech, long before an utterance is complete. They do this by integrating priors (initial assumptions about the world) with contextual evidence to rapidly settle on the most likely meaning. One powerful prior is an attentional preference for agents, which biases sentence processing, but does so universally only when agents are animate. Here, we investigate the evolutionary origins of this preference by allowing chimpanzees, gorillas, orangutans, human children, and human adults to freely choose between agents and patients in still images, following video clips depicting their dyadic interaction. All participants preferred animate (and occasionally inanimate) agents, although the effect was attenuated if patients were also animate. The findings suggest that a preference for animate agents evolved before language and is not reducible to simple perceptual biases. In conclusion, both humans and great apes prefer animate agents in decision tasks, echoing a universal prior in human language processing.


Figure S1 :
Figure S1: General schematic procedure for a testing session (related to STAR Methods). A session started with a green screen that had to be pressed to obtain sound feedback and a food reward (for great apes only). Great ape participants then went through five trials with random images of 380 x 380 pixels and three trials of the third phase of the training. Approximately ten test trials followed. The session ended with five new trials of random images and a red screen. The clips were played full screen; the still images were displayed at 1920 x 920 pixels and the fruit and cup images at 600 x 600 pixels. The human participants started with the green screen and proceeded directly to the test trials.
Table S4: Detailed posterior estimates of the Bayesian Bernoulli regression modelling agent choice across species and type of interaction (related to Fig. 1B). Note. Clips from the Animate>Animate condition (">" stands for "acting on"). We report the mean posterior estimates and estimated errors of all fixed parameters. We additionally include the posterior probability that a given coefficient is above or below 0 on the logit scale, 0 corresponding to a .5 probability of agent choice. In the model, all Pareto k estimates were good (k < 0.5), suggesting that results were not driven by overly influential data points and that our elpd estimates are reliable. Children saw only subsets of all the clips owing to constraints of time and attention span (participants were recruited among the zoo visitors). AN stands for animate and IN for inanimate.
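The posterior probability that a coefficient lies above or below 0 on the logit scale can be read off directly from posterior draws, since 0 on the logit scale maps to a .5 probability of choosing the agent. The following is a minimal sketch of that computation, assuming hypothetical posterior samples in place of the fitted model's actual draws (the location and scale below are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of an intercept on the logit scale
# (stand-ins for the draws of the fitted Bernoulli regression).
draws = rng.normal(loc=0.8, scale=0.3, size=4000)

# Posterior probability that the coefficient is above 0,
# i.e. that the agent-choice probability exceeds .5.
p_above = np.mean(draws > 0)

# Inverse-logit of the posterior mean gives the implied choice probability.
p_choice = 1 / (1 + np.exp(-draws.mean()))

print(round(p_above, 3), round(p_choice, 3))
```

The same draws-based logic applies to any fixed parameter in the table: the reported probability is simply the share of posterior samples on one side of zero.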

Figure S2 :
Figure S2: Results of the variable selection (related to STAR Methods).

Figure S3 :
Figure S3: Posterior probability of agent choice for each species across conditions and age, extracted from the Bayesian GAM (related to Fig. 1A). Ages were centred for each participant in the model. The dashed line corresponds to a random choice (50%). The thick lines represent the mean of the estimates and the three shade levels represent the 30, 60 and 90% credible intervals. Twenty human adults, fifty human children, four chimpanzees, four gorillas and five orangutans were tested in each condition, but one adult was removed from the AN>AN condition. Human adults and children were merged in this analysis. Twenty-five clips were presented in the AN>IN condition, 41 in the IN>IN, 74 in the AN>AN and 15 in the IN>AN condition. AN stands for animate, IN for inanimate and ">" for "acting on".

Figure S4 :
Figure S4: Different setups used in the study (related to STAR Methods). (A) Setup used for the humans, (B) fixed setup in the chimpanzees' enclosure, (C) fixed setup in the gorillas' enclosure and (D) movable setup used to test the orangutans. Photo credits: (A, C, D) S. Brocard and (B) Zoo Basel.

Figure S5 :
Figure S5: Example of a still image across the three-stage training (related to STAR Methods). (A) Phase 1 with the white background; (B) phase 2 with the blurred background; and (C) phase 3 with the natural one. Photo credit: Orangutan Jungle School, Season #1, NHNZ Worldwide.

Figure S6 :
Figure S6: Posterior predictive checks for (A) the conditions model and (B) the content model (related to STAR Methods). The first panels compare the posterior predictive distribution (yrep) with the observed data (y). The second panels compare the predictive distribution of the mean with the observed mean. The third and fourth panels are broken down by species and condition (for A) and by content of the event (for B). All posterior predictions capture the data very well.
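The second panels of these checks compare the observed mean with the distribution of means across replicated datasets. A minimal sketch of that idea, using simulated Bernoulli choices and a conjugate Beta posterior as stand-ins for the actual model and data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed choices (1 = agent chosen, 0 = patient chosen).
y = rng.binomial(1, 0.7, size=200)

# Posterior draws of the choice probability (conjugate Beta posterior,
# a stand-in for the fitted model's draws).
theta_draws = rng.beta(1 + y.sum(), 1 + len(y) - y.sum(), size=1000)

# One replicated dataset (yrep) per posterior draw.
yrep = rng.binomial(1, theta_draws[:, None], size=(1000, len(y)))

# Check: does the observed mean fall within the bulk of replicated means?
rep_means = yrep.mean(axis=1)
low, high = np.quantile(rep_means, [0.05, 0.95])
print(low <= y.mean() <= high)
```

If the observed mean sits comfortably inside the distribution of replicated means, the model "captures the data well" in the sense of the figure.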

Table S1: Details of the variables used and their coding (related to STAR Methods).
Numeric. Difference (agent − patient) in total time spent moving during the event. If negative, the patient moves more than the agent; if positive, the agent moves for longer than the patient.

Table S2: Detailed posterior estimates of the Bayesian Bernoulli regressions for each species in both models (related to Fig. 1).
Note. Agent choice across conditions and across types of interaction in the Animate>Animate condition. We report the mean posterior estimates and estimated errors of the fixed parameters of interest. All posterior estimates are given in Tables S3 and S4. AN stands for animate, IN for inanimate and ">" for "acting on".

Table S6: Detailed information on the great apes that participated in the study (related to STAR Methods).
Note. Names in italics indicate immature individuals. F: female; M: male.