Systematic Review

Predictive physiological anticipation preceding seemingly unpredictable stimuli: An update of Mossbridge et al’s meta-analysis

[version 1; peer review: 2 approved with reservations]
PUBLISHED 28 Mar 2018

Abstract

Background: This is an update of Mossbridge et al.'s meta-analysis of physiological anticipation preceding seemingly unpredictable stimuli. The overall effect size observed in that analysis was 0.21; 95% Confidence Interval: 0.13 – 0.29.
Methods: Eighteen new peer-reviewed and non-peer-reviewed studies completed from January 2008 to October 2017 were retrieved, describing a total of 26 experiments and 34 associated effect sizes.
Results: The overall weighted effect size, estimated with a frequentist multilevel random model, was 0.29; 95% Confidence Interval: 0.19 – 0.38; the overall weighted effect size, estimated with a multilevel Bayesian model, was 0.29; 95% Credible Interval: 0.18 – 0.39. Effect sizes of peer-reviewed studies were slightly higher (0.38; 95% Confidence Interval: 0.27 – 0.48) than those of non-peer-reviewed articles (0.22; 95% Confidence Interval: 0.05 – 0.39). The statistical estimation of publication bias using the Copas selection model suggests that the main findings are not contaminated by publication bias.
Conclusions: In summary, with this update, the main findings reported in Mossbridge et al.'s meta-analysis are confirmed.

Keywords

pre-stimulus activity, anticipatory physiology, temporal processing, psychophysiology, presentiment

Introduction

The human ability to predict future events has been crucial to our evolutionary development and proliferation, both at the species level and at the individual level. Our day-to-day survival is predicated on a successful marriage of experience (e.g., memory) and sensory processing (e.g., perceptual cues); for example, on a very humid, heavily overcast night, our perceptions and memories inform us that a thunderstorm is possible and that it might be prudent to find shelter. Such behaviour is highly adaptive, as it fosters survival-based strategies, and is perfectly explicable in terms of current theories of biological causality. Now imagine if such prognosticating ability were possible without any sensory or other inferential cues. Such a seemingly inexplicable ability would certainly hold a survival advantage, if it existed. For millennia people have reported strange feelings of foreboding that later transpired to have significance. Over the last 36 years these phenomena have been scrutinized in the laboratory, in experiments in which a subject's physiology is monitored before a randomly presented stimulus designed to evoke a significant post-stimulus response. Strikingly, moments before the stimulus is presented there are murmurings of activity, as if the body were reacting ahead of time. This effect is termed presentiment or, more recently, Predictive Anticipatory Activity (Mossbridge et al., 2014). By 2012 a good number of these studies had been completed and it was deemed worthwhile to conduct a meta-analysis of the extant literature. Mossbridge, Tressoldi and Utts located 42 studies published from 1978 to 2010 testing the presentiment hypothesis, of which 26 enabled a true comparison between pre- and post-stimulus epochs (Mossbridge et al., 2012); that is, the pre-stimulus physiological responses mirrored, even if to a lesser degree, the post-stimulus responses.

Two paradigms were used here: either a randomly ordered presentation of arousing vs. neutral stimuli, or guessing tasks in which the stimulus is the feedback about the participant's guess (correct vs. incorrect). In both approaches it is difficult to envision mundane strategies that might explain the anomalous pre-stimulus effects observed, and indeed Mossbridge et al. went to significant lengths to refute the leading candidate, expectancy effects, both in the 2012 meta-analysis and in post-review exchanges with sceptical psychologists and physiologists. Regardless of the paradigm, a broad range of physiological measures was employed: skin conductance, heart rate, blood volume, respiration, electroencephalographic (EEG) activity, pupil dilation, blink rate, and/or blood oxygenation level dependent (BOLD) responses. These are recorded throughout the session, with a pre-determined anticipatory period of between 4 and 10 seconds in which any pre-stimulus effect is captured. The presentiment hypothesis calls for a difference between arousing and neutral pre-stimulus responses, and this is calculated across sessions. Mossbridge et al. found substantive evidence in favour of a presentiment effect, combining to over 6 sigma – extreme statistical significance. Additionally, they found evidence of presentiment effects from mainstream research programs – something that is becoming increasingly important as these effects become more widely known.

Because of the high-profile nature of Mossbridge et al. (over 93,000 views as of January 2018), there have been a good number of replications in the few years since publication. We located an additional 18 studies, describing 26 experiments and 34 effect sizes, from a dozen laboratories. The most striking aspect of this fresh database is the sheer variation in experimental approaches, as researchers seek to tackle more process-oriented questions rather than continuing the proof-oriented work found in the earlier meta-analysis. Because expectancy effects have been put forward to explain at least some of the presentiment effect, it is noteworthy that several experiments in this fresh cohort of studies tackle this head-on by analysing only the first trial of a run. These single-trial presentiment studies are expectancy-free and are becoming more dominant in this research domain. Another interesting question probed in these new studies is the idea of utilizing pre-stimulus physiological activity to predict future events; this provides a second, objective measure of the validity of the presentiment effect. Several studies utilize this approach and they are discussed later on. Additionally, we found increasing evidence of presentiment research piggybacking onto mainstream psychology programs, even informing aspects of the conventional research. Also of note, we found several PhD theses describing presentiment research and a greater geographical spread than in 2012, both evidence of the increasing attention such research is garnering. Lastly, we found increasing dialogue between presentiment researchers and physicists interested in retrocausality, the idea that effects can precede their causes. This is witnessed in the recent AAAS retrocausality symposium, in which several researchers participated and from which some papers made their way into this meta-analysis (Sheehan, 2017).

Methods

The whole procedure followed the APA Meta-Analysis Reporting Standards (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008), the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (Moher et al., 2015), and the reporting standards for literature searches and report inclusion (Atkinson et al., 2015). A completed PRISMA checklist can be found in Supplementary File 1.

Study eligibility criteria

The study inclusion criterion was the analysis of psychophysiological or neurophysiological signals before the random presentation of any type of stimulus, e.g. pictures, sounds, etc. Randomization could be performed using pseudo-random algorithms, e.g. those implemented in MatLab or E-Prime®, or true random sources of random digits, e.g. TrueRNG.
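The distinction between the two kinds of randomization can be illustrated with a short sketch (illustrative Python, not the experimental software itself; the seeded generator stands in for the pseudo-random algorithms of MatLab or E-Prime, and the operating system's entropy pool for a hardware true random source such as TrueRNG):

```python
import random

def pseudo_random_order(n_arousing, n_neutral, seed=None):
    """Shuffle the trial list with a seeded pseudo-random generator,
    analogous to PRNG-based randomization in MatLab or E-Prime."""
    trials = ["arousing"] * n_arousing + ["neutral"] * n_neutral
    random.Random(seed).shuffle(trials)
    return trials

def true_random_order(n_arousing, n_neutral):
    """Shuffle using the operating system's entropy pool, a software
    stand-in for a hardware true random source such as TrueRNG."""
    trials = ["arousing"] * n_arousing + ["neutral"] * n_neutral
    random.SystemRandom().shuffle(trials)
    return trials

order = pseudo_random_order(10, 30, seed=42)
```

The key operational difference is reproducibility: a seeded pseudo-random sequence can be regenerated exactly from its seed, whereas an entropy-based sequence cannot, which matters for arguments about whether a participant could in principle anticipate the sequence.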

It is important to point out that these eligibility criteria are different from those used by Mossbridge et al. Those authors selected only studies where the anticipatory signals mirrored the post-stimulus ones. In contrast, we included all studies that used anticipatory signals to predict future events, independently of the presence of post-stimulus physiological signals. For example, some authors, e.g. Mossbridge (2015), used heart rate variability to predict winning (i.e. $4) versus losing outcomes. Our inclusion criteria are consequently more comprehensive than those used by Mossbridge et al.

Study retrieval procedure

Both co-authors, who are experts in this type of investigation, searched for studies through Google Scholar and PubMed using the keywords “presentiment” OR “anticipation” OR “precognition”. Furthermore, we emailed a request for the data of completed studies to all authors we knew were involved in this type of investigation. Even though Mossbridge et al. included all studies available up to 2010, we also searched for studies that could have been missed in that meta-analysis. We searched for all completed studies, both peer reviewed and non-peer reviewed, e.g. Ph.D. dissertations, from January 2008 to October 2017.

Study selection

Study selection is illustrated in the flow-diagram presented in Figure 1.


Figure 1. Flow-diagram of study selection.

Excluded records were studies where the psychophysiological variables were analysed only after, and not before, the stimulus presentations (Jin et al., 2013), and studies with an unusual procedure (Tressoldi et al., 2015), i.e. using heart rate feedback to inform a voluntary decision to predict random positive or negative events.

Records excluded after the screening were studies whose authors did not agree to share their data, for various reasons (Baumgart et al., 2017; Modestino et al., 2011). In most cases the excluded studies revealed either statistically significant or trending evidence in support of the anticipation effect, thus reducing concerns about biased removal.

The references of the included studies are reported in Supplementary File 2.

Coding procedure

The two co-authors agreed on the following coding variables and independently extracted them from the eligible studies: authors; year of publication; participant selection (yes = selected according to specific criteria; no = selected without specific criteria); number of participants; number of trials; stimuli type; type of randomisation (pseudo or true random); psychophysiological signals, e.g. EEG, heart rate, etc.; anticipatory period; type of statistics; value of statistics. After comparison, they discussed how to resolve inter-coder differences.

In the database we added a note for each effect size, describing where in the original papers we extracted the corresponding statistics. The database, along with all 18 papers, is available from Tressoldi (2017). A summary of the selected studies, along with their corresponding effect sizes, variances and standard errors, is reported in Table S1 in Supplementary File 3.

Moderator variables

Apart from the overall effect, we examined the following moderator variable: peer review status (PeerRev, yes vs. no), as a control of study quality. Given the low number of studies, no further moderator analyses were carried out.

Statistical methods

The standardized effect size d of each dependent variable was estimated from the descriptive statistics (means, standard deviations and number of participants) when available. In all other cases, it was estimated from the available summary statistics, e.g. paired t-test, Stouffer's Z, etc., using Lakens' software (Lakens, 2013) and the escalc() function of the R package metafor (Viechtbauer, 2017).

All effect sizes were then converted into Hedges' g with the corresponding variance, using the formulae suggested by Borenstein et al. (2009) and assuming an average correlation of 0.5 between the dependent variables.
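The conversion of a paired t statistic into Hedges' g under the assumed correlation of 0.5 can be sketched as follows (a minimal Python illustration of the standard Borenstein et al. matched-groups formulae, not the R code used for the analysis):

```python
import math

def d_from_paired_t(t, n, r=0.5):
    """Cohen's d from a paired t statistic using the matched-groups
    conversion: d = (t / sqrt(n)) * sqrt(2 * (1 - r)).
    With the assumed correlation r = 0.5 this reduces to t / sqrt(n)."""
    return (t / math.sqrt(n)) * math.sqrt(2 * (1 - r))

def hedges_g(d, n, r=0.5):
    """Hedges' g and its variance for a matched design.
    var(d) = (1/n + d^2 / (2n)) * 2 * (1 - r); the small-sample
    correction J = 1 - 3 / (4 * df - 1) gives g = J * d."""
    df = n - 1
    var_d = (1 / n + d**2 / (2 * n)) * 2 * (1 - r)
    j = 1 - 3 / (4 * df - 1)
    return j * d, j**2 * var_d

d = d_from_paired_t(t=2.5, n=50)   # with r = 0.5, d = 2.5 / sqrt(50)
g, var_g = hedges_g(d, n=50)
```

The small-sample correction J shrinks d slightly (g < d), which matters for the modest sample sizes typical of this literature.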

Given our choice of keeping (not averaging) all effect sizes when multiple dependent variables were analysed, we estimated the overall random-model weighted effect size using the robumeta package (Fischer et al., 2017), which implements a Robust Variance Estimation method for dependent effect sizes (Tanner-Smith & Tipton, 2014).
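The inverse-variance random-effects weighting that Robust Variance Estimation builds upon can be sketched as follows (a minimal DerSimonian-Laird illustration in Python; robumeta's RVE additionally adjusts weights and standard errors for the dependence among effect sizes, which this sketch ignores):

```python
def dl_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling: estimate the
    between-study variance tau^2 from Cochran's Q, then re-weight
    each effect by 1 / (v_i + tau^2). Returns (pooled effect, tau^2)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    mean_fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - mean_fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

When all effects are identical, tau-squared collapses to zero and the pooled estimate equals the common value; heterogeneous effects inflate tau-squared and flatten the weights toward equality.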

In order to check the reliability of the results, a second analysis was carried out using the multilevel approach suggested by Assink & Wibbelink (2016), implemented with the metafor package (Viechtbauer, 2010) and reported in Table S2 in Supplementary File 3.

The Bayesian meta-analysis was implemented with the brms package (Bürkner, 2017).

A copy of the analysis syntax is available here: https://doi.org/10.6084/m9.figshare.5661070.v1 (Tressoldi, 2017).

Even if our search activity makes us quite confident that we reduced the problem of publication bias to a minimum, we performed a statistical estimation using the Copas selection model, which is recommended by Jin et al. (2015).

Results

Descriptive statistics

Studies: peer-reviewed papers: 8; non-peer-reviewed papers: 10. Number of experiments: 26, contributed by 13 authors. Number of effect sizes: 34. Average number of participants: 97.5. Average anticipatory period: 3.5 seconds. Four studies were preregistered (see database).

The separate group analyses for males and females reported in three papers (Mossbridge, 2014; Mossbridge, 2015; Singh, 2009) were considered independent effect sizes.

Frequentist multilevel random model

The forest plot is presented in Figure 2. The summary of the frequentist multilevel random model analysis is presented in Table 1 compared with the results obtained by Mossbridge et al., whereas the summary of the Bayesian multilevel random model meta-analysis is presented in Table 2.


Figure 2. Forest plot of the frequentist multilevel random model analysis.

Table 1. Results of the frequentist multilevel random model analysis.

| | n | ES | 95% Conf. Int. | p | I² | τ² |
| --- | --- | --- | --- | --- | --- | --- |
| Mossbridge et al. | 26 | 0.21 | 0.13 – 0.29 | 5.7×10⁻⁸ | 27.4 | 0.012 |
| Overall | 26 | 0.29 | 0.19 – 0.38 | 8×10⁻⁶ | 82.5 | 0.049 |
| Peer Review | 12 | 0.38 | 0.27 – 0.48 | 1×10⁻⁵ | 43.5 | 0.012 |
| No Peer Review | 14 | 0.22 | 0.05 – 0.39 | 0.014 | 85.2 | 0.048 |

n = number of experiments; ES = estimated effect size with corresponding 95% confidence intervals and p values; I² = effect size heterogeneity; τ² = effect size variance heterogeneity.
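The I² heterogeneity statistic reported in Table 1 can be illustrated with a short sketch (Higgins' I² computed from Cochran's Q; a hypothetical helper for illustration, not the metafor implementation):

```python
def i_squared(q, k):
    """Higgins' I-squared: the percentage of total variation across k
    effect sizes attributable to heterogeneity rather than sampling
    error, computed from Cochran's Q as 100 * (Q - df) / Q,
    truncated at zero."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100
```

For instance, when Q is twice its degrees of freedom, half of the observed variation reflects true heterogeneity; when Q falls below its degrees of freedom, I² is reported as zero.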

Table 2. Results of the Bayesian Multilevel Random Model.

| | n | Effect size | 95% CI | Rhat |
| --- | --- | --- | --- | --- |
| Overall | 26 | 0.29 | 0.18 – 0.39 | 1 |
| Peer Review | 12 | 0.36 | 0.23 – 0.48 | 1 |
| No Peer Review | 14 | 0.23 | 0.04 – 0.42 | 1 |

Rhat = ratio of the average variance of samples within each chain to the variance of the pooled samples across chains; CI = Credible Intervals.
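The Rhat convergence diagnostic described in the table note can be sketched as follows (a simplified Gelman-Rubin computation in Python; brms/Stan use a more refined split-chain version, but the idea is the same: values near 1 indicate that the chains have mixed):

```python
import statistics

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor (Rhat) for a list of
    equal-length MCMC chains: compares between-chain variance (B)
    to the average within-chain variance (W)."""
    m = len(chains)
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between
    w = statistics.fmean([statistics.variance(c) for c in chains])  # within
    var_hat = (n - 1) / n * w + b / n   # pooled variance estimate
    return (var_hat / w) ** 0.5
```

Chains sampling the same distribution give Rhat close to 1; chains stuck in different regions inflate the between-chain variance and push Rhat well above 1.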

Sensitivity analysis of the overall effect size did not reveal any change from rho = 0 to rho = 1, suggesting that the degree of correlation among the dependent effect sizes does not affect its magnitude.

Another sensitivity analysis was carried out excluding the Mossbridge and Tressoldi studies, in order to check whether different authors could obtain similar results. The main results of this analysis, using the same frequentist multilevel random model, are reported in Table 3.

Table 3. Results of the frequentist multilevel random model without Mossbridge and Tressoldi studies.

| | n | Effect size | 95% CI | p | I² | τ² |
| --- | --- | --- | --- | --- | --- | --- |
| Overall | 19 | 0.23 | 0.05 – 0.40 | 0.017 | 82.7 | 0.065 |

I² = percentage of variation across studies due to heterogeneity; τ² = variance of the true effect sizes; CI = Confidence Interval.

Both the frequentist and the Bayesian analyses support the evidence of an overall main effect of approximately 0.29, and a small difference between the peer-reviewed and non-peer-reviewed studies. These findings are discussed further below, in the comparison with Mossbridge et al.

Publication bias

The search method used and the small number of researchers active in this field make us confident that, from an empirical point of view, publication bias is almost absent.

Unfortunately, there is no consensus about which statistical tests for publication bias are most valid (Carter et al., 2017). All the traditional tests, such as the Fail-Safe N, Trim-and-Fill and the Funnel Plot, have been criticized for their limitations (Jin et al., 2015; Rothstein, 2008). We hence applied the Copas selection model recommended by Jin et al. (2015), implemented using the metasens package (Schwarzer et al., 2016). The results are presented in Table 4: with this statistic, no apparent publication bias emerges.

Table 4. Estimated effect size and corresponding 95% Confidence Intervals (CI) of the Copas Model.

| | Effect size | 95% CI |
| --- | --- | --- |
| Copas Model adjusted | 0.28 | 0.20 – 0.37 |

Discussion

This update of the Mossbridge et al. (2012) meta-analysis of so-called predictive anticipatory activity (PAA) responses to future random stimuli covers the period from January 2008 to October 2017. Overall, we found 18 new studies describing a total of 34 effect sizes. Differently from the statistical approach of Mossbridge et al., in this meta-analysis we used frequentist and Bayesian multilevel models, which allow an analysis of all effect sizes reported within a single study instead of averaging them.

Both the frequentist and the Bayesian analyses converged on similar results, making our findings quite robust. The overall effect size, 0.29, 95% CI = 0.18 – 0.39, overlaps with that reported in the original paper, 0.21, 95% CI = 0.13 – 0.29, even if the heterogeneity is substantially higher: I² = 80.5 vs. 27.4.

The high level of heterogeneity is expected considering the varieties of experimental protocols and the diversity of dependent variables, from heart rate to pupil dilation.

Furthermore, as in the original paper, we did not find substantial differences between peer-reviewed and non-peer-reviewed papers.

We found very interesting evidence of presentiment distilled from the conventional post-stimulus psychological research of Jolij and Bierman, who have performed a long series of experiments using a face detection paradigm. Additionally, the work of Kittenis found pre-stimulus effects within a conventional research program, and the pre-registered single-trial work of Mossbridge represents an important conceptual replication, countering both questionable-research-practices and expectancy-effects arguments.

A promising development in this line of research is the design of paradigms that use software in real time to predict meaningful future outcomes before they occur, e.g. Franklin et al. (2014).

Conclusion

This update confirms the main results reported in Mossbridge et al. (2012) original meta-analysis and gives further support to the hypothesis of predictive physiological anticipation of future random events.

The limitations of the present meta-analysis are those shared by most meta-analyses that include non-preregistered studies, which cannot be controlled for the degrees of freedom in methodology and data analysis during their implementation, making them prone, for example, to so-called “questionable research practices” (John et al., 2012).

The solution is prospective meta-analysis (Watt & Kennedy, 2017), based on preregistered studies whose methods and data analyses have been declared and made public beforehand.

Data availability

Underlying data for this meta-analysis are available from FigShare: https://doi.org/10.6084/m9.figshare.5661070.v1 (Tressoldi, 2017), under a CC BY 4.0 licence.

How to cite this article: Duggan M and Tressoldi PE. Predictive physiological anticipation preceding seemingly unpredictable stimuli: An update of Mossbridge et al's meta-analysis [version 1; peer review: 2 approved with reservations]. F1000Research 2018, 7:407 (https://doi.org/10.12688/f1000research.14330.1)
Open Peer Review

Reviewer Report 05 Jul 2018: Stephen Baumgart, Department of Psychology and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA. Approved with Reservations (https://doi.org/10.5256/f1000research.15593.r35197)

Reviewer Report 10 Apr 2018: David Vernon, Canterbury Christ Church University (CCCU), Kent, UK. Approved with Reservations (https://doi.org/10.5256/f1000research.15593.r32577)