Tailored haemodynamic response function increases detection power of fMRI in awake dogs (Canis familiaris)

Functional magnetic resonance imaging (fMRI) of awake and unrestrained dogs (Canis familiaris) has been established as a novel opportunity for comparative neuroimaging, promising important insights into the evolutionary roots of human brain function and cognition. However, data processing and analysis pipelines are often derivatives of methodological standards developed for human neuroimaging, which may be problematic due to profound neurophysiological and anatomical differences between humans and dogs. Here, we explore whether dog fMRI studies would benefit from a tailored dog haemodynamic response function (HRF). In two independent experiments, dogs were presented with different visual stimuli. BOLD signal changes in the visual cortex during these experiments were used for (a) the identification and estimation of a tailored dog HRF, and (b) the independent validation of the resulting dog HRF estimate. Time course analyses revealed that the BOLD signal in the primary visual cortex peaked significantly earlier in dogs compared to humans, while being comparable in shape. Deriving a tailored dog HRF significantly improved the model fit in both experiments, compared to the canonical HRF used in human fMRI. Using the dog HRF yielded significantly increased activation during visual stimulation, extending from the occipital lobe into the caudal parietal cortex, the bilateral temporal cortex, and bilateral hippocampal and thalamic regions. In sum, our findings provide robust evidence for an earlier onset of the dog HRF in two visual stimulation paradigms, and suggest that using such an HRF will be important to increase fMRI detection power in canine neuroimaging. By providing the parameters of the tailored dog HRF and related code, we encourage and enable other researchers to validate whether our findings generalize to other sensory modalities and experimental paradigms.

The shape of the human HRF has been discussed extensively since its adoption in fMRI data analysis (Aguirre, Zarahn, & D'Esposito, 1998; Boynton, Engel, Glover, & Heeger, 1996; Glover, 1999). Numerous factors causing HRF variability have been identified, e.g., developmental changes (Arichi et al., 2012) and clinical conditions (Ford, Johnson, Whitfield, Faustman, & Mathalon, 2005). A frequent approach to account for potential HRF variability within a participant sample (used twice for dog samples: Jia et al., 2014, 2016) is to add temporal and/or dispersion derivatives (TDD) to the HRF regressor when applying the GLM, forming a so-called informed basis set (Friston, Fletcher, et al., 1998; Friston, Josephs, Rees, & Turner, 1998; Henson, Price, Rugg, Turner, & Friston, 2002). Despite the increased flexibility in the model, the basis function depends on prior knowledge about the average shape of the underlying BOLD signal, which is currently not available in canine neuroscience research.
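To illustrate the informed-basis-set idea, the canonical double-gamma HRF can be extended with finite-difference approximations of its temporal and dispersion derivatives. The sketch below is a minimal Python/NumPy re-implementation (the studies above used SPM/MATLAB); the function names and step sizes are our own illustrative choices, not taken from any of the cited papers.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, p1=6.0, p2=16.0, p3=1.0, p4=1.0, p5=6.0, onset=0.0):
    """SPM-style canonical HRF: response gamma minus a scaled undershoot gamma."""
    tt = np.asarray(t, float) - onset
    return gamma.pdf(tt, p1 / p3, scale=p3) - gamma.pdf(tt, p2 / p4, scale=p4) / p5

def informed_basis_set(t, dt=0.1, dp=0.01):
    """Canonical HRF plus temporal and dispersion derivatives
    (finite differences with respect to onset and response dispersion)."""
    hrf = double_gamma_hrf(t)
    tderiv = (hrf - double_gamma_hrf(t, onset=dt)) / dt        # temporal derivative
    dderiv = (hrf - double_gamma_hrf(t, p3=1.0 + dp)) / dp     # dispersion derivative
    return np.stack([hrf, tderiv, dderiv])
```

Each stimulus regressor is then convolved with all three basis functions, yielding three parameter estimates per condition instead of one.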
Previous studies using invasive recordings indeed demonstrated that the HRF varies across mammalian species. In comparison to humans, the HRF was shown to peak earlier in rats (De Zwart et al., 2005; Lambers et al., 2020; Silva, Koretsky, & Duyn, 2007) and mice (Chen et al., 2020), while the HRF in macaque monkeys appears similar (Baumann et al., 2010; Goense & Logothetis, 2008; Koyama et al., 2004; Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001; Nakahara, Hayashi, Konishi, & Miyashita, 2002; Patel, Cohen, Baker, Snyder, & Corbetta, 2018). Deviations from the human HRF in terms of shape and temporal dynamics seem to decrease in species with closer common ancestry to humans (Upham, Esselstyn, & Jetz, 2019) and with increasing absolute brain size (see e.g., Roth & Dicke, 2005 for review). Considering the variations across species and potential differences in underlying neurophysiology, it seems plausible that the human HRF might deviate from the average BOLD signal in dogs. However, precise conclusions are currently not possible, as systematic investigations of the BOLD signal have not yet been performed in dogs.
Here, we aimed to close this gap and used non-invasive fMRI in awake dogs that were specifically trained for this approach. In two independent experiments, we used different visual stimulation paradigms and a step-wise analysis approach to establish and validate our results, respectively. In the first experiment, dogs viewed a flickering checkerboard interspersed with a baseline condition (flickering checkerboard experiment, experiment 1). The experiment employed a block design, aimed at achieving a robust measure of the average BOLD signal in the primary visual cortex (V1). Based on the resulting V1 BOLD signal data, we identified and estimated a tailored dog HRF, compared its model fit to that of the human HRF, and assessed differences in whole-brain activation between the two HRFs. We also tested whether adding time and dispersion derivatives to the human HRF could sufficiently account for potential deviations of the dog HRF from the human HRF. Data from a second experiment, which had employed an event-related visual stimulation design (face processing experiment, experiment 2), were then used to validate the results from the flickering checkerboard experiment. We opted for visual stimulation as V1 can be easily located (see e.g., Langley & Grünbaum, 1890; Marquis, 1934; Uemura, 2015; Wing & Smith, 1942), thus ameliorating the problem posed by the lack of a common three-dimensional coordinate system in canines. Finally, to encourage reproducibility, we openly share our data and provide a detailed description of the processing and analysis pipeline (see also, for similar challenges regarding reproducibility in human fMRI: Carp, 2012a, 2012b; Nichols et al., 2017; Poldrack et al., 2008, 2017). Together, our results provide a first investigation of whether the human HRF model appropriately fits the average BOLD signal in dogs and whether estimating a novel dog HRF can increase fMRI specificity and detection power.
All dogs passed an initial medical examination concerning eyesight and general health. The human caregivers gave written informed consent to their dogs' participation and did not receive any monetary compensation. The dogs were fully awake and unrestrained, and were able to exit the MR scanner at any time. To achieve this, they received extensive training prior to the MRI sessions in order to habituate them to the MRI environment (see Karl, Boch, Virányi, Lamm, & Huber, 2019 for a detailed description of the training procedure, and Berns & Cook, 2016; Strassberg, Waggoner, Deshpande, & Katz, 2019 for similar procedures). The study was approved by the institutional ethics and animal welfare commission in accordance with Good Scientific Practice (GSP) guidelines and national legislation at the University of Veterinary Medicine Vienna (ETK-06/06/2017), based on a pilot study conducted at the University of Vienna. The current study complies with the ARRIVE guidelines (Kilkenny, Browne, Cuthill, Emerson, & Altman, 2010).

Preparation
Together with the dog trainer, the dog entered the MR scanner room wearing earplugs and an additional head bandage to secure optimal earplug positioning and to enhance noise protection. The dog then accessed the scanner bed via a custom-made ramp and positioned the head inside the coil, seated in sphinx position (Fig. 1 A). The dog trainer then moved the dog into the scanner bore, and visual tasks were presented on a 32-inch MR-compatible computer screen placed at the end of the scanner bore. Additionally, we used the camera of an eye-tracker (EyeLink 1000 Plus, SR Research, Ontario, Canada) to ensure that the dogs stayed awake, did not close their eyes during stimulus onsets, and did not look away from the visual stimulation (i.e., downward gaze during stimulus presentation; see supplementary material A, figure S1 for an example of the monitoring setup), and to monitor overall movement (N = 5 dogs in experiment 2 were not monitored due to later implementation of the camera). The dog trainer remained in the MR scanner room throughout the entire scan session but left the dog's visual field before task onset. The majority of the dogs first participated in the flickering checkerboard experiment, followed by the face processing experiment in a subsequent MR session (Fig. 1 B). Data acquisition was aborted if the dog moved extensively (as observed using eye-tracking, see above) or if the dog exited the scanner bore during the task. Data collection was then repeated within the same or a subsequent session, depending on the dog's motivation. Following the scan session, we evaluated the realignment parameters and re-invited the dog to repeat the experiment in a subsequent session if head motion exceeded a threshold of 3 mm (Fig. 1 C). On average, two scan sessions were necessary to complete the experiment below the motion threshold for both experiments; 12 out of 17 dogs and 9 out of 14 dogs succeeded in their first scan session for experiment 1 and experiment 2, respectively. After completing a run, the dog exited the MR scanner and received a food reward.

Flickering checkerboard experiment (experiment 1)
The task used in this experiment alternated between blocks of visual stimulation (a flickering checkerboard covering the whole screen, with a green cross in the centre, for 10 s) and a visual baseline (a green cross presented on a black screen for 10 s). The total task duration was 2.2 min, including six blocks of visual stimulation and six blocks of baseline in a fixed order, starting with the visual baseline condition (see Fig. 1 B). We chose this experiment for the dog HRF estimation because a block design can be expected to be more robust and predictable, even if the human and dog HRFs and the actual BOLD signal time courses differed (see supplementary material A, section 2, Flipping experiment 1 and 2, for a flipped study design using the face processing experiment as the HRF estimation data set, which yielded similar results).
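For reference, the fixed block timing described above can be written down explicitly. The following Python sketch is our own illustrative code (the study built its designs in SPM/MATLAB); it generates the checkerboard block onsets and the corresponding un-convolved boxcar regressor for the baseline-first design with 10 s blocks and TR = 1 s.

```python
import numpy as np

TR = 1.0           # seconds (matches the EPI TR reported below)
BLOCK_LEN = 10     # seconds per condition block
N_BLOCKS = 6       # checkerboard blocks, interleaved with 6 baseline blocks

def checkerboard_onsets(start_with_baseline=True):
    """Onsets (s) of the flickering-checkerboard blocks in the fixed order."""
    first = BLOCK_LEN if start_with_baseline else 0
    return [first + i * 2 * BLOCK_LEN for i in range(N_BLOCKS)]

def boxcar(n_scans, onsets, duration, tr=TR):
    """Un-convolved stimulus regressor, one value per scan."""
    reg = np.zeros(n_scans)
    for onset in onsets:
        a = int(round(onset / tr))
        b = int(round((onset + duration) / tr))
        reg[a:min(b, n_scans)] = 1.0
    return reg
```

Convolving this boxcar with an HRF then yields the task regressor used in the first-level GLMs.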

Face processing experiment (experiment 2)
The task for experiment 2 alternated between short events of visual stimulation (3 s clips of varying conditions, showing smooth transitions between two facial expressions from different human models, all on white background; 500 × 500 pixels) and a visual baseline with a black cross on a white screen, jittered between 3 and 7 s (see Fig. 1 B). Within the scope of the present methodological study, we focused on visual responses compared to baseline, irrespective of the different conditions (results of these will be reported elsewhere). The total task comprised 60 trials of visual stimulation split into two runs. Each run took 5 min, with a short break outside the MR scanner if both runs were acquired in the same session.

MRI data acquisition
Data were collected using a 3T Siemens Skyra MR-system with a 15-channel coil developed for structural imaging of the human knee. Functional imaging data for both tasks were obtained from 24 axial slices (interleaved acquisition, descending order, covering the whole brain) using a 2-fold multiband-accelerated echo planar imaging (EPI) sequence and a voxel size of 1.5 × 1.5 × 2 mm³ (TR/TE = 1000/38 ms, field of view (FoV) = 144 × 144 × 58 mm³, flip angle = 61°, 20% gap). The task from experiment 1 (flickering checkerboard experiment) consisted of a single run comprising 134 scans, and the task employed in experiment 2 (face processing experiment) comprised two runs of 270 scans each. The dogs had multiple attempts to complete the task in case of excessive head motion (see 2.2. experimental design). For one dog, we truncated the second run to 190 scans due to excessive motion (change of head position) and the dog's unavailability for repeating the session. The structural image was obtained using a voxel size of 0.7 mm isotropic (TR/TE = 2100/3.13 ms, FoV = 230 × 230 × 165 mm³) and was acquired in a prior scan session, separated from the functional imaging sessions.

Fig. 1. Overview of the experimental approach to explore the average BOLD signal in dogs and estimate a tailored dog haemodynamic response function (HRF). (A) All dogs were trained to position their head in a 15-channel human knee coil and to stay motionless during data acquisition. (B) We acquired data in two different visual stimulation experiments. In (1), we extracted the average primary visual cortex (V1) BOLD signal using data from a flickering checkerboard experiment and estimated a tailored dog HRF. We compared this dog HRF to the canonical human HRF, and to the human HRF with time and dispersion derivatives (TDD). In (2), we validated the results using a face processing experiment, whose data served as an independent test data set. (C) Structural scans were acquired in a session prior to functional data acquisition of the visual stimulation experiments; functional tasks were acquired in separate sessions. Movement parameters were assessed after successful completion of a task. If motion exceeded 3 mm, we repeated the task in additional sessions. (D) We created individual tailor-made brain masks using itk-SNAP (Yushkevich et al., 2006) to skull-strip the structural images and consequently improve co-registration and normalization of the canine neuroimaging data. For the first-level analysis (step 10, first level (GLMs)), functional data are masked using anatomical boundaries (normalized binary mask). Illustrative structural and functional images as well as the binary mask were derived from one dog in the sample; tissue probability maps (TPMs) were taken from the canine breed-averaged atlas (Nitzsche et al., 2019). Numbers in bold describe the sequence of processing steps. Est., estimate; Res., resliced; GLM, general linear model.

Data processing and statistical analysis 2.4.1. MRI data preprocessing
All imaging data were analysed using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/) and Matlab 2014b (MathWorks; see Fig. 2 for an overview of the workflow). After slice timing correction (referenced to the middle slice, Sladky et al., 2011) and image realignment, the functional images were manually reoriented to match the orientation of the canine breed-averaged template (Nitzsche et al., 2019), with the rostral commissure as a visual reference, using the SPM module "Reorient images / Set origin". We then manually skull-stripped the structural image using an individual binary brain mask for each dog, created using itk-SNAP (Yushkevich et al., 2006). Based on preliminary analyses, skull-stripping canine imaging data proved to be essential for successful automatic co-registration: this way, the co-registration algorithm successfully detects brain borders instead of incorrectly relying on the large muscles that surround the dog brain but have similar image intensity (see Fig. 1 D). The structural image, the individual binary brain mask, and the functional imaging data were then co-registered to the mean functional image of each run. Next, the structural image was segmented ("Old Segmentation" module of SPM12) into grey matter, white matter, and cerebrospinal fluid, using the tissue probability maps provided by the canine breed-averaged template (Nitzsche et al., 2019). We then normalized (using the "Old Normalization" module of SPM12) the functional and structural imaging data, along with the individual binary brain mask. Lastly, functional images were resliced (1.5 mm isotropic) and smoothed using a 3 mm Gaussian kernel (full-width-at-half-maximum, FWHM).

Template normalization
We attempted to provide a unified coordinate system by combining two available templates: (1) a canine breed-averaged template (Nitzsche et al., 2019), combined with (2) the normalized labels from another canine template based on a single male Golden Retriever (Czeibert, Andics, Petneházy, & Kubinyi, 2019). First, we segmented ("Old Segmentation") the structural template (Czeibert et al., 2019) using the tissue probability maps provided by the breed-averaged template (Nitzsche et al., 2019). Then, we normalized ("Old Normalization") both the structural template and the NIfTI file containing the atlas labels.

fMRI data analysis
We first provide an overview of the analysis approach, followed by more details on each analysis step in the following sections (see also Fig. 3). For the exploratory investigation of the average BOLD signal and estimation of the tailored dog HRF, we first analysed activation changes in V1 during experiment 1 (contrast flickering checkerboard > visual baseline) in the following steps: (1) we extracted the average V1 time course of the BOLD signal employing a finite impulse response (FIR) model (exploration and estimation analysis step 1, extraction V1 BOLD signal); (2) we estimated a tailored dog HRF based on the FIR data above (exploration and estimation analysis step 2, dog HRF estimation); (3) we then compared the canonical human HRF (i.e., the default HRF parameters provided by SPM12) with the dog HRF using model fit analyses and Wilcoxon signed ranks tests (exploration and estimation analysis step 3, model fit comparison). Then, to expand comparisons to the whole brain, (4) we performed first-level analyses using the human HRF, the human HRF with time and dispersion derivatives, and the tailored dog HRF (exploration and estimation analysis step 4, first-level GLMs), and (5) analysed neuroimaging data on a group level along with paired-sample t-tests (exploration and estimation analysis step 5, group-level activation comparisons).
Next, to validate the results from experiment 1, which revealed an earlier peak of the V1 BOLD signal in dogs, we cross-validated them by analysing V1 activation changes during the face processing experiment (contrast faces > visual baseline), using a similar but modified approach: (1) we extracted the average time course of the V1 BOLD signal during the face processing experiment using a FIR model (validation step 1, extraction V1 BOLD signal); (2) we compared the HRF models based on their model fit using Wilcoxon signed ranks tests (validation step 2, model comparison); (3) we performed univariate activation analyses using the human HRF, the human HRF along with time and dispersion derivatives (TDD), and the dog HRF (validation step 3, first-level GLMs); lastly, (4) we performed group activation analyses along with paired-sample t-tests (validation step 4, group-level activation comparisons).

Fig. 3. Overview of the analysis approach. (A) Exploration and estimation analysis, based on the flickering checkerboard experiment: (1) extraction of the average V1 BOLD signal in dogs and visual comparison to the canonical human HRF model (i.e., the default HRF parameters provided by SPM12) using a finite impulse response (FIR) model; (2) estimation of a tailored dog HRF based on the empirical data; and (3) comparison of the model fits of the human and dog HRF in the visual cortex. On the whole-brain level, (4) we then performed first-level analyses using the human HRF, the human HRF along with time and dispersion derivatives (TDD), and the tailored dog HRF, in order to (5) perform whole-brain group comparisons using one-sample and paired-sample t-tests across HRF models. (B) Results from (A) were then validated using the data from the face processing experiment as an independent validation data set. All analysis steps were identical to the above, except for the dog HRF estimation. GLM, general linear model; BOLD, blood oxygenation level dependent.

Exploration and estimation analysis: Flickering checkerboard experiment (experiment 1)
Step 1: Extraction of the average V1 BOLD signal. We used a finite impulse response (FIR) model to measure the average V1 time course of the BOLD signal in dogs. This flexible approach makes minimal assumptions about the shape of the BOLD signal and thus results in independent response estimates for a predetermined number of time bins (in the present case, one time bin per TR). We estimated FIRs covering the visual stimulation blocks (starting at stimulus onset (0 s) until 10 s after stimulus offset), yielding a duration of 20 s. The 20 s were then divided into 20 time bins (TR = 1 s), each modelled with a separate regressor using an impulse response function. We then extracted the average V1 time course, based on V1 coordinates obtained from the group-based comparison using the human HRF (exploration and estimation analysis step 5; see also Table 1, section "human HRF"), using (a) a 4 mm sphere placed around the local maximum of the cluster that covered the occipital lobe (Fig. 4 A) and (b) a region expanding over V1 as determined by our atlas labels (Czeibert et al., 2019; Nitzsche et al., 2019). Finally, we extracted each dog's average BOLD time series and calculated the time course of activation induced by the visual stimulation block across all dogs.
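The FIR model described above can be sketched as follows (illustrative Python/NumPy; SPM constructs the equivalent design matrix internally). Each of the 20 post-onset time bins gets its own indicator column, so the fitted betas trace the shape of the V1 response without assuming any HRF.

```python
import numpy as np

def fir_design(n_scans, onsets, n_bins=20, tr=1.0):
    """FIR design matrix: one indicator column per post-onset time bin (TR grid)."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        start = int(round(onset / tr))
        for b in range(n_bins):
            if start + b < n_scans:
                X[start + b, b] += 1.0
    return X

def fir_betas(y, X):
    """Least-squares FIR estimates: the average response amplitude per time bin."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

Plotting the resulting betas against time reproduces the model-free response curves shown in Fig. 4.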
Step 2: Estimation of the dog HRF. Based on the results from step 1, which upon visual inspection revealed the need for a tailored dog HRF with an earlier onset, we estimated a new parametrization for SPM's canonical HRF, yielding a tailored dog HRF model. The spm_hrf function uses seven optional parameters to specify the shape of the HRF: the delay of the response (relative to onset, p1 = 6 s), the delay of the undershoot (relative to onset, p2 = 16 s), the dispersion of the response (p3 = 1), the dispersion of the undershoot (p4 = 1), the ratio of the response to the undershoot (p5 = 6), the onset (p6 = 0 s), and the length of the kernel (p7 = 32 s). We used MATLAB's fminsearch function, a multidimensional unconstrained nonlinear minimization method, to optimize the model fit of the regression analysis (R²-statistics of MATLAB's regress function) by varying the values of p1, p2, p5, and p6. The haemodynamic parameters were constrained to plausible ranges, and the regression analysis was identical to a standard SPM first-level analysis (see above, step 1). We chose not to deviate from the default values for the response dispersion (p3), the undershoot dispersion (p4), or the overall kernel length (p7), to prevent overfitting.
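The optimization above can be sketched in Python (our own re-implementation for illustration; the paper used MATLAB's fminsearch and regress). SciPy's "Nelder-Mead" method is the same simplex algorithm that fminsearch implements; parameter names follow SPM's spm_hrf convention, and the validity guard on the gamma shapes is our own addition, since the simplex search itself is unconstrained.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

def spm_like_hrf(p1, p2, p5, p6=0.0, p3=1.0, p4=1.0, length=32.0, dt=1.0):
    """Double-gamma HRF in SPM's parametrization, sampled every dt seconds."""
    t = np.arange(0.0, length, dt) - p6
    h = gamma.pdf(t, p1 / p3, scale=p3) - gamma.pdf(t, p2 / p4, scale=p4) / p5
    return h / np.abs(h).max()

def fit_hrf(fir_response, x0=(6.0, 16.0, 6.0, 0.0), dt=1.0):
    """Nelder-Mead search (fminsearch analogue) over p1, p2, p5, p6,
    maximizing R^2 between the model HRF and an empirical FIR time course."""
    y = np.asarray(fir_response, float)

    def neg_r2(p):
        p1, p2, p5, p6 = p
        if p1 <= 0 or p2 <= 0 or p5 <= 0:     # keep gamma shapes valid
            return np.inf
        h = spm_like_hrf(p1, p2, p5, p6, length=len(y) * dt, dt=dt)
        X = np.column_stack([np.ones_like(h), h])   # intercept + HRF regressor
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return (resid @ resid) / ((y - y.mean()) ** 2).sum()   # = 1 - R^2

    res = minimize(neg_r2, x0, method="Nelder-Mead")
    return res.x, 1.0 - res.fun
```

Starting the simplex at the canonical human parameters, as here, mirrors the paper's choice of the human HRF as the initial model.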
Step 3: Model fit comparison. We then calculated the individual single-subject R²-statistics of each GLM with the different HRF parameters and compared the model fit to the extracted V1 BOLD signal between the human and dog HRF using a Wilcoxon signed ranks test.
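A hypothetical per-dog version of this comparison might look like the following (illustrative Python; the paper computed R² with MATLAB's regress and ran the signed-ranks test on those values; function and variable names here are our own):

```python
import numpy as np
from scipy.stats import wilcoxon

def r_squared(y, x):
    """R^2 of an OLS fit of y on regressor x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def compare_hrf_fits(v1_timecourses, hrf_human, hrf_dog):
    """Per-dog R^2 under each HRF model, compared with a one-sided
    Wilcoxon signed-ranks test (alternative: dog HRF fits better)."""
    r2_h = np.array([r_squared(y, hrf_human) for y in v1_timecourses])
    r2_d = np.array([r_squared(y, hrf_dog) for y in v1_timecourses])
    stat, p = wilcoxon(r2_d, r2_h, alternative="greater")
    return r2_h, r2_d, p
```

The paired, non-parametric test is appropriate here because the two R² values come from the same dog and need not be normally distributed.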
Step 4: Human HRF. Using the GLM approach implemented in SPM12, we estimated contrast images for each dog that reflected task-related activation (contrast checkerboard > baseline). The first-level design matrix of each dog contained a task regressor modelling visual stimulation, time-locked to the onset of each block (duration 10 s) and convolved with the (canonical) human HRF. The six realignment parameters along with regressors modelling framewise displacement (see above) were added to the design matrix to account for head motion. Normalized, individually created binary masks (see above and Fig. 2) were used as explicit masks, and a high-pass filter with a cut-off at 128 s was applied.
Human HRF + TDD. Next, to account for HRF variability (Friston, Fletcher, et al., 1998; Friston, Josephs, et al., 1998; Henson et al., 2002), we added temporal and dispersion derivatives (TDD) to the human HRF. The visual stimulation regressor was thus convolved with the human HRF along with its TDD. This resulted in three regression parameter estimates: (1) the human canonical HRF (β1), (2) the time derivative (β2), and (3) the dispersion derivative (β3). We then combined all three regressors to form one "derivative boost (H)" regressor per dog (Calhoun, Stevens, Pearlson, & Kiehl, 2004; Lindquist et al., 2009). Dog HRF. Next, we set up a first-level model (same settings as for the human HRF above), in which the task regressor was convolved with the newly estimated dog HRF.
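One common formulation of this combination (Calhoun et al., 2004) takes the sign of the canonical beta times the root sum of squares of all three betas; whether the paper used exactly this variant or a related one (cf. Lindquist et al., 2009) is not stated, so the sketch below should be read as illustrative:

```python
import numpy as np

def derivative_boost(beta1, beta2, beta3):
    """Combine canonical (beta1), temporal-derivative (beta2), and
    dispersion-derivative (beta3) estimates into one amplitude,
    preserving the sign of the canonical response."""
    b1, b2, b3 = (np.asarray(b, float) for b in (beta1, beta2, beta3))
    return np.sign(b1) * np.sqrt(b1**2 + b2**2 + b3**2)
```

Applied voxel-wise to the three beta maps, this yields the single "H" image that enters the group-level tests.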
Step 5: Group-level activation comparison. To test for activation differences during visual stimulation on a group level, we implemented one-sample t-tests for each HRF model (contrasting flickering checkerboard > baseline; H-regressor for the TDD model), as well as paired-sample t-tests between the HRF models (checkerboard > baseline). Unless stated otherwise, significance was determined using cluster-level inference with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05, family-wise error (FWE) corrected for multiple comparisons. Cluster extent was calculated using the SPM extension "CorrClusTh.m" (by Thomas Nichols, University of Warwick, United Kingdom, and Marko Wilke, University of Tübingen, Germany; https://warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/scripts/spm/).

Validation analysis: Face processing experiment (experiment 2)
Independent data obtained during the face processing experiment (experiment 2) were then used to validate the exploratory results and to compare all three HRF models.
Step 1: Extraction of the average V1 BOLD signal. Similar to above (exploration and estimation analysis, step 1), we used a finite impulse response (FIR) model to extract the individual BOLD signal time courses, but defined 10 time bins starting at stimulus onset (0 s) until 7 s after stimulus offset. Each time bin had a duration of 1 s (= length of TR) and was modelled with a separate regressor using impulse response functions. We then placed a 4 mm sphere around the local maximum of the cluster encompassing V1, using the coordinates emerging from the human HRF + TDD model (validation analysis, step 4; Table 2, section "human HRF + TDD"), since the human HRF did not yield clusters surviving the significance threshold (Fig. 4 B).
Step 2: Model fit comparison. This step was almost identical to above (exploration and estimation analysis, step 3) but was performed based on the FIR data from experiment 2 (validation analysis, step 1).
Step 3: Human HRF. The analysis was identical to above (exploration and estimation analysis, step 4, human HRF), but visual stimulation was modelled with one task regressor time-locked to the event onsets (duration of 3 s), contrasted against the visual baseline (contrast faces > baseline).
Human HRF + TDD. The analysis was identical to above (exploration and estimation analysis, step 4, human HRF + TDD), using the task regressor from experiment 2 (validation analysis, step 3, human HRF), but resulted in two informed basis sets, as this task contained two separate runs. We first calculated the mean of each parameter estimate across both runs and then, as above, combined all three averaged regressors into one "derivative boost (H)" regressor per dog. Dog HRF. We defined the same first-level model as described above (validation analysis, step 3, human HRF), but the task regressor was convolved with the newly estimated dog HRF.
Step 4: Group-level activation comparison. This step was performed based on the first-level results from experiment 2 but was otherwise identical to above (exploration and estimation analysis, step 5).

Data and code availability statement
Unthresholded statistical maps from the exploratory and estimation analysis, the Matlab-based code to calculate the HRF model fits, the FIR data for both experiments, and a spm_my_defaults.m-script containing the dog HRF parameters have been added as supplementary material.

Exploration and estimation analysis: Flickering checkerboard experiment (experiment 1)
FIR model and dog HRF estimation. To investigate the time course of the BOLD response in dogs, we used a model-free analysis (FIR model, exploration and estimation analysis, step 1). Results suggested a temporal difference between the standard (canonical) human HRF and the average response in our canine sample. Visual inspection of the results revealed an earlier peak after visual stimulation onset compared to convolution using a human HRF and, consequently, an earlier decline and return to baseline (Fig. 4 A). The estimation based on the FIR data (exploration and estimation analysis, step 2) therefore resulted in the following parameter changes relative to the (canonical) human HRF: a shorter response delay (p1 = 4.3 s), a shorter delay of the undershoot (p2 = 6.6 s), and a lower ratio of the response to the undershoot (p5 = 3). This newly estimated dog HRF peaked around 2-3 s earlier compared to the human HRF (Fig. 4 A).
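To make the reported parameters concrete, the dog HRF can be reconstructed from SPM's double-gamma form and compared to the canonical human HRF. The code below is an illustrative Python translation of spm_hrf's parametrization (the original is MATLAB), plugging in the estimates p1 = 4.3, p2 = 6.6, p5 = 3 from above:

```python
import numpy as np
from scipy.stats import gamma

def spm_style_hrf(p1=6.0, p2=16.0, p5=6.0, p6=0.0, p3=1.0, p4=1.0,
                  length=32.0, dt=0.1):
    """Double-gamma HRF in SPM's parametrization, unit peak amplitude."""
    t = np.arange(0.0, length, dt)
    h = (gamma.pdf(t - p6, p1 / p3, scale=p3)
         - gamma.pdf(t - p6, p2 / p4, scale=p4) / p5)
    return t, h / np.abs(h).max()

t, human_hrf = spm_style_hrf()                       # canonical human HRF
_, dog_hrf = spm_style_hrf(p1=4.3, p2=6.6, p5=3.0)   # tailored dog HRF

peak_human = t[np.argmax(human_hrf)]   # close to 5 s
peak_dog = t[np.argmax(dog_hrf)]       # earlier than the human peak
```

On this grid, the reconstructed dog HRF peaks roughly 2 s earlier than the human HRF, consistent with the 2-3 s shift described for the empirical curves.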

Determining the HRF model fits
R²-statistics of both GLMs, calculated individually for each dog (exploration and estimation analysis, step 3), revealed a better model fit of the average time course of activation when using the dog HRF, with a mean R² of 0.64 (SD = 0.21), almost doubling the fit in comparison to the model using the human HRF.

Table 1 notes: Effects were tested for significance with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05, FWE-corrected for multiple comparisons. Critical cluster sizes (k) are reported along with the results for each one-sample t-test, each haemodynamic response function (HRF) model, and the paired-sample t-tests between all HRF models. The first local maximum within each cluster is reported; coordinates represent the location of peak voxels and refer to the canine breed-averaged template (Nitzsche et al., 2019); this template, along with another single-dog template (Czeibert et al., 2019), served to determine the anatomical nomenclature for all tables. TDD, time and dispersion derivatives; O, occipital lobe; T, temporal lobe; L, left; R, right.

Table 2 notes: Effects were tested for significance with a cluster-defining threshold of p < 0.001 and a cluster probability of p < 0.05, FWE-corrected for multiple comparisons. Critical cluster sizes (k) are reported along with the results for each one-sample t-test per haemodynamic response function (HRF) model. For the one-sample t-test based on the human HRF GLM, no cluster survived the threshold. None of the paired-sample t-tests across HRF models survived the critical cluster threshold (k); therefore, the significance level was lowered to p < 0.005 with an arbitrary cluster threshold (k*) of 10 voxels. One paired-sample t-test (human HRF vs. human HRF + TDD) did not survive the lowered threshold, nor did the contrasts human HRF > dog HRF and human HRF + TDD > dog HRF. The first local maximum within each cluster is reported; coordinates represent the location of peak voxels and refer to the canine breed-averaged template (Nitzsche et al., 2019); this template, along with another single-dog template (Czeibert et al., 2019), served to determine the anatomical nomenclature for all tables. O, occipital lobe; T, temporal lobe; L, left; R, right.

Visual activation: Human HRF / human HRF + TDD
Expanding to whole-brain comparisons (exploration and estimation analysis, step 5), we performed standard whole-brain GLM analyses, similar to other canine neuroimaging papers (e.g., Andics et al., 2016; Cuaya et al., 2016), and localized visual processing areas by convolving the task regressor with the human HRF (exploration and estimation analysis, step 4, human HRF). Results revealed increased activation within the occipital lobe (V1) and within the left hippocampal area (Table 1, section "human HRF"). When accounting for HRF variability (exploration and estimation analysis, step 4, human HRF + TDD), we found similar activation within V1 during visual stimulation (but only about half the size compared to the human HRF), as well as within the right dorsal temporal lobe (Table 1, section "human HRF + TDD"). Additionally, the V1 clusters stemming from both analyses expanded from the occipital lobe to portions of the parietal and right temporal lobe (Fig. 5 A). Thus, analyses based on the standard human HRF, with and without accounting for its variability, yielded comparable activation increases in V1 during visual stimulation.

Visual activation: Dog HRF
We now report in more detail the brain areas revealing significant activation on a group level using the tailored dog HRF, since it significantly improved the model fit in V1 compared to the human HRF (exploration and estimation analysis, steps 2-3). We observed five clusters with stronger activation during visual stimulation compared to baseline (Table 1, section "dog HRF"; Fig. 5), more than double the number of significant clusters, with larger cluster sizes, compared to the remaining models (Table 1). The largest cluster expanded from V1 to bilateral parietal and temporal lobe regions, followed by smaller clusters in the right temporal lobe (see Table 1 and Fig. 6 for details).

Activation differences during visual stimulation across HRF models
To test for whole-brain differences in activation, we compared the human HRF, human HRF + TDD, and dog HRF GLMs using paired-sample t-tests (contrast checkerboard > visual baseline; exploration and estimation analysis, step 5). Results revealed significant clusters for all models. However, the analysis using the dog HRF was the only one that resulted in significant differences in activation in both V1 and bilateral temporal regions (dog HRF > human HRF); the human HRF + TDD increased activation only in a caudal V1 region (human HRF + TDD > human HRF; human HRF + TDD > dog HRF). In sum, the human HRF proved to be the least sensitive model (see Fig. 5B and Table 1 for details).

Validation: Face processing experiment (experiment 2)
Next, we validated our novel results in an independent data set and compared all three HRF models.

FIR model and comparison of HRF model fits
Visual inspection of the average activation time course based on the FIR model (validation analysis, step 4) confirmed the results of the exploratory and estimation analysis, as it again revealed an earlier BOLD signal peak (see Fig. 4B). In line with the exploratory results, comparing the average HRF model fits (i.e., R²-statistics) for both runs separately (validation analysis, step 5) revealed that the dog HRF yielded an R² roughly eight times higher for the first run (human HRF: mean R² = 0.06, SD = 0.11; dog HRF: mean R² = 0.50, SD = 0.31) and almost three times higher for the second run (human HRF: mean R² = 0.15, SD = 0.22; dog HRF: mean R² = 0.44, SD = 0.31). Again, Wilcoxon signed-rank tests indicated that the dog HRF model fit was significantly higher than that of the human HRF in both runs (run 1: z = 100, p = 0.001; run 2: z = 67, p = 0.012), confirming the advantage of using the tailored dog HRF in a data set independent of the dog HRF estimation.
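The paired, non-parametric comparison of per-dog model fits can be reproduced with `scipy.stats.wilcoxon`. The values below are hypothetical stand-ins (random numbers, not the R² values reported above); only the testing logic mirrors the analysis.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-dog R^2 values for the two HRF models; illustrative
# stand-ins, not the values reported in the paper.
r2_human = rng.uniform(0.0, 0.3, size=15)
r2_dog = r2_human + rng.uniform(0.05, 0.4, size=15)  # dog HRF fits better here

# Paired, non-parametric, one-sided comparison of model fits
stat, p = wilcoxon(r2_dog, r2_human, alternative='greater')
print(f"W = {stat:.0f}, p = {p:.5f}")
```

A Wilcoxon signed-rank test is the natural choice here because the per-subject R² differences are paired and need not be normally distributed, especially with the small samples typical of canine fMRI.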

Activation during visual stimulation across HRF models
In line with the results from the exploratory and estimation analysis, modelling the dog HRF resulted in the highest number of activated clusters, with cluster sizes increasing twelve-fold in comparison to the model including the human HRF + TDD. Furthermore, the dog HRF was the only model that detected activation beyond V1, in bilateral temporal regions, while none of these clusters withstood the cluster threshold correction when modelling the human HRF (see Table 2 and Fig. 7A for details; validation analysis, step 5). Performing paired-sample t-tests between the dog HRF, human HRF, and human HRF + TDD (validation analysis, step 5) resulted in no significant differences at the initial strict threshold, but lowering the threshold to p = 0.005 uncorrected indicated that using the dog HRF improved the sensitivity to detect visual processing areas (see Table 2 and Fig. 7B for details), thus confirming the exploratory results.

Fig. 4. Visual comparison reveals an earlier peak of the BOLD signal in dogs than when modelled using the canonical human haemodynamic response function (HRF) for both independent data sets, leading to the estimation of a tailored dog HRF. After calculating the finite impulse response (FIR) models, we extracted individual response estimates from the maximal response in the primary visual cortex (V1) using coordinates from (A) the standard human HRF for the flickering checkerboard experiment (exploration and estimation analysis, step 5; x = -1, y = -29, z = 16, 4 mm) and (B) the standard human HRF along with time and dispersion derivatives for the face processing experiment (validation analysis, step 4; x = -1, y = -29, z = 19, 4 mm). Based on the extracted data, we calculated the averaged BOLD signal time course for the visual stimulation across

Discussion
The aim of this study was to explore whether the typically used human haemodynamic response function (HRF) fits the average BOLD signal in dogs, and whether detection power for canine neuroimaging data can be improved using a tailored dog HRF. Our results indicate that the human HRF does not fit the average BOLD signal in dogs. We provide initial evidence that the average time course of the BOLD signal in the primary visual cortex (V1) of dogs peaks 2-3 s earlier than the human HRF, and that the model fit for V1 can be significantly improved using a tailored dog HRF. Expanding to whole-brain activation, the dog HRF again resulted in increased detection power.
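Conceptually, estimating a tailored HRF amounts to fitting the free parameters of an HRF model to the empirical (FIR-derived) time course. The sketch below fits a simplified double-gamma to a synthetic V1-like signal that peaks earlier than the human default; the "empirical" data and starting values are hypothetical stand-ins, and the authors' actual estimation code and dog HRF parameters are the ones provided in their repository.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def double_gamma(t, peak_delay, undershoot_delay, ratio):
    """Simplified double-gamma HRF (unit dispersions), amplitude-normalized."""
    h = gamma.pdf(t, peak_delay) - gamma.pdf(t, undershoot_delay) / ratio
    return h / np.abs(h).max()

t = np.arange(0.0, 20.0, 1.0)

# Synthetic 'empirical' V1 time course peaking around 3 s: a hypothetical
# stand-in for the FIR estimates, not the paper's data.
rng = np.random.default_rng(1)
empirical = double_gamma(t, 4.0, 10.0, 6.0) + 0.02 * rng.normal(size=t.size)

# Fit the HRF shape parameters, starting from the human defaults
# (peak delay 6 s, undershoot delay 16 s, peak/undershoot ratio 6)
popt, _ = curve_fit(double_gamma, t, empirical,
                    p0=[6.0, 16.0, 6.0],
                    bounds=([1.0, 5.0, 1.0], [12.0, 30.0, 20.0]))
peak_delay_fit = popt[0]   # should end up well below the human default of 6
```

Because the double-gamma with shape parameter a and unit scale peaks at a - 1 seconds, a fitted peak delay near 4 corresponds to a BOLD peak near 3 s, i.e. roughly 2 s earlier than the canonical human response.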
We used two independent visual experiments, serving as the exploration and estimation data set (flickering checkerboard experiment, experiment 1) and as the independent validation data set (face processing experiment, experiment 2). We estimated a tailored dog HRF based on the empirical data from experiment 1, since the V1 BOLD signal indicated an earlier peak compared to the human HRF. Following this, we were able to confirm the earlier peak when investigating the V1 BOLD signal in the independent experiment 2. Further, the model fit (i.e., R²-statistics) for V1 significantly improved (and almost doubled) in experiment 1, and was between eight (run 1) and almost three (run 2) times higher in experiment 2 when compared to the human HRF. Expanding to whole-brain comparisons, our results provide evidence that the human HRF, compared to the tailored dog HRF, resulted in significantly less activation being detected. Finally, adding time and dispersion derivatives (TDD) led to significantly increased activation in both experiments, but only within occipital areas. For experiment 1, the human HRF + TDD even led to increased V1 signal compared to the dog HRF. Overall, however, the human HRF + TDD was less sensitive in detecting secondary visual areas, resulting in fewer significant clusters, while the dog HRF detected both primary and secondary visual areas in both experiments. These are important findings considering the small sample sizes of most canine neuroimaging studies. In contrast to human studies, it is more difficult to increase power by increasing the sample size, primarily due to the limited availability of canine participants and the extensive dog training required prior to MR scanning. Thus, increasing the model fit of the HRF to the average BOLD signal time course is an important alternative tool to further increase power and, therefore, the reproducibility of future studies.
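Time and dispersion derivatives are, in effect, finite-difference approximations of how the HRF changes with a small shift in onset latency and in peak dispersion; adding them as regressors lets the GLM absorb modest timing mismatches. Below is a minimal numerical sketch of this idea, not SPM's exact implementation (which differs in scaling and orthogonalization details).

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak_delay=6.0, peak_disp=1.0, undershoot_delay=16.0, ratio=6.0):
    """Double-gamma HRF; peak_disp controls the width of the positive peak."""
    h = (gamma.pdf(t, peak_delay / peak_disp, scale=peak_disp)
         - gamma.pdf(t, undershoot_delay) / ratio)
    return h / h.sum()

t = np.arange(0, 32, 0.5)
h0 = hrf(t)

# Temporal derivative: finite difference for a 1 s shift in onset latency
dt = 1.0
temporal_deriv = (h0 - hrf(t + dt)) / dt

# Dispersion derivative: finite difference in the peak dispersion parameter
dd = 0.01
dispersion_deriv = (h0 - hrf(t, peak_disp=1.0 + dd)) / dd
```

Because these derivatives only capture small perturbations around the canonical shape, they cannot fully compensate for a peak that arrives 2-3 s early, which is consistent with the TDD model underperforming the tailored dog HRF here.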
Fig. 4 (continued). ...trials and dogs for both (A) the flickering checkerboard experiment and (B) the face processing experiment (both runs separately). The dog HRF was estimated based on the average BOLD signal time course from the flickering checkerboard experiment (exploration and estimation analysis, step 2), while the face processing experiment served as an independent test data set to validate the results derived from the exploration and estimation analysis. The tailored dog HRF and the human HRF are plotted in addition to the extracted BOLD signal time course to display the fit of the HRF models for both experiments. For illustration purposes, the dog and human HRFs were scaled by the parameter estimates (arbitrary units, a.u.) from the respective GLMs. SEM, standard error of the mean.

Fig. 5. Flickering checkerboard experiment: Comparison of brain activation across haemodynamic response functions (HRF) illustrates increased detection performance using a tailored dog HRF in both primary and higher-order visual processing areas (exploratory and estimation analysis). Results are displayed at p < 0.05, FWE-corrected at cluster level, using a cluster-defining threshold of p < .001 (see Table 1), overlaid onto the mean structural image. Coordinates refer to the canine breed-averaged atlas (Nitzsche et al., 2019). The first axial plane (A, first row, left) shows the anatomical locations caudal (C), rostral (R), and left hemisphere (LH); all axial planes displayed have the same orientation. The sagittal plane displays the cut coordinates and the anatomical locations dorsal (D) and ventral (V). (A) Group-based activation for visual stimulation > baseline (one-sample t-tests) indicates that the analysis using the dog HRF shows the highest sensitivity for the canine neuroimaging data, with the analysis using the human HRF resulting in smaller activation clusters, and the analysis using the human HRF combined with time and dispersion derivatives resulting in even smaller activation clusters. (B) Comparisons of visual stimulation > visual baseline contrasts between all three HRF models (paired-sample t-tests) resulted in similar significant activation changes in the occipital lobe for the human HRF + time and dispersion derivatives (TDD) model in contrast to both the human and the dog HRF. Comparing the human HRF and dog HRF revealed stronger activation in the primary visual cortex and temporal regions for the dog HRF compared to the human HRF, and activation in the insular cortex for the reverse contrast (not depicted; see Table 1 for details).

Our findings are consistent with research in rodents, which suggested that using the human HRF degrades the model fit and, thus, the overall detection performance (Lambers et al., 2020). As in our sample, Lambers and colleagues (2020) observed an earlier peak of the average BOLD signal in rats, proposing differences in brain and vessel size, smaller distances within the brain, or a higher capillary and venous flow velocity as potential reasons for the observed patterns (see also De Zwart et al., 2005; Silva et al., 2007). Absolute brain size cannot sufficiently explain why the human HRF does not fit the average BOLD signal in dogs: although dog brains are smaller in absolute size than human and, on average, macaque brains (e.g., DeFelipe, 2011; Yáñez et al., 2005), the dog breeds in our sample seem to have a brain size similar to that of rhesus macaques (Horschler et al., 2019). However, relative size (brain size/body weight) could potentially explain our findings, since the dog brains in our sample (just as rodent brains) seem to have a smaller relative size than those of humans and rhesus macaques (e.g., Baumann et al., 2010; Logothetis et al., 2001 for average body weight in macaques; Roth & Dicke, 2005 for review). Although evolutionary relatedness also seems to correlate with the fit of the human HRF across species, the underlying neurovascular mechanisms remain somewhat unclear. Additionally, skull shapes and sizes vary within the dog species (i.e., across breeds), resulting in substantial variance in underlying neuroanatomy (Hecht et al., 2019; Horschler et al., 2019; Schoenebeck & Ostrander, 2013). Since our sample was rather homogeneous (70% Border Collies; all mesocephalic skull shapes) and small, we did not have enough variance to test for potential differences between breeds, skull shapes, or sizes. Further, the human HRF parameters provided by SPM have been estimated based on
Fig. 6. Increasing the detection power by using the tailored dog haemodynamic response function (HRF) in the flickering checkerboard experiment allows a detailed description of primary and higher-order visual processing areas. (A) Visual stimulation against baseline elicited activation in a large region of the occipital lobe, peaking at the rostral occipital lobe and expanding to the caudal parietal lobe and bilateral dorsal portions of the temporal lobe. In addition, activation in bilateral hippocampal areas increased in response to visual stimulation compared to baseline. Results are displayed at p < 0.05, FWE-corrected at cluster level, using a cluster-defining threshold of p < .001 (see Table 1, section "dog HRF"), plotted onto the mean structural image. Atlas maps, coordinates, and the anatomical nomenclature refer to the canine breed-averaged atlas (Nitzsche et al., 2019) and additional normalized labels from a single-dog-based template (Czeibert et al., 2019). Images are accompanied by the anatomical locations caudal (C), rostral (R), dorsal (D), ventral (V), left hemisphere (LH), and right hemisphere (RH). (B) For easier interpretation of the activated anatomical structures, blue-shaded outlines of anatomical regions are displayed together with contours of the activated clusters shown in Panel A.
Although it has never been tested, 3 T or higher-field MR could potentially influence BOLD signal measurements (i.e., increased sensitivity to microvasculature). Reviewing the published literature, dog fMRI labs working with unrestrained and fully awake dogs have so far used 3 T MR scanners. Thus, in terms of magnetic field strength, our dog HRF estimate should be comparable to other canine neuroimaging data. Additionally, differences in heart rate (i.e., Chang, Cunningham, & Glover, 2009) and breathing rate (i.e., Birn, Murphy, Handwerker, & Bandettini, 2009), as well as the distance to draining veins (i.e., Bianciardi, Fukunaga, van Gelderen, de Zwart, & Duyn, 2011; Krings, Erberich, Roessler, Reul, & Thron, 1999; Turner, 2002), can also modulate the BOLD signal time course. In line with the observed earlier peak of the BOLD signal in dogs, Manzo, Ootaki, Ootaki, Kamohara, & Fukamachi (2009) report a higher heart rate in dogs (with a body weight similar to that of our sample) compared to humans. Unfortunately, due to the lack of physiological measurements, we cannot test the influence of heart and breathing rate, or the distance to draining veins, in the present sample. Additionally, besides body weight, heart rate measurements were also shown to correlate with factors such as age and breed (Hezzell, Humm, Dennis, Agee, & Boswood, 2013). Here, we estimated the dog HRF based on dogs ranging between 4 and 11 years of age (Mdn = 8), covering a wide range of ages. Taken together, the average BOLD signal might deviate from the tailored dog HRF across breeds and at different body weights. This could be accounted for by adding time and dispersion derivatives to the dog HRF in future studies.
Our results do not confirm earlier reports of an average BOLD signal time course in dogs similar to that in humans (Berns et al., 2012). Unlike Berns et al. (2012), our results suggest that the human HRF does not optimally fit the average time course of the BOLD signal in dogs. However, Berns et al. (2012) studied the subcortical caudate nucleus, while we focused on the cortical BOLD signal in dogs, extracting data from V1. Previous research in other species (i.e., humans) showed that the average BOLD signal time course differs between cortical and subcortical regions (Handwerker et al., 2004; Lewis, Setsompop, Rosen, & Polimeni, 2018). Thus, our findings do not necessarily contradict the results from Berns et al. (2012) but might be related to the different areas analysed, as well as their neural and vascular characteristics.
While it is possible to accurately estimate HRF parameters from block designs (Shan et al., 2014), studies investigating the shape of the BOLD signal time course typically employ an event-related design (i.e., Friston, Jezzard, & Turner, 1994; Handwerker, Ollinger, & D'Esposito, 2004; Lindquist, Meng Loh, Atlas, & Wager, 2009). A disadvantage of experiment 1 is its fixed on-off cycle (10 s): dogs might have anticipated the next stimulus onset, and we might have missed a possible BOLD signal undershoot continuing into the next block (i.e., longer than the 10 s baseline), which altogether could have affected the observed BOLD signal shape. Nevertheless, we chose experiment 1 for estimating the HRF because of the robustness of the design itself and because of the salient visual stimulation (flickering checkerboard). Such stimulation is typically used to elicit robust activation in our V1 target region (i.e., Moradi et al., 2003; Sladky et al., 2011 for examples in humans). This allowed us to achieve increased detection power as well as a reliable dog HRF estimate. We then validated our results using the event-related face processing experiment (experiment 2; jittered baseline). This confirmed our initial results, again yielding a better model fit for the dog HRF compared to the human HRF. In addition, we estimated HRF parameters based on the face processing experiment, which produced similar results and numerically almost identical model fits. Although flipping the exploration and validation experiments led to comparable results, future research should employ block and event-related designs with a jittered resting-state period, combining the strengths of both experiments (i.e., short events of flickering checkerboards with a jittered baseline).
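For reference, the FIR model used to inspect the BOLD time course makes no shape assumption at all: it estimates one amplitude per post-stimulus time bin. The snippet below is a compact sketch of such a design matrix, with illustrative timing parameters (TR, run length, and bin width are assumptions, not the paper's acquisition values).

```python
import numpy as np

def fir_design(onsets, n_scans, tr, n_bins, bin_width):
    """Build a finite impulse response (FIR) design matrix:
    one column per post-stimulus time bin, no assumed HRF shape."""
    X = np.zeros((n_scans, n_bins))
    scan_times = np.arange(n_scans) * tr
    for onset in onsets:
        for b in range(n_bins):
            lo = onset + b * bin_width
            hi = lo + bin_width
            X[(scan_times >= lo) & (scan_times < hi), b] = 1.0
    return X

# Illustrative timing: 10 s on / 10 s off blocks, TR = 1 s, 120 scans,
# 20 one-second bins covering each full on-off cycle
X = fir_design(onsets=np.arange(0, 120, 20), n_scans=120, tr=1.0,
               n_bins=20, bin_width=1.0)
```

Regressing the voxel time series on these columns yields one beta per time bin, which, averaged over trials and subjects, gives the model-free response curves shown in Fig. 4.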
Exploring their visual environment, humans and non-human animals perform rapid eye movements (saccades) ranging from small to large, depending on the visual stimulus. These saccades could have influenced the observed V1 BOLD signal time course. First, if the dogs gazed away from the visual stimulus, the V1 BOLD signal would have decreased. We did not record the dogs' eye gaze, since the dogs were not trained to perform the eye tracker calibration inside the MR scanner (for information on the extensive training procedure for a setting outside the scanner, see Karl, Boch, Virányi, Lamm, & Huber, 2019). However, we monitored their eye movements via a camera from an eye tracker throughout data collection to ensure that the dogs were awake and generally attending to the task. If a dog had consistently looked away from the MR screen (looking to the side, top, or bottom for extended periods), we would have stopped the data collection. In addition, we chose experiment 1 as the exploration and estimation data set because the flickering checkerboard expanded over the entire MR screen and the MR screen itself covered the scanner bore, making it almost impossible to look away from the visual stimulation. However, frequent saccades across a visual stimulus can also lead to a decreased V1 signal (saccadic suppression; i.e., Sylvester, Haynes, & Rees, 2005, or Wurtz, 2008 for review) or a pre-saccadic activity increase (~100 ms, so far detected with single-cell recordings in monkeys, i.e., Supèr, Van Der Togt, Spekreijse, & Lamme, 2004). Especially in the flickering checkerboard experiment, the dogs typically did not make frequent saccades, since no moving object or agent was present that they could follow with their gaze. The face processing experiment contained dynamic stimuli (i.e., morphed faces), but we positioned the visual stimuli, with a size of 500 × 500 pixels, in the centre of the MR screen and the dogs' visual field to ensure that the dogs could observe the entire stimulus without performing frequent eye movements. Future canine neuroimaging studies using visual stimuli could also explore the possibility of incorporating an eye-tracking protocol to record eye gaze data to further strengthen their results.

Fig. 7. Face processing experiment: Comparison of brain activation in an independent data set confirms increased detection performance using a tailored dog haemodynamic response function (HRF) compared to other HRF models (validation analysis). For display purposes, results are shown at p < .005, uncorrected (for results at p < 0.05, FWE-corrected at cluster level, with a cluster-defining threshold of p < .001, see Table 2), on the mean structural image. Coordinates refer to the canine breed-averaged atlas (Nitzsche et al., 2019). The first axial plane (A, first row, left) shows the anatomical locations caudal (C), rostral (R), and left hemisphere (LH); all axial planes displayed have the same orientation. The sagittal plane displays the cut coordinates and the anatomical locations dorsal (D) and ventral (V). (A) Group-based activation for visual stimulation > baseline (one-sample t-tests) indicates that the human HRF results in almost no activation, the human HRF combined with time and dispersion derivatives (TDD) results in larger activation clusters, and, again, that the dog HRF shows the highest sensitivity for the canine neuroimaging data. (B) Group comparisons of visual stimulation > visual baseline contrasts between all three HRF models (paired-sample t-tests) resulted in trends towards activation changes in temporal regions for the dog HRF in comparison to both the human HRF and the human HRF + TDD model (see Table 2 for detailed results).
Overall, our findings provide first evidence that the human HRF does not optimally fit the HRF observed in the visual cortex of dogs. Despite being based on two independent experiments allowing for cross-validation, this evidence should be treated as preliminary, awaiting independent validation in other samples, experimental paradigms, and brain regions. We hope that our approach will encourage future research to test the reproducibility and generalizability of our findings, and to explore whether it could help increase model fit and detection power in other canine fMRI data sets. For this reason, we adopted the established and recommended (Carp, 2012b; Nichols et al., 2017; Poldrack et al., 2008) standards from human neuroimaging analyses, provided a detailed description of our workflow and parameters, and made our imaging data and code openly available. Using a simple but salient sensory stimulation experiment also allowed quality assessment of our processing pipeline and helped us validate future changes to it, preventing potentially biased decisions. Additionally, a short (visual) localizer experiment can be used for dog training and for getting dogs accustomed to the experimental setup.
Transparent reporting also allows us to build on previous research and facilitates the comparison of results. Based on previous research (e.g., Aguirre et al., 2007; Langley & Grünbaum, 1890; Marquis, 1934; Uemura, 2015; Willis, Quinn, McDonell, Gati, Parent, et al., 2001; Willis, Quinn, McDonell, Gati, Partlow, et al., 2001; Wing & Smith, 1942), we are certain about the location of V1, but less is known about other higher-order visual association areas. Similar to the human and rhesus macaque visual systems (e.g., Orban, Van Essen, & Vanduffel, 2004; Tootell, Tsao, & Vanduffel, 2003 for comparative reviews), we found activation within the dorsal visual stream, extending from the occipital lobe to the caudal parietal lobe, and within the ventral stream, including bilateral regions in the temporal lobes, the bilateral hippocampus, and the caudal thalamus. We did not find significant activation in the lateral geniculate body (LGB): (1) given the small size of this region, detection might require smaller voxel sizes, or (2) differences in individual anatomy might have led to anatomical imprecision; atlases based on larger samples (Nitzsche et al., 2019: based on N = 16 dogs) could help disentangle this question. Unfortunately, there is still no agreement on a shared template space; publicly available templates (Czeibert et al., 2019; Datta et al., 2012; Liu et al., 2020; Nitzsche et al., 2019) are not in the same space and vary in orientation and origin, so coordinates from one template cannot be applied to another. Taken together, these findings can serve as a next step in investigating the visual system of dogs, hopefully aiding future studies of the canine visual system or studies focusing on visual paradigms (e.g., face processing: Cuaya et al., 2016; Dilks et al., 2015; Hernández-Pérez et al., 2018; Szabó et al., 2020; Thompkins et al., 2018).

Conclusions
We present first evidence that the average visual-cortical BOLD signal in dogs peaks earlier than predicted by the human HRF model. Consequently, the significantly lower model fit suggests that analysing canine neuroimaging data using the human HRF leads to a loss of power that cannot be compensated for by adding time and dispersion derivatives. We provide a first estimate of the cortical dog HRF, resulting in significantly increased activation in comparison to the human HRF, and validated our results using an independent task. We hope that our findings spark new research that might challenge or add to our results. To increase transparency, we applied open-science practices throughout, and hope this will motivate and facilitate future investigations by other labs, leading to a joint effort to improve detection power in canine neuroimaging research.

Declaration of Competing Interest
The authors declare no competing financial interests.

Fig. 2 .
Fig. 2. Schematic description of the tailored data processing workflow for the canine neuroimaging data, including (A) functional images and (B) the structural image. For the first-level analysis (step 10, first level (GLMs)), functional data are masked using anatomical boundaries (normalized binary mask). Illustrative structural and functional images as well as the binary mask were derived from one dog in the sample; tissue probability maps (TPMs) were taken from the canine breed-averaged atlas (Nitzsche et al., 2019). Numbers, in bold, describe the sequence of processing steps. Est., estimate; Res., resliced; GLM, general linear model.

Fig. 3 .
Fig. 3. Overview of analyses underpinning the exploration of the average V1 BOLD signal in dogs, the estimation of the tailored dog haemodynamic response function (HRF), and the validation of the HRF in a second independent data set. (A) Data from the flickering checkerboard experiment served for the exploratory and estimation analysis (1) to extract the average V1 BOLD signal in dogs and visually compare it to the canonical human HRF model (i.e., the default HRF parameters provided by SPM12) using a finite impulse response (FIR) model, (2) to estimate a tailored dog HRF based on the empirical data, and (3) to compare model fits of the human and dog HRF in the visual cortex. On the whole-brain level, (4) we then performed first-level analyses using the human HRF, the human HRF along with time and dispersion derivatives (TDD), and the tailored dog HRF in order to (5) perform whole-brain group comparisons using one-sample and paired-sample t-tests across HRF models. (B) Results from (A) were then validated using the data from the face processing experiment as an independent validation data set. All analysis steps were identical to the above, except for the dog HRF estimation. GLM, general linear model; BOLD, blood oxygenation level dependent.

CRediT authorship contribution statement: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing - original draft, Writing - review & editing, Visualization, Project administration. Sabrina Karl: Investigation, Writing - review & editing. Ronald Sladky: Methodology, Formal analysis, Software, Writing - review & editing. Ludwig Huber: Resources, Writing - review & editing, Supervision, Funding acquisition. Claus Lamm: Conceptualization, Methodology, Resources, Writing - review & editing, Supervision, Funding acquisition. Isabella C. Wagner: Conceptualization, Methodology, Writing - original draft, Writing - review & editing, Supervision.

Table 1
Flickering checkerboard experiment: Task-related activation during visual stimulation

Table 2
Face processing experiment: Task-related activation during visual stimulation