NiMARE: Neuroimaging Meta-Analysis Research Environment

We present NiMARE (Neuroimaging Meta-Analysis Research Environment; RRID:SCR_017398), a Python library for neuroimaging meta-analyses and meta-analysis-related analyses. NiMARE is an open source, collaboratively-developed package that implements a range of meta-analytic algorithms, including coordinate- and image-based meta-analyses, automated annotation, functional decoding, and meta-analytic coactivation modeling. By consolidating meta-analytic methods under a common library and syntax, NiMARE makes it straightforward for users to employ the most appropriate algorithm for a given analysis.


INTRODUCTION
We introduce NiMARE (Neuroimaging Meta-Analysis Research Environment), a Python package for analyzing meta-analytic neuroimaging data. NiMARE is a new library developed as a component in a burgeoning open-source meta-analytic ecosystem for neuroimaging data, which currently includes Neurosynth, NeuroVault, NeuroQuery, and PyMARE.
While several libraries already exist for neuroimaging meta-analysis, these libraries are generally algorithm-specific, and are provided in a range of very different user interfaces, languages, and licenses. This variability may prevent meta-analysts from using the most appropriate algorithm for a given analysis. Further, having multiple meta-analysis algorithms available in one library facilitates direct comparisons of methods. With NiMARE, we consolidate meta-analytic algorithms from a range of libraries and publications, and provide a common Python syntax and well-documented application programming interfaces. Additionally, NiMARE is a collaboratively-developed open source package, enabling researchers to contribute new methods not included in the current version.

Figure caption: (A) Coordinate-Based Meta-Analysis (CBMA) is performed by creating a NiMARE Dataset with coordinate information stored in the Dataset.coordinates attribute, which is then used in a CBMA Estimator. This produces a MetaResult object with statistical maps, which can then be used in a Corrector object for multiple comparisons correction. Once the Corrector has been fitted, it will produce a corrected version of the MetaResult object, containing updated statistical maps. (B) Image-Based Meta-Analysis (IBMA) operates similarly to CBMA, except that IBMA Estimators use statistical maps stored in the Dataset.images attribute. (C) Meta-Analytic Coactivation Modeling (MACM) uses a region of interest to select coordinate-based studies within a Dataset, after which the standard CBMA workflow is performed. (D) Automated annotation infers labels from textual (and sometimes other) data associated with the Dataset, as stored in the Dataset.texts attribute. The annotation functions produce labels which may be integrated into the Dataset as the Dataset.annotations attribute. (E) Functional decoding of continuous statistical maps operates similarly to discrete decoding, in that the input Dataset must have both coordinates and annotations attributes. The Dataset, along with an unthresholded statistical map to decode, is provided to the Decoder object, which then outputs measures of similarity or associativeness with each label. (F) Functional decoding of discrete inputs applies a selection criterion to a Dataset with both coordinates and annotations attributes, using a Decoder object. The decoding algorithm will output measures of similarity or associativeness with each label in the annotations.

CC BY 4.0: © Taylor Salo et al.

J U P Y T E R B O O K
NiMARE's shared interfaces enable users to employ the most appropriate algorithm for a given question without introducing a steep learning curve. This approach is modeled on the widely-used scikit-learn package,2,3 which implements a large number of machine learning algorithms, all with simple, consistent interfaces: regardless of the algorithm employed, data should be in the same format and the same class methods should be called to fit and/or generate predictions from the model.

To this end, we have adopted an object-oriented approach to NiMARE's core API that organizes tools based on the type of inputs and outputs they operate over. The key data structure is the Dataset class, which stores a range of neuroimaging data amenable to various forms of meta-analysis. There are two main types of tools that operate on a Dataset. Transformer classes, as their name suggests, perform some transformation on a Dataset: they take a Dataset instance as input and return a modified version of that Dataset instance as output (for example, with newly generated maps stored within the object). Estimator classes apply a meta-analytic algorithm to a Dataset and return a set of statistical images stored in a MetaResult container class. The key methods supported by each of these base classes, as well as the main arguments to those methods, are consistent throughout the hierarchy (e.g., all Transformer classes must implement a transform() method), minimizing the learning curve and ensuring a high degree of predictability for users.
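The shape of this API can be sketched in a few lines of pure Python. The classes below are illustrative stand-ins rather than NiMARE's actual implementations; only the Dataset, Transformer, Estimator, and MetaResult names and the transform()/fit() conventions come from the text above, and the two concrete classes are hypothetical.

```python
from abc import ABC, abstractmethod


class Dataset:
    """Toy stand-in for nimare.dataset.Dataset: holds per-study data."""

    def __init__(self, coordinates):
        self.coordinates = coordinates  # e.g., list of (x, y, z) peaks per study


class MetaResult:
    """Toy stand-in for the MetaResult container of statistical maps."""

    def __init__(self, maps):
        self.maps = maps  # dict of map name -> values


class Transformer(ABC):
    """All Transformers take a Dataset and return a modified Dataset."""

    @abstractmethod
    def transform(self, dataset):
        ...


class Estimator(ABC):
    """All Estimators take a Dataset and return a MetaResult."""

    @abstractmethod
    def fit(self, dataset):
        ...


class PeakCountTransformer(Transformer):
    """Hypothetical Transformer: annotates each study with its peak count."""

    def transform(self, dataset):
        new_dset = Dataset(dataset.coordinates)
        new_dset.n_peaks = [len(study) for study in dataset.coordinates]
        return new_dset


class MeanPeakEstimator(Estimator):
    """Hypothetical Estimator: 'meta-analyzes' by averaging peak counts."""

    def fit(self, dataset):
        counts = [len(study) for study in dataset.coordinates]
        return MetaResult({"mean_peaks": sum(counts) / len(counts)})


dset = Dataset([[(0, 0, 0), (10, 2, -4)], [(6, -8, 2)]])
dset = PeakCountTransformer().transform(dset)  # Dataset in, Dataset out
result = MeanPeakEstimator().fit(dset)         # Dataset in, MetaResult out
print(result.maps["mean_peaks"])               # 1.5
```

The point of the pattern is that swapping in a different Estimator changes only the class name, never the calling convention.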
In this paper, we describe NiMARE's aims, architecture, and the functionality it supports, including tools for database extraction, automated annotation, meta-analysis, meta-analytic coactivation modeling, and functional decoding. The text is accompanied by extensive code samples and results (also available online in the form of Python scripts at https://github.com/NBCLab/nimare-paper, with additional documentation at https://github.com/neurodatascience/meta_analysis_notebook), ensuring that users can follow along interactively.

NIMARE OVERVIEW
NiMARE is designed to be modular and object-oriented, with an interface that mimics popular Python libraries, including scikit-learn and nilearn. This standardized interface allows users to employ a wide range of meta-analytic algorithms without having to familiarize themselves with the idiosyncrasies of algorithm-specific tools, letting them use whichever method is most appropriate for a given research question with minimal overhead from switching methods. Additionally, NiMARE emphasizes citability, with references in the documentation and citable boilerplate text that can be copied directly into manuscripts, in order to ensure that the original algorithm developers are appropriately recognized.
NiMARE works with Python versions 3.6 and higher, and can easily be installed with pip. Its source code is housed and version controlled in a GitHub repository at https://github.com/neurostuff/NiMARE.
NiMARE is under continued active development, and we anticipate that the user-facing API (application programming interface) may change over time. Our emphasis in this paper is thus primarily on reviewing the functionality implemented in the package and illustrating the general interface, rather than on providing the detailed and static user guide that can be found within the package documentation.
Tools in NiMARE are organized into several modules, including nimare.meta, nimare.correct, nimare.annotate, nimare.decode, and nimare.workflows. In addition to these primary modules, there are several secondary modules for data wrangling and internal helper functions, including nimare.io, nimare.dataset, nimare.extract, nimare.stats, nimare.utils, and nimare.base. These modules are summarized in the Application Programming Interface section, as well as in Table 1.

Application programming interface
One of the principal goals of NiMARE is to implement a range of methods with a set of shared interfaces, enabling users to employ the most appropriate algorithm for a given analysis.

While BrainMap is a semi-closed resource (i.e., a collaboration agreement is required to access the full database), registered users may search the database using the Sleuth search tool in order to collect samples for meta-analyses. Sleuth can export these study collections as text files with coordinates. NiMARE provides a function to import data from Sleuth text files into the NiMARE Dataset format.
The function convert_sleuth_to_dataset() can be used to convert text files exported from Sleuth into NiMARE Datasets. Here, we convert two files from a previous publication by NiMARE contributors 18 into two separate Datasets.
We will also create a directory in which to save files that are generated within the book.

EXTERNAL META-ANALYTIC RESOURCES
Large-scale meta-analytic databases have made systematic meta-analyses of the neuroimaging literature possible. These databases combine results from neuroimaging studies, whether represented as coordinates of peak activations or unthresholded statistical images, with important study metadata, such as information about the samples acquired, stimuli used, analyses performed, and mental constructs putatively manipulated. The two most popular coordinate-based meta-analytic databases are BrainMap and Neurosynth, while the most popular image-based database is NeuroVault.
The studies archived in these databases may be either manually or automatically annotated, often with reference to a formal ontology or controlled vocabulary. Ontologies for cognitive neuroscience define what mental states or processes are postulated to be manipulated or measured in experiments, and may also include details of said experiments (e.g., the cognitive tasks employed), relationships between concepts (e.g., verbal working memory is a kind of working memory), and various other metadata that can be standardized and represented in a machine-readable form.12-14 Some of these ontologies are very well-defined, such as expert-generated taxonomies designed specifically to describe only certain aspects of experiments and the relationships between elements within the taxonomy, while others are more loosely defined, in some cases simply building a vocabulary based on which terms are commonly used in cognitive neuroscience articles.

BrainMap
BrainMap15-17 relies on expert annotators to label individual comparisons within studies according to its internally developed ontology, the BrainMap Taxonomy.15 While this approach is likely to be less noisy than an automated annotation method using article text or imaging results to predict content, it is also subject to a number of limitations. First, there are simply not enough annotators to keep up with the ever-expanding literature. Second, any development of the underlying ontology has the potential to leave the database outdated. For example, if a new label is added to the BrainMap Taxonomy, then each study in the full BrainMap database needs to be evaluated for that label before that label can be properly integrated into the database.

os.makedirs("../outputs/", exist_ok=True)
print(f"Files generated by the book will be saved to {os.path.abspath('../outputs/')}")

Files generated by the book will be saved to /Users/taylor/Documents/nbc/nimare-paper/outputs

from nimare import io

sleuth_dset1 = io.convert_sleuth_to_dataset(
    os.path.join(data_path, "contrast-CannabisMinusControl_space-talairach_sleuth.txt")
)
sleuth_dset2 = io.convert_sleuth_to_dataset(
    os.path.join(data_path, "contrast-ControlMinusCannabis_space-talairach_sleuth.txt")
)
print(sleuth_dset1)
print(sleuth_dset2)

# Save the Datasets to files for future use
sleuth_dset1.save(os.path.join(out_dir, "sleuth_dset1.pkl.gz"))
sleuth_dset2.save(os.path.join(out_dir, "sleuth_dset2.pkl.gz"))

Dataset(41 experiments, space='ale_2mm')
Dataset(41 experiments, space='ale_2mm')

Neurosynth
Neurosynth19 uses a combination of web scraping and text mining to automatically harvest neuroimaging studies from the literature and to annotate them based on term frequency within article abstracts. As a consequence of its relatively crude automated approach, Neurosynth has its own set of limitations. First, Neurosynth is unable to delineate individual comparisons within studies, and consequently uses the entire paper as its unit of measurement, unlike BrainMap. This risks conflating directly contrasted comparisons (e.g., A>B and B>A), as well as comparisons which have no relation to one another. Second, coordinate extraction and annotation are noisy. Third, annotations automatically performed by Neurosynth are also subject to error, although the reasons behind this are more nuanced and will be discussed later in this paper. Given Neurosynth's limitations, we recommend that it be used for casual, exploratory meta-analyses rather than for publication-quality analyses. Nevertheless, while individual meta-analyses should not be published from Neurosynth, many derivative analyses have been performed and published (e.g.,20-23). As evidence of its utility, Neurosynth has been used to define a priori regions of interest (e.g.,24-26) or perform meta-analytic functional decoding (e.g.,27-29) in many first-order (rather than meta-analytic) fMRI studies.

Converting the large Neurosynth and NeuroQuery Datasets to NiMARE Dataset objects can be a very memory-intensive process. For the sake of this book, we show how to perform the conversions below, but actually load and use pre-converted Datasets.
Here, we show code that would download the Neurosynth database from where it is stored (https://github.com/neurosynth/neurosynth-data) and convert it to a NiMARE Dataset, using fetch_neurosynth() for the first step and convert_neurosynth_to_dataset() for the second. Many of the methods in NiMARE can be very time-consuming or memory-intensive. Therefore, to ensure that the analyses in this article may be reproduced by as many people as possible, we will use a reduced version of the Neurosynth Dataset, containing only the first 500 studies, for those methods which may not run easily on the full database.
Here, we load a pre-generated version of the NeuroQuery Dataset.
In addition to a large corpus of coordinates, Neurosynth provides term frequencies derived from article abstracts that can be used as annotations.
One additional benefit to Neurosynth is that it has made available the coordinates for a large number of studies for which the study abstracts are also readily available. This has made the Neurosynth database a common resource upon which to build other automated ontologies. Data-driven ontologies which have been developed using the Neurosynth database include the GCLDA topic model30 and deep Boltzmann machines.31

NeuroQuery

A related resource is NeuroQuery.32 NeuroQuery is an online service for large-scale predictive meta-analysis. Unlike Neurosynth, which performs statistical inference and produces statistical maps, NeuroQuery is a supervised learning model and produces a prediction of the brain areas most likely to contain activations. These maps predict locations where studies investigating a given area (determined by the text prompt) are likely to produce activations, but they cannot be used in the same manner as statistical maps from a standard coordinate-based meta-analysis. In addition to this predictive meta-analytic tool, NeuroQuery also provides a new database of coordinates, text annotations, and metadata via an automated extraction approach that improves on Neurosynth's original methods.
While NiMARE does not currently include an interface to NeuroQuery's predictive meta-analytic method, there are functions for downloading the NeuroQuery database and converting it to NiMARE format, much like Neurosynth. The functions for downloading the NeuroQuery database and converting it to a Dataset are fetch_neuroquery() and convert_neurosynth_to_dataset(), respectively. We are able to use the same conversion function for NeuroQuery as for Neurosynth because both databases store their data in the same structure.
CBMA kernels CBMA kernels are available as KernelTransformers in the nimare.meta.kernel module. There are three standard kernels that are currently available: MKDAKernel, KDAKernel, and ALEKernel. Each class may be configured with certain parameters when a new object is initialized. For example, MKDAKernel accepts an r parameter, which determines the radius of the spheres that will be created around each peak coordinate. ALEKernel automatically uses the sample size associated with each experiment in the Dataset to determine the appropriate full-width-at-half-maximum of its Gaussian distribution, as described in Eickhoff et al. 37 ; however, users may provide a constant sample_size or fwhm parameter when sample size information is not available within the Dataset metadata.
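To build intuition for what a kernel transformer produces, here is a minimal pure-Python sketch of an MKDA-style binary sphere kernel on a small voxel grid. It is illustrative only: NiMARE's MKDAKernel operates on masked NIfTI images in a real template space, and the grid size, radius, and peak below are invented.

```python
import itertools


def mkda_style_map(peaks, shape, r=2):
    """Build a binary 'modeled activation' map: 1 within r voxels of any peak."""
    ma = [[[0] * shape[2] for _ in range(shape[1])] for _ in range(shape[0])]
    for (px, py, pz) in peaks:
        for x, y, z in itertools.product(*(range(s) for s in shape)):
            if (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 <= r ** 2:
                ma[x][y][z] = 1  # binary kernel: overlapping spheres do not add
    return ma


ma_map = mkda_style_map(peaks=[(5, 5, 5)], shape=(11, 11, 11), r=2)
n_active = sum(v for plane in ma_map for row in plane for v in row)
print(n_active)  # 33 lattice points lie within a radius-2 sphere
```

An ALE-style kernel would instead assign each voxel a continuous Gaussian value, with the kernel width derived from the study's sample size.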
Here we show how these three kernels can be applied to the same Dataset.

NeuroVault
NeuroVault33 is a public repository of user-uploaded, whole-brain, unthresholded brain maps. Users may associate their image collections with publications, and can annotate individual maps with labels from the Cognitive Atlas, which is the ontology of choice for NeuroVault. NiMARE includes a function, convert_neurovault_to_dataset(), with which users can search for images in NeuroVault, download those images, and convert them into a Dataset object.

COORDINATE-BASED META-ANALYSIS
Coordinate-based meta-analysis (CBMA) is currently the most popular method for neuroimaging meta-analysis, given that the majority of fMRI papers currently report their findings as peaks of statistically significant clusters in standard space and do not release unthresholded statistical maps. These peaks indicate where significant results were found in the brain, and thus do not reflect an effect size estimate for each hypothesis test (i.e., each voxel) as one would expect for a typical meta-analysis. As such, standard methods for effect size-based meta-analysis cannot be applied. Over the past two decades, a number of algorithms have been developed to determine whether peaks converge across experiments in order to identify locations of consistent or specific activation associated with a given hypothesis.34,35 Kernel-based methods evaluate convergence of coordinates across studies by first convolving foci with a spatial kernel to produce study-specific modeled activation maps, then combining those modeled activation maps into a sample-wise map, which is compared to a null distribution to evaluate voxel-wise statistical significance. Additionally, for each of the following approaches, except for specific coactivation likelihood estimation (SCALE), voxel- or cluster-level multiple comparisons correction may be performed using Monte Carlo simulations or FDR36 correction. Basic multiple-comparisons correction methods (e.g., Bonferroni correction) are also supported.

In NiMARE, the MKDA meta-analyses can be performed with the MKDADensity class. This class, like most other CBMA classes in NiMARE, accepts a null_method parameter, which determines how voxel-wise (uncorrected) statistical significance is calculated.
The null_method parameter allows two options: "approximate" or "montecarlo." The "approximate" option builds a histogram-based null distribution of summary-statistic values, which can then be used to determine the p-value associated with each observed summary-statistic value (i.e., each value in the meta-analytic map). The "montecarlo" option builds a null distribution of summary-statistic values by randomly shuffling the coordinates in the Dataset many times and computing the summary-statistic values for each permutation. In general, the "montecarlo" method is slightly more accurate when there are enough permutations, while the "approximate" method is much faster.
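At its core, converting an observed summary statistic to an uncorrected p-value against an empirical null reduces to a lookup in the null distribution. A minimal sketch (not NiMARE's implementation; the null values below are invented):

```python
from bisect import bisect_left


def empirical_p(observed, null_values):
    """P(null >= observed) from an empirical null distribution."""
    null_sorted = sorted(null_values)
    n = len(null_sorted)
    # number of null values greater than or equal to the observed statistic
    n_ge = n - bisect_left(null_sorted, observed)
    # add-one correction keeps p > 0, as is common for permutation-style nulls
    return (n_ge + 1) / (n + 1)


null = [0.1, 0.2, 0.2, 0.3, 0.5, 0.8, 0.9, 1.1, 1.4]  # toy null summary stats
print(empirical_p(1.2, null))  # only 1.4 is >= 1.2 -> (1 + 1) / (9 + 1) = 0.2
print(empirical_p(0.0, null))  # all 9 are >= 0.0 -> 10 / 10 = 1.0
```

The "approximate" method effectively precomputes such a null from a histogram of possible summary-statistic values, while the "montecarlo" method fills it with statistics from shuffled coordinates.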
Fitting the CBMA Estimator to a Dataset will produce p-value, z-statistic, and summary-statistic maps, but these are not corrected for multiple comparisons.
When performing a meta-analysis with the goal of statistical inference, you will want to perform multiple comparisons correction with NiMARE's Corrector classes. Please see the multiple comparisons correction chapter for more information.

Multilevel Kernel density analysis
Multilevel kernel density analysis (MKDA)38 is a kernel-based method that convolves each peak from each study with a binary sphere of a set radius. These peak-specific binary maps are then combined into study-specific maps by taking the maximum value for each voxel. Study-specific maps are then averaged across the meta-analytic sample. This averaging is generally weighted by studies' sample sizes, although other covariates may be included, such as weights based on the type of inference (random or fixed effects) employed in the study's analysis. An arbitrary threshold is generally employed to zero-out voxels with very low values, and then a Monte Carlo procedure is used to assess statistical significance, either at the voxel or cluster level.

from nimare.meta.cbma import mkda

mkdad_meta = mkda.MKDADensity(null_method="approximate")
mkdad_results = mkdad_meta.fit(sleuth_dset1)
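The within-study maximum and sample-size-weighted averaging steps described above can be sketched on a toy one-dimensional "brain" (illustrative only; not NiMARE's MKDADensity, and all values are invented):

```python
def mkda_density(study_peak_maps, sample_sizes):
    """Combine binary per-peak maps: max within studies, then a weighted mean."""
    # Within each study: voxel-wise max over that study's peak-specific binary maps
    study_maps = [
        [max(vals) for vals in zip(*peak_maps)] for peak_maps in study_peak_maps
    ]
    # Across studies: average weighted by sample size
    total_n = sum(sample_sizes)
    n_vox = len(study_maps[0])
    return [
        sum(smap[v] * n for smap, n in zip(study_maps, sample_sizes)) / total_n
        for v in range(n_vox)
    ]


# Two studies on a four-voxel "brain"; each study has two peak-specific binary maps
study1 = [[1, 1, 0, 0], [0, 1, 1, 0]]  # overlapping spheres -> max, not sum
study2 = [[0, 1, 0, 0], [0, 0, 0, 1]]
density = mkda_density([study1, study2], sample_sizes=[30, 10])
print(density)  # [0.75, 1.0, 0.75, 0.25]
```

The resulting density map is what the Monte Carlo procedure then compares against a null distribution.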

The MetaResult class
Fitting an Estimator to a Dataset produces a MetaResult object. The MetaResult class is a light container holding the different statistical maps produced by the Estimator. This result is also retained as an attribute in the Estimator.

We will also save the Estimator itself, which we will reuse when we get to multiple comparisons correction.
Since this is a Kernel-based algorithm, the Kernel transformer is an optional input to the meta-analytic estimator, and can be controlled in a more fine-grained manner.
We can save the statistical maps to an output directory as gzipped nifti files, with a prefix. Here, we will save all of the statistical maps with the MKDADensity prefix.
The maps attribute is a dictionary containing statistical map names and associated numpy arrays.

pprint(mkdad_results.maps)

{'p': array([1., 1., 1., ..., 1., 1., 1.]),
 'stat': array([0., 0., 0., ..., 0., 0., 0.]),
 'z': array([0., 0., 0., ..., 0., 0., 0.])}

These arrays can be transformed into image-like objects using the masker attribute. We can also use the get_map method to get that image object.

mkdad_img = mkdad_results.get_map("z", return_type="image")
print(mkdad_img)

<class 'nibabel.nifti1.Nifti1Image'>

MKDA Chi-squared

An alternative to the density-based approaches (i.e., MKDA, KDA, ALE, and SCALE) is the MKDA Chi-squared extension.38 Although still a kernel-based method in which foci are convolved with a binary sphere and combined within studies, this approach uses voxel-wise chi-squared tests to assess both consistency (i.e., higher convergence of foci within the meta-analytic sample than expected by chance) and specificity (i.e., higher convergence of foci within the meta-analytic sample than detected in an unrelated dataset) of activation. Such an analysis also requires access to a reference meta-analytic sample or database of studies. For example, to perform a Chi-squared analysis of working memory studies, the researcher will also need a comprehensive set of studies which did not manipulate working memory, ideally one that is matched with the working memory study set on all relevant attributes except the involvement of working memory.
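At a single voxel, the specificity test amounts to a 2x2 contingency test: sample versus reference studies, crossed with activation present versus absent near the voxel. A minimal Pearson chi-squared sketch with invented counts (NiMARE's MKDAChi2 differs in detail):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected_a = (a + b) * (a + c) / n
    expected_b = (a + b) * (b + d) / n
    expected_c = (c + d) * (a + c) / n
    expected_d = (c + d) * (b + d) / n
    return sum(
        (obs - exp) ** 2 / exp
        for obs, exp in [
            (a, expected_a), (b, expected_b), (c, expected_c), (d, expected_d)
        ]
    )


# At one voxel: 30 of 50 sample studies activate vs. 100 of 500 reference studies
stat = chi2_2x2(a=30, b=20, c=100, d=400)
print(round(stat, 2))  # 40.29
```

A large statistic here indicates that activation near this voxel is much more common in the sample than in the reference database, i.e., specificity.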
Activation likelihood estimation

ALE41-43 assesses convergence of peaks across studies by first generating a modeled activation map for each study, in which each of the experiment's peaks is convolved with a 3D Gaussian distribution whose width is determined by the experiment's sample size, and then by combining these modeled activation maps across studies into an ALE map, which is compared with an empirical null distribution to assess voxel-wise statistical significance.
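The two ALE ingredients, a Gaussian kernel whose width is set by a full-width-at-half-maximum (FWHM) and a probabilistic union across studies, can be sketched as follows. The distances, the FWHM, and the per-study values are invented, and this is not NiMARE's ALE implementation:

```python
import math


def gaussian_ma_value(dist_mm, fwhm_mm):
    """Value of a Gaussian kernel (peak value 1) at a given distance from a focus."""
    sigma = fwhm_mm / (2 * math.sqrt(2 * math.log(2)))  # FWHM -> standard deviation
    return math.exp(-(dist_mm ** 2) / (2 * sigma ** 2))


def ale_union(ma_values):
    """Combine per-study modeled activation values as a probabilistic union."""
    product = 1.0
    for ma in ma_values:
        product *= 1.0 - ma
    return 1.0 - product  # 1 - prod(1 - MA_i)


# Three studies with peaks 3, 5, and 20 mm from this voxel, all with FWHM = 10 mm
mas = [gaussian_ma_value(d, fwhm_mm=10.0) for d in (3.0, 5.0, 20.0)]
print(round(ale_union(mas), 3))  # 0.89
```

By construction, a focus exactly FWHM/2 away contributes a modeled activation value of 0.5, and a distant focus contributes almost nothing to the union.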
Specific coactivation likelihood estimation

SCALE44 is an extension of the ALE algorithm developed for meta-analytic coactivation modeling (MACM) analyses. Rather than comparing convergence of foci within the sample to a null distribution derived under the assumption of spatial randomness within the brain, SCALE assesses whether the convergence at each voxel is greater than in the general literature. Each voxel in the brain is assigned a null distribution determined based on the base rate of activation for that voxel across an existing coordinate-based meta-analytic database. This approach allows for the generation of a statistical map for the sample, but no methods for multiple comparisons correction have yet been developed. While this method was developed to support analysis of joint activation or "coactivation" patterns, it is generic and can be applied to any CBMA; see Derivative Analyses.

# Retain the specificity analysis's z-statistic map for later use
mkdac_img = mkdac_results.get_map("z_desc-specificity", return_type="image")

Comparing algorithms
Here, we load the z-statistic map from each of the CBMA estimators we have used throughout this chapter and plot them all side by side.

A number of other coordinate-based meta-analysis algorithms exist, which are not yet implemented in NiMARE. We describe these algorithms briefly in Future Directions.

IMAGE-BASED META-ANALYSIS
Image-based meta-analysis (IBMA) methods perform a meta-analysis directly on brain images (either whole-brain or partial) rather than on extracted peaks. On paper, IBMA is superior to CBMA in virtually all respects, as the availability of analysis-level parameter and variance estimates at all analyzed voxels allows researchers to use the full complement of standard meta-analysis techniques, instead of having to resort to Kernel-based or other methods that require additional spatial assumptions. In principle, given a set of maps that contains no missing values (i.e., where there are k valid pairs of parameter and variance estimates at each voxel), one can simply conduct a voxel-wise version of any standard meta-analysis or meta-regression method commonly used in other biomedical or social science fields.
In practice, the utility of IBMA methods has historically been quite limited, as unthresholded statistical maps have been unavailable for the vast majority of neuroimaging studies. However, the introduction and rapid adoption of NeuroVault, 33 a database for unthresholded statistical images, has made image-based meta-analysis increasingly viable. Although coverage of the literature remains limited, and IBMAs of maps drawn from the NeuroVault database are likely to omit at least some (and in some cases most) relevant studies due to limited metadata, we believe the time is ripe for researchers to start including both CBMAs and IBMAs in published meta-analyses, with the aspirational goal of eventually transitioning exclusively to the latter. To this end, NiMARE supports a range of different IBMA methods, including a number of estimators of the gold standard mixed-effects meta-regression model, as well as several alternative estimators suitable for use when some of the traditional inputs are unavailable.
NiMARE's IBMA Estimators are light wrappers around classes from PyMARE, a library for standard (i.e., nonneuroimaging) meta-analyses developed by the same team as NiMARE.
In the optimal situation, meta-analysts have access to both contrast (i.e., parameter estimate) maps and their associated standard error maps for a number of studies. With these data, researchers can fit the traditional random-effects meta-regression model using one of several methods that vary in the way they estimate the between-study variance (τ²).

Transforming images
Researchers may share their statistical maps in many forms, some of which are direct transformations of one another. For example, researchers may share test statistic maps with z-statistics or t-statistics, and, as long as we know the degrees of freedom associated with the t-test, we can convert between the two easily. To that end, NiMARE includes a class, ImageTransformer, which will calculate target image types from available ones, as long as the available images are compatible with said transformation.
Here, we use ImageTransformer to calculate z-statistic and variance maps for all studies with compatible images. This allows us to apply more image-based meta-analysis algorithms to the Dataset. Now that we have filled in as many gaps in the Dataset as possible, we can start running meta-analyses. We will start with a DerSimonian-Laird meta-analysis (DerSimonianLaird). meta-regression model using one of several methods that vary in the way they estimate the between-study variance (τ 2 ). Currently, supported estimators include the DerSimonian-Laird method, 45 the Hedges method, 46 and maximum-likelihood (ML) and restricted maximum-likelihood (REML) approaches. NiMARE can also perform fixed-effects meta-regression via weighted least-squares, although there are few IBMA scenarios where a fixed-effects analysis would be indicated. It is worth noting that the non-likelihood-based estimators (i.e., DerSimonian-Laird and Hedges) have a closed-form solution and are implemented in an extremely efficient way in NiMARE (i.e., computation is performed on all voxels in parallel). However, these estimators also produce more biased estimates under typical conditions (e.g., when sample sizes are very small), implying a tradeoff from the user's perspective.
Alternatively, when users only have access to contrast maps and associated sample sizes, they can use the supported sample size-based likelihood estimator, which assumes that within-study variance is constant across studies, and uses maximum-likelihood or restricted maximum-likelihood to estimate between-study variance, as described in Sangnawakij et al.47 When users have access only to contrast maps, they can use the permuted OLS estimator, which uses ordinary least squares and employs a max-type permutation scheme for family-wise error correction48,49 that has been validated on neuroimaging data50 and relies on the nilearn library.
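The max-type permutation idea behind this FWE correction can be sketched for a one-sample test using sign-flipping. This is a conceptual illustration with invented data, not nilearn's permuted OLS (which uses t-statistics and random subsets of permutations):

```python
import itertools


def max_stat_fwe(data):
    """FWE-corrected p-values via the max-statistic permutation method.

    `data` is a studies x voxels table; a one-sample test statistic (here
    simply the mean) is computed per voxel, and sign-flipping permutations
    build a null distribution of the *maximum* statistic across voxels.
    """
    n_studies = len(data)
    n_voxels = len(data[0])
    observed = [sum(row[v] for row in data) / n_studies for v in range(n_voxels)]

    max_null = []
    for signs in itertools.product((1, -1), repeat=n_studies):  # all sign flips
        stats = [
            sum(s * row[v] for s, row in zip(signs, data)) / n_studies
            for v in range(n_voxels)
        ]
        max_null.append(max(stats))  # retain only the max across voxels

    # Corrected p: share of permutations whose max meets/exceeds the observed stat
    return [sum(m >= obs for m in max_null) / len(max_null) for obs in observed]


data = [  # four studies, three voxels (values chosen to be exact in binary)
    [2.0, 0.5, -0.5],
    [1.5, 0.25, 0.5],
    [2.0, -0.25, 0.25],
    [2.5, 0.5, -0.25],
]
p_fwe = max_stat_fwe(data)
print(p_fwe)  # [0.0625, 0.5625, 1.0]
```

Because each voxel's observed statistic is compared against the null of the maximum over all voxels, the resulting p-values control the family-wise error rate without assuming independence between voxels.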

Statistical maps saved by NiMARE MetaResults automatically follow a naming convention based loosely on the Brain Imaging Data Structure (BIDS). Let's take a look at the files created by the FWECorrector. If you ignore the prefix, which was specified in the call to MetaResult.save_maps, the maps all have a common naming convention.

MULTIPLE COMPARISONS CORRECTION
In NiMARE, multiple comparisons correction is separated from each CBMA and IBMA Estimator so that any number of relevant correction methods can be applied after the Estimator has been fit to the Dataset. Some correction options, such as the montecarlo option for FWE correction, are designed to work specifically with a given Estimator (and are indeed implemented within the Estimator class, and only called by the Corrector).
Correctors are divided into two subclasses: FWECorrectors, which correct based on family-wise error rate, and FDRCorrectors, which correct based on FDR.
All Correctors are initialized with a number of parameters, including the correction method that will be used. After that, you can use the transform method on a MetaResult object produced by a CBMA or IBMA Estimator to apply the correction method. This will return an updated MetaResult object, with both the statistical maps from the original MetaResult, as well as new, corrected maps.
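To illustrate what an FDR correction computes, here is a minimal Benjamini-Hochberg sketch that converts uncorrected p-values into adjusted p-values. It assumes independent tests, uses invented p-values, and is not NiMARE's implementation:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values for independent tests."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of adjustments
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted


q = benjamini_hochberg([0.001, 0.02, 0.03, 0.5])
print([round(v, 6) for v in q])  # [0.004, 0.04, 0.04, 0.5]
```

Thresholding the adjusted values at, say, 0.05 then controls the expected proportion of false discoveries among the voxels declared significant, a weaker but more powerful criterion than FWE control.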
Here we will apply both FWE and FDR correction to the results from the MKDADensity meta-analysis performed back in Multilevel kernel density analysis.
In the following example, we use 5,000 iterations for Monte Carlo FWE correction. Normally, one would use at least 10,000 iterations, but we reduced this for the sake of speed.
are simply named according to the values contained in the map (e.g., z, stat, p).
Maps generated by the correction method, however, use a series of key-value pairs to indicate how they were generated. The corr key indicates whether FWE or FDR correction was applied. The method key reflects the correction method employed, which was defined by the method parameter used to create the Corrector. The level key simply indicates if the map was corrected at the voxel or cluster level. Finally, the desc key reflects any necessary description that goes beyond what is already covered by the other entities.

DERIVATIVE ANALYSES
Meta-analytic databases and algorithms may be employed for derivative analyses, including subtraction analysis, meta-analytic coactivation modeling (MACM), meta-analytic clustering, coactivation-based parcellation (CBP), meta-analytic independent component analysis (meta-ICA), semantic model development, and meta-analytic functional decoding. In this part, we describe the derivative analyses implemented in NiMARE and include examples of use cases.

META-ANALYTIC SUBTRACTION ANALYSIS
Subtraction analysis refers to the voxel-wise comparison of two meta-analytic samples. In image-based meta-analysis, comparisons between groups of maps can generally be accomplished within the standard meta-regression framework (i.e., by adding a covariate that codes for group membership). However, coordinate-based subtraction analysis requires special extensions for CBMA algorithms.
Subtraction analysis to compare the results of two ALE meta-analyses was originally implemented by 17 and Fig. 7. An array of plots of the corrected statistical maps produced by the different multiple comparisons correction methods. from nimare import meta kern = meta.kernel.ALEKernel() sub_meta = meta.cbma.ale.ALESubtraction(kernel_ transformer=kern, n_iters=1000) sub_results = sub_meta.fit(sleuth_dset1, sleuth_dset2) Alternatively, MKDA Chi-squared analysis is inherently a subtraction analysis method, in that it compares foci from two groups of studies. Generally, one of these groups is a sample of interest, while the other is a meta-analytic database (minus the studies in the sample). With this setup, meta-analysts can infer whether there is greater convergence of foci in a voxel as compared to the baseline across the field (as estimated with the meta-analytic database), much like SCALE. However, if the database is replaced with a second sample of interest, the analysis ends up comparing convergence between the two groups. CC By 4.0: © Taylor Salo et al.
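The group-assignment randomization at the heart of the subtraction procedure can be sketched for a single voxel. This is an illustrative toy, not NiMARE's implementation: ALESubtraction computes ALE-difference scores across all voxels simultaneously, and the function name and inputs here are hypothetical.

```python
import random


def subtraction_null(scores_a, scores_b, n_iters=1000, seed=0):
    """Build a null distribution of group-difference scores at one voxel by
    randomly reassigning experiments between groups A and B, then compare the
    observed difference against that null to get a two-tailed p-value."""
    rng = random.Random(seed)
    observed = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
    pooled = scores_a + scores_b
    null = []
    for _ in range(n_iters):
        shuffled = pooled[:]
        rng.shuffle(shuffled)
        perm_a = shuffled[:len(scores_a)]
        perm_b = shuffled[len(scores_a):]
        null.append(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
    # Two-tailed p: proportion of null differences at least as extreme as observed
    p = sum(abs(d) >= abs(observed) for d in null) / n_iters
    return observed, p
```

With well-separated toy scores for the two groups, the observed difference falls in the extreme tail of the permutation null, which is exactly the logic used voxel-wise in the real analysis.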

J U P Y T E R B O O K
META-ANALYTIC COACTIVATION MODELING
Meta-analytic coactivation modeling (MACM), 55-57 also known as meta-analytic connectivity modeling, uses meta-analytic data to measure co-occurrence of activations between brain regions, providing evidence of functional connectivity across tasks. In coordinate-based MACM, whole-brain studies within the database are selected based on whether or not they report at least one peak in a region of interest specified for the analysis. These studies are then subjected to a meta-analysis, often comparing the selected studies to those remaining in the database. In this way, the significance of each voxel in the analysis corresponds to whether there is greater convergence of foci at that voxel among studies that report foci in the region of interest than among those that do not.

MACM results have historically been accorded a similar interpretation to task-related functional connectivity (e.g., 58,59), although this approach is quite removed from functional connectivity analyses of task fMRI data (e.g., beta-series correlations, psychophysiological interactions, or even seed-to-voxel functional connectivity analyses on task data). Nevertheless, MACM analyses do show high correspondence with resting-state functional connectivity. 60 MACM has been used to characterize the task-based functional coactivation of the cerebellum, 61 lateral prefrontal cortex, 62 fusiform gyrus, 63 and several other brain regions.

Within NiMARE, MACMs can be performed by selecting studies in a Dataset based on the presence of activation within a target mask or coordinate-centered sphere. While some algorithms, such as SCALE, may have been designed with MACMs in mind, in practice MACMs may be performed with any valid Estimator.

In this section, we will perform two MACMs: one with a target mask and one with a coordinate-centered sphere. For the former, we use get_studies_by_mask(). For the latter, we use get_studies_by_coordinate(). Once the Dataset has been reduced to studies with coordinates within the requested mask or sphere, any of the supported CBMA Estimators can be run.

from nimare import meta

meta_amyg = meta.cbma.ale.ALE(kernel__sample_size=20)
results_amyg = meta_amyg.fit(dset_amygdala)

meta_sphere = meta.cbma.ale.ALE(kernel__sample_size=20)
results_sphere = meta_sphere.fit(dset_sphere)

The amygdala dataset includes more than 1300 studies. Running a meta-analysis on such a large dataset may require more than 4 GB of RAM, which is NeuroLibre's limit. Therefore, we will further reduce the dataset to its first 500 studies, in order to run the meta-analysis successfully on NeuroLibre's server. For publication-quality analyses, we would recommend using the entire dataset.
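The sphere-based selection step can be illustrated with a small stand-in for get_studies_by_coordinate(). This toy operates on a plain dictionary of study coordinates rather than a NiMARE Dataset; the function name and data are hypothetical.

```python
import math


def studies_with_focus_near(study_coords, center, radius):
    """Return IDs of studies reporting at least one focus within `radius` mm
    of `center`. An illustrative stand-in for selecting studies by coordinate;
    NiMARE's Dataset methods operate on the Dataset's coordinates table."""
    selected = []
    for study_id, foci in study_coords.items():
        if any(math.dist(focus, center) <= radius for focus in foci):
            selected.append(study_id)
    return selected
```

In NiMARE itself, the returned study IDs would then be passed to Dataset.slice() to produce the reduced Dataset used by the CBMA Estimator.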

AUTOMATED ANNOTATION
As mentioned in the BrainMap section, manually annotating studies in a meta-analytic database can be a time-consuming and labor-intensive process. To facilitate more efficient (albeit lower-quality) annotation, NiMARE supports a number of automated annotation approaches. These include N-gram term extraction, Cognitive Atlas term extraction and hierarchical expansion, LDA, and GCLDA.

NiMARE users may download abstracts from PubMed as long as study identifiers in the Dataset correspond to PubMed IDs (as in Neurosynth and NeuroQuery). Abstracts are much more easily accessible than full article text, so most annotation methods in NiMARE rely on them.

Below, we use the function download_abstracts() to download abstracts for the Neurosynth Dataset. This will attempt to extract metadata about each study in the Dataset from PubMed, and then add the abstract available on PubMed to the Dataset's texts attribute, under a new column named "abstract".

download_abstracts() only works when there is internet access. Since this book will often be built on nodes without internet access, we will share the code used to download abstracts but will actually load and use a pre-generated version of the Dataset.

N-gram term extraction
NiMARE provides the function generate_counts() to extract n-grams from text. This function produces either term counts or term frequency-inverse document frequency (tf-idf) values for each of the studies in a Dataset.
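The tf-idf weighting behind this kind of term extraction is easy to sketch with the standard library. This is a simplified stand-in, not NiMARE's implementation of generate_counts(); the whitespace tokenization and the exact idf formula here are assumptions for illustration.

```python
import math
from collections import Counter


def tfidf(docs):
    """Term frequency-inverse document frequency for a tiny corpus.
    `docs` maps study ID -> raw text; returns study ID -> {term: weight}."""
    tokenized = {sid: text.lower().split() for sid, text in docs.items()}
    n_docs = len(docs)
    # Document frequency: number of documents containing each term
    df = Counter()
    for tokens in tokenized.values():
        df.update(set(tokens))
    weights = {}
    for sid, tokens in tokenized.items():
        counts = Counter(tokens)
        weights[sid] = {
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in counts.items()
        }
    return weights
```

Note that a term appearing in every document gets a weight of zero, which is the point of tf-idf: it downweights terms that carry no discriminating information across studies.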

Cognitive Atlas term extraction and hierarchical expansion
Cognitive Atlas term extraction leverages the structured nature of the Cognitive Atlas in order to extract counts for individual terms and their synonyms in the ontology, as well as to apply hierarchical expansion to these counts based on the relationships specified between terms. This method produces both basic term counts and expanded term counts based on the weights applied to different relationship types present in the ontology.
First, we must use download_cognitive_atlas() to download the current version of the Cognitive Atlas ontology. This includes both information about individual terms in the ontology and asserted relationships between those terms.
NiMARE will automatically attempt to extrapolate likely alternate forms of each term in the ontology, in order to make extraction easier. For an example, see Fig. 11.
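The idea behind hierarchical expansion can be sketched with a toy example, in which counts for a term propagate to related terms according to relationship-type weights. This is a simplified, hypothetical stand-in for the expansion step; NiMARE's actual implementation and weighting scheme differ.

```python
def expand_counts(counts, relationships, weights):
    """Hierarchically expand term counts: each term's count is incremented by
    its related terms' counts, scaled by a weight for the relationship type.
    `relationships` is a list of (child, relationship_type, parent) triples."""
    expanded = dict(counts)
    for child, rel_type, parent in relationships:
        increment = weights.get(rel_type, 0) * counts.get(child, 0)
        expanded[parent] = expanded.get(parent, 0) + increment
    return expanded
```

For example, with an "isKindOf" relationship weighted at 1.0, every mention of "working memory" also counts toward "memory", reflecting the asserted relationship in the ontology.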

Latent Dirichlet allocation
LDA 64 was originally combined with meta-analytic neuroimaging data in. 23 LDA is a generative topic model which, for a text corpus, builds probability distributions across documents and words. In LDA, each document is considered a mixture of topics. This works under the assumption that each document was constructed by first randomly selecting a topic based on the document's probability distribution across topics, and then randomly selecting a word from that topic based on the topic's probability distribution across words. While this is not a useful generative model for producing documents, LDA is able to discern cohesive topics of related words. Poldrack et al. 23 were able to apply LDA to full texts from neuroimaging articles in order to develop cognitive neuroscience-related topics and to run topic-wise meta-analyses. This method produces two sets of probability distributions: (1) the probability of a word given topic and (2) the probability of a topic given article.

Here, we train an LDA model (LDAModel) on the first 500 studies of the Neurosynth Dataset, with 50 topics in the model.

The most important product of training the LDAModel object is its distributions_ attribute. LDAModel.distributions_ is a dictionary containing arrays and DataFrames created from training the model. We are particularly interested in the p_topic_g_word_df distribution, a pandas DataFrame in which each row corresponds to a topic and each column corresponds to a term (n-gram) extracted from the Dataset's texts. The cells contain weights indicating the probability distribution across terms for each topic.

Additionally, the LDAModel updates the Dataset's annotations attribute by adding columns corresponding to each of the topics in the model. Each study in the Dataset thus receives a weight for each topic, which can be used to select studies for topic-based meta-analyses or functional decoding.

Let's take a look at the results of the model training. First, we will reorganize the DataFrame a bit to show the top 10 terms for each of the first 10 topics.

Generalized correspondence latent Dirichlet allocation
GCLDA is a recently developed algorithm that trains topics on both article abstracts and coordinates. 30 GCLDA assumes that topics within the fMRI literature can also be localized to brain regions, in this case modeled as three-dimensional Gaussian distributions. These spatial distributions can also be restricted to pairs of Gaussians that are symmetric across brain hemispheres. This method produces two sets of probability distributions: the probability of a word given topic (GCLDAModel.p_word_g_topic_) and the probability of a voxel given topic (GCLDAModel.p_voxel_g_topic_).

Here we train a GCLDA model (GCLDAModel) on the first 500 studies of the Neurosynth Dataset. The model will include 50 topics, in which the spatial distribution for each topic will be defined as having two Gaussian distributions that are symmetrically localized across the longitudinal fissure.

GCLDAModel generally takes a very long time to train. Below, we show how one would train a GCLDA model. However, we will load a pretrained model instead of actually training the model.

The GCLDAModel retains the relevant probability distributions in the form of numpy arrays, rather than pandas DataFrames. However, for the topic-term weights (p_word_g_topic_), the data are more interpretable as a DataFrame, so we will create one. We will also reorganize the raw DataFrame to show the top 10 terms for each of the first 10 topics.

We also want to see how the topic-voxel weights render on the brain, so we will simply unmask the p_voxel_g_topic_ array with the Dataset's masker.

META-ANALYTIC FUNCTIONAL DECODING
Functional decoding performed with meta-analytic data refers to methods which attempt to predict mental states from neuroimaging data using a large-scale meta-analytic database. 65 Such analyses may also be referred to as "informal reverse inference", 66 "functional characterization analysis", 67-69 "open-ended decoding", 30 or simply "functional decoding". 70-72 While the terminology is far from standardized, we will refer to this method as meta-analytic functional decoding, in order to distinguish it from alternative methods like multivariate decoding and model-based decoding. 66 Meta-analytic functional decoding is often used in conjunction with MACM, meta-analytic clustering, meta-analytic parcellation, and meta-ICA, in order to characterize resulting brain regions, clusters, or components. Meta-analytic functional decoding models have also been extended for the purpose of meta-analytic functional encoding, wherein text is used to generate statistical images. 30,73,74

Four common approaches are correlation-based decoding, dot-product decoding, weight-sum decoding, and Chi-square decoding. We will first discuss continuous decoding methods (i.e., correlation and dot-product), followed by discrete decoding methods (weight-sum and Chi-square).

Decoding continuous inputs
When decoding unthresholded statistical maps (such as Fig. 16), the most common approaches are to simply correlate the input map with maps from the database, or to compute the dot product between the two maps. In Neurosynth, meta-analyses are performed for each label (i.e., term or topic) in the database and then the input image is correlated with the resulting unthresholded statistical map from each meta-analysis. Performing statistical inference on the resulting correlations is not straightforward, however, as voxels display strong spatial correlations, and the true degrees of freedom are consequently unknown (and likely far smaller than the nominal number of voxels). In order to interpret the results of this decoding approach, users typically select some arbitrary number of top correlation coefficients ahead of time and use the associated labels to describe the input map. However, such results should be interpreted with great caution.

Fig. 16. The unthresholded statistical map that will be used for continuous decoding.

This approach can also be applied to an image-based database like NeuroVault, either by correlating input data with meta-analyzed statistical maps, or by deriving distributions of correlation coefficients by grouping statistical maps in the database according to label. Using these distributions, it is possible to statistically compare labels in order to assess label significance. NiMARE includes methods for both correlation-based decoding and correlation distribution-based decoding, although the former is better established and should be preferred. As such, we will only show the CorrelationDecoder here.

CorrelationDecoder currently runs very slowly. We strongly recommend running it on a subset of labels within the Dataset. It is also quite memory-intensive.

In this example, we have only run the decoder using features appearing in >10% and <90% of the first 500 studies in the Dataset. Additionally, we have pregenerated the results and will simply show the code that would generate those results, as the decoder requires too much memory for NeuroLibre's servers.

from nimare import decode, meta

corr_decoder = decode.continuous.CorrelationDecoder(
    frequency_threshold=0.001,
    meta_estimator=meta.MKDADensity(kernel_transformer=kern, memory_limit=None),
    target_image="z",
    features=target_features,
    memory_limit="500mb",
)
corr_decoder.fit(neurosynth_dset_first500)
corr_df = corr_decoder.transform(continuous_map)
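At its core, correlation-based decoding reduces to computing a Pearson correlation between the flattened input map and each label's meta-analytic map, then ranking labels by that correlation. The following toy stand-in (hypothetical function, list-valued "maps") illustrates the computation; it is not NiMARE's CorrelationDecoder.

```python
import math


def decode_by_correlation(input_map, label_maps, top_n=3):
    """Rank labels by the Pearson correlation between `input_map` and each
    label's map (both given as flat lists of voxel values). Assumes maps are
    non-constant, so the correlation is defined."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    scores = {label: pearson(input_map, lmap) for label, lmap in label_maps.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

As noted above, the ranking is descriptive only: spatial autocorrelation makes formal inference on these coefficients unreliable.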

Instead, we load the pregenerated correlation decoder results:

import os
import pandas as pd

corr_df = pd.read_table(
    os.path.join(data_path, "correlation_decoder_results.tsv"),
    index_col="feature",
)

Decoding discrete inputs
Decoding regions of interest (ROIs) requires a different approach than decoding unthresholded statistical maps. One simple approach, used by GCLDA and implemented in the function gclda_decode_roi(), sums the P(topic|voxel) distribution across all voxels in the ROI in order to produce a value associated with each topic for the ROI. These weight-sum values are arbitrarily scaled and cannot be compared across ROIs. We will not show this method here, given its simplicity and the fact that it can currently only be applied to a GCLDA model.

Before we dig into the other decoding methods that are available, let's take a look at the ROI we want to decode.

One method which relies on correlations, much like the continuous correlation decoder, is the ROI association decoding method (ROIAssociationDecoder), originally implemented in the Neurosynth Python library. In this method, each study with coordinates in the dataset is convolved with a kernel transformer to produce a modeled activation map. The resulting modeled activation maps are then masked with a region of interest (i.e., the target of the decoding), and the values are averaged within the ROI. These averaged modeled activation values are then correlated with the term weights for all labels in the dataset. This decoding method produces a single correlation coefficient for each of the dataset's labels.

Because the ROIAssociationDecoder generates modeled activation maps for all of the experiments in the Dataset, we will only fit this decoder to the first 500 studies.

A more theoretically driven approach to ROI decoding is to use Chi-square-based methods. The two methods that use Chi-squared tests are the BrainMap decoding method and an adaptation of Neurosynth's meta-analysis method.

In both Chi-square-based methods, studies are first selected from a coordinate-based database according to some criterion. For example, if decoding a region of interest, users might select studies reporting at least one coordinate within 5 mm of the ROI. Metadata (such as ontological labels) for this subset of studies are then compared with those of the remaining, unselected portion of the database in a confusion matrix. For each label in the ontology, studies are divided into four groups: selected and label-positive (S+L+), selected and label-negative (S+L−), unselected and label-positive (S−L+), and unselected and label-negative (S−L−). Each method then compares these groups in order to evaluate both consistency and specificity of the relationship between the selection criteria and each label, in terms of both statistical significance and effect size.

BrainMap method
The BrainMap discrete decoding method, implemented in BrainMapDecoder, compares the distributions of studies with each label within the sample against those in a larger database while accounting for the number of foci from each study. Broadly speaking, this method assumes that the selection criterion is associated with one peak per study, which means that it is likely only appropriate for selection criteria based around foci, such as ROIs. One common analysis, meta-analytic clustering, involves dividing studies within a database into meta-analytic groupings based on the spatial similarity of their modeled activation maps (i.e., study-wise pseudostatistical maps produced by convolving coordinates with a kernel). The resulting sets of studies are often functionally decoded in order to build a functional profile associated with each meta-analytic grouping. While these groupings are defined as subsets of the database, they are not selected based on the location of an individual peak, and so weighting based on the number of foci would be inappropriate there.

from nimare import decode

brainmap_decoder = decode.discrete.BrainMapDecoder(
    frequency_threshold=0.001,
    u=0.05,
    correction="fdr_bh",
)
brainmap_decoder.fit(neurosynth_dset)
brainmap_df = brainmap_decoder.transform(amygdala_ids)

Fig. 20. The top 10 terms, sorted by reverse-inference posterior probability, from the BrainMap Chi-squared decoding method.

This decoding method produces four outputs for each label. First, the distribution of studies in the sample with the label is compared with the distributions of other labels within the sample. This consistency analysis produces both a measure of statistical significance (i.e., a P value) and a measure of effect size (i.e., the likelihood of being selected given the presence of the label). Next, the studies in the sample are compared with the studies in the rest of the database. This specificity analysis produces a P value and an effect size measure of the posterior probability of having the label given selection into the sample. A detailed algorithm description is presented in Appendix I: BrainMap Discrete Decoding.

Neurosynth method
The implementation of the MKDA Chi-squared meta-analysis method used by Neurosynth is quite similar to BrainMap's method for decoding, if applied to annotations instead of modeled activation values. This method, implemented in NeurosynthDecoder, compares the distributions of studies with each label within the sample against those in a larger database, but, unlike the BrainMap method, does not take foci into account. For this reason, the Neurosynth method would likely be more appropriate for selection criteria not based on ROIs (e.g., for characterizing meta-analytic groupings from a meta-analytic clustering analysis). However, the Neurosynth method requires user-provided information that BrainMap does not. Namely, in order to estimate probabilities for the consistency and specificity analyses with Bayes' Theorem, the Neurosynth method requires a prior probability of a given label. Typically, a value of 0.5 is used (i.e., the estimated probability that an individual is undergoing a given mental process described by a label, barring any evidence from neuroimaging data, is predicted to be 50%). This is, admittedly, a poor prediction, which means that probabilities estimated based on this prior are not likely to be accurate, though they may still serve as useful estimates of effect size for the analysis.

Like the BrainMap method, this method produces four outputs for each label. For the consistency analysis, it produces both a P value and the conditional probability of selection given the presence of the label and the prior probability of having the label. For the specificity analysis, it produces both a P value and the posterior probability of the presence of the label given selection and the prior probability of having the label. A detailed algorithm description is presented in Appendix II: Neurosynth Discrete Decoding.

In both methods, the database acts as an estimate of the underlying distribution of labels in the real world, such that the probability of having a peak in an ROI given the presence of the label might be interpreted as the probability of a brain activating a specific region given that the individual is experiencing a given mental state. This is a very poor interpretation, given that any database of neuroimaging results will be skewed more toward the interests of the field than toward the distribution of mental states or processes experienced by humans, which is why decoding results must be interpreted with extreme caution. It is important not to place too much emphasis on the results of functional decoding analyses, although they are useful in that they can provide a quantitative estimate behind the kinds of interpretations generally included in discussion sections, which are normally backed only by informal literature searches or prior knowledge.

The meta-analytic functional decoding methods in NiMARE provide a rudimentary approach to open-ended decoding (i.e., decoding across a very large range of mental states) that can be used with resources like NeuroVault. However, standard classification methods have also been applied to datasets from NeuroVault (e.g., 75), although these methods do not fall under NiMARE's scope.

FUTURE DIRECTIONS
NiMARE's mission statement encompasses a range of tools that have not yet been implemented in the package. In the future, we plan to incorporate a number of additional methods. Here, we briefly describe several of these tools.

Additional automated annotation methods
Several papers have used article text to automatically annotate meta-analytic databases with a range of methods. Alhazmi et al. 83 used a combination of correspondence analysis and clustering to identify subdomains in the cognitive neuroscience literature from Neurosynth text. Monti et al. 31 generated word and document embeddings in vector space from Neurosynth abstracts using deep Boltzmann machines, which allowed them to cluster words based on semantic similarity or to describe Neurosynth articles in terms of these word clusters. Nunes 74 also used article abstracts from Neurosynth to represent documents as dense vectors. These document vectors were then used in conjunction with corresponding coordinates to cluster words into categories, essentially annotating Neurosynth articles according to a new "ontology" based on both abstract text and coordinates.

Meta-analytic databases may also be used in conjunction with existing ontologies in order to redefine mental states or to refine the ontology. For example, Yeo et al. 84 used the Author-Topic model to identify connections between paradigm classes (i.e., tasks) and behavioral domains (i.e., mental states) from the BrainMap Taxonomy using the BrainMap database. Other examples include using meta-analytic clustering, combined with functional decoding, to identify groups of terms/labels that co-occur in neuroimaging data, in order to determine if the divisions currently employed in existing ontologies accurately reflect how mental states are separated in the mind (e.g., 85-87).

Integration with external databases
A resource that may ultimately be integrated with Neurosynth is Brainspell. Brainspell is a port of the Neurosynth database in which users may manually annotate the automatically extracted study information. The goal of Brainspell is to crowdsource annotation through both expert and nonexpert annotators, which would address the primary weaknesses of BrainMap (i.e., slow growth) and Neurosynth (i.e., noise in data extraction and annotation). Annotations in Brainspell may use labels from the Cognitive Paradigm Ontology (CogPO), 14 an ontology adapted from the BrainMap Taxonomy, or from the Cognitive Atlas, 76 a collaboratively generated ontology built by contributions from experts across the field of cognitive science. Users may also correct the coordinates extracted by Neurosynth, which may suffer from extraction errors, and may add important metadata like the number of subjects associated with each comparison in each study.

Brainspell has suffered from low growth, which is why its annotations have not been integrated back into Neurosynth. However, a new frontend tool for Brainspell, geared toward meta-analysts, has been developed, called metaCurious. MetaCurious facilitates neuroimaging meta-analyses by allowing users to iteratively perform literature searches and to annotate rejected articles with reasons for exclusion. In addition to these features, metaCurious users can annotate studies with the same labels and metadata as Brainspell, but, with its features geared toward meta-analysts, site usage is expected to exceed that of Brainspell proper.

While NiMARE does not natively include tools for interacting with Brainspell or metaCurious, there are plans to support NiMARE-format exports in both services.

Seed-based D-Mapping
Seed-based d-mapping (SDM), 77 previously known as signed differential mapping, is a relatively recently developed approach designed to incorporate both peak-specific effect size estimates and unthresholded images, when available. In SDM, foci are convolved with an anisotropic kernel which, unlike the Gaussian and spherical kernels employed in ALE and MKDA, respectively, accounts for tissue type to provide more empirically realistic spatial models of the clusters from the original studies. The SDM algorithm is not yet supported in NiMARE, given the difficulty of implementing an algorithm without access to code.

Model-based CBMA
Model-based algorithms, a recent alternative to kernel-based approaches, model foci from studies as the products of stochastic models sampling some underlying distribution. Some of these methods include the Bayesian hierarchical independent cluster process model (BHICP), 78 the Bayesian spatially adaptive binary regression model (SBR), 79 the hierarchical Poisson/Gamma random field model (HPGRF/BHPGM), 80 the spatial Bayesian latent factor regression model (SBLFRM), 81 and the random effects log Gaussian Cox process model (RFX-LGCP). 82 Although these methods are much more computationally intensive than kernel-based algorithms, they provide information that kernel-based methods cannot, such as spatial confidence intervals and confidence intervals on effect size estimates, and they facilitate reverse inference. A more thorough description of the relative strengths of model-based algorithms is presented in, 34 but these benefits, at the cost of computational efficiency, have led the authors to recommend kernel-based methods for exploratory analysis and model-based methods for confirmatory analysis.

NiMARE does not currently implement any model-based CBMA algorithms, although there are plans to include at least one in the future.

Surface-based meta-analysis
Currently, NiMARE only supports volumetric meta-analysis. However, we eventually plan to support surface-based meta-analyses, which may require new coordinate-based meta-analysis algorithms, as the current methods do not generalize to surfaces.

SUMMARY
The advent of open, large-scale databases of neuroimaging results, whether full, unthresholded statistical maps or simple coordinates, has allowed for the development of a wide variety of methods for performing fMRI meta-analyses and related analyses. These methods are often (but not always) released as tools for the community to use, written in a range of languages and with highly variable interfaces. As a consequence, it is difficult for meta-analysts to keep abreast of the current literature and to employ whichever method is most appropriate to address a given question. NiMARE provides a centralized repository for these tools, which makes it easier for researchers to keep track of new methods. It also equips these tools with extensive documentation and a standardized programmatic interface, allowing researchers to use whichever tool is most appropriate for their research without unnecessarily steep learning curves.
Given that NiMARE is open source and collaboratively developed on GitHub, methodologists may contribute their own meta-analytic algorithms directly, or interested third parties may implement these algorithms using papers or external tools as a basis for understanding the methods.