Evaluating the performance of tools used to call minority variants from whole genome short-read data

Background: High-throughput whole genome sequencing facilitates the investigation of minority virus sub-populations from virus-positive samples. Minority variants are useful in understanding within- and between-host diversity and population dynamics, and can potentially assist in elucidating person-to-person transmission pathways. Several minority variant callers have been developed to describe low-frequency sub-populations from whole genome sequence data. These callers differ in the bioinformatic and statistical methods used to discriminate sequencing errors from low-frequency variants.
Methods: We evaluated the diagnostic performance and concordance of published minority variant callers in identifying minority variants from whole-genome sequence data from virus samples. We used the ART-Illumina read simulation tool to generate three artificial short-read datasets of varying coverage and error profiles from an RSV reference genome. The datasets were spiked with nucleotide variants at predetermined positions and frequencies. Variants were called using FreeBayes, LoFreq, VarDict and VarScan2. The variant callers' agreement in identifying known variants was quantified using two measures: concordance accuracy and inter-caller concordance.
Results: The variant callers differed in identifying minority variants from the datasets. Concordance accuracy and inter-caller concordance were positively correlated with sample coverage. FreeBayes identified the majority of variants, although it was characterised by variable sensitivity and precision, in addition to a high false positive rate, relative to the other minority variant callers, that varied with sample coverage. LoFreq was the most conservative caller.
Conclusions: We conducted a performance and concordance evaluation of four minority variant calling tools used to identify and quantify low-frequency variants. Inconsistency in the quality of sequenced samples impacts the sensitivity and accuracy of minority variant callers. Our study suggests that combining at least three tools when identifying minority variants is useful in filtering errors when calling low-frequency variants.

Introduction
RNA viruses have been described as a population of closely related sequences that arise from rapid genomic evolution coupled with high replication and mutation rates (Domingo et al., 2012; Eigen et al., 1988; Holland et al., 1992). Genetic changes in RNA viruses result from genetic drift, erroneous replication processes and mutagenic agents, upon which natural selection acts (Moya et al., 2004). Rapid replication and mutation generate an ensemble of mutant genomes comprising both dominant and low-frequency variants. This diversity has been shown to affect the virus fitness landscape, transmission, colonization and replication (Henn et al., 2012; Stack et al., 2013; Vignuzzi et al., 2006).
Many recent studies (Henn et al., 2012; Poon et al., 2016; Stack et al., 2013) have demonstrated the potential application of virus diversity to inform person-to-person transmission during virus outbreaks. A number of methods that incorporate both genomic and epidemiologic data to infer pathogen transmission have recently been developed (Worby et al., 2017). These approaches rely partly on the accurate detection and quantification of minority variant populations from genomic samples.
Several tools have been developed to identify and quantify minority variants from short-read data (Koboldt et al., 2009; Koboldt et al., 2012; Lai et al., 2016; Macalalad et al., 2012; Wilm et al., 2012; Yang et al., 2013). Nonetheless, these tools do not fully account for discrepancies that arise from sample collection, pre-processing and sequencing, in addition to errors introduced during downstream bioinformatic analysis. Rigorous quality control in sample processing and analysis is often suggested to distinguish true biological variants from artefactual variants (Zhang & Flaherty, 2017). In some cases, sequencing errors can be reduced by developing high-fidelity protocols and laboratory quality control measures (Kinde et al., 2011; McCrone & Lauring, 2016; Watson et al., 2013). Additionally, the uncertainty resulting from random sequencing errors can be countered by sequencing larger populations at higher coverage (Zukurov et al., 2016). A number of studies have extensively explored variants from somatic or tumour samples (Hofmann et al., 2017; Koboldt et al., 2012; Krøigård et al., 2016; Lai et al., 2016; Pabinger et al., 2014) and their application in clinical genomics, but only a limited number of studies have explored the nature of variants from patient-derived samples that target viral populations (Henn et al., 2012; Macalalad et al., 2012; Wilm et al., 2012; Yang et al., 2013; Zukurov et al., 2016), particularly for respiratory viruses such as the respiratory syncytial virus (RSV).
In this study, we evaluated four published minority variant detection tools using artificial short-read data with different error profiles. We explored the tools' ability to detect and quantify minority variants and assessed their overall agreement, which we defined using two metrics: concordance accuracy, which measures the combined accuracy of the variant callers, and inter-caller concordance, which is the size of the largest set of variant callers that agree at each position. We show that both concordance metrics depend on sample coverage and are influenced by the quality of the input data.

Methods
Overall, we considered ten published, open-source tools with presumed ability to call minority variants from virus deep sequence data. A number of callers were excluded from the analysis for various reasons. For example, the GATK HaplotypeCaller primarily targets germline variant calling from human data rather than variant calling from viral samples. We experienced technical difficulties in setting up the Platypus caller, and even after setup, Platypus did not provide calls across all levels of coverage in our datasets. SAMtools mpileup does not provide allele frequencies directly, while V-Phaser was superseded by V-Phaser 2, which has reported bugs and could not handle reads aligned with BWA-MEM. Therefore, the following four tools were evaluated: FreeBayes version 1.1.0-3-g961e5f3, LoFreq version 2.1.2, VarDict version 30.3.17 and VarScan version 2.4.2. A schematic diagram showing the overall approach is shown in Figure 1.

Artificial datasets
Artificial datasets were generated from an RSV reference sequence (GenBank accession number KX510245.1) using ART-Illumina version 2.5.8 (Huang et al., 2012). ART-Illumina took the reference RSV genome sequence as input and generated artificial reads using data-derived error models to mimic real sequence data. Each dataset comprised eight samples with varying depths of coverage (20, 50, 100, 500, 1000, 2000, 5000 and 10,000) and was generated using the methods described in Supplementary File 1, section S1.1. The first dataset did not incorporate an error profile.

Amendments from Version 1
This version of the manuscript has been revised in response to the reviewers' comments and suggestions on version 1 of the paper. We have modified the main text and the figures, and removed two tables that contained information that could be described in the main text.
The main changes are:
• Removed Table 1 and Table 2 from the main text and described the same information in the main text.
• Revised Table 3 to reflect the proposed corrections.
• Revised Figure 4 to reflect the amended changes.
• Added a new figure, Figure 5, as per the reviewers' suggestion.
• Revised the initial R code to correct a bug that resulted in miscalculation of TP and TN for the FreeBayes estimates.
• Revised the supplementary materials to include the explicit methods used in the variant calling procedures.
These changes are clarified further in the responses to the reviewers' specific comments.

Error profile models (empirical error models based on the distribution of base quality scores) were generated from uncompressed raw FASTQ reads derived from two sequenced RSV whole-genome samples, referred to as a "good" and a "bad" sample based on their FastQC metrics (Supplementary File 2 and Supplementary File 3), and were used to generate the artificial reads in datasets 2 and 3. The ART-Illumina-generated artificial SAM files were converted to the BAM format, sorted and indexed using SAMtools version 1.3.1 for each dataset.
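For illustration, the sketch below shows how such a simulation step can be assembled with the ART toolkit and SAMtools; the file names, profile prefix and fold-coverage value are placeholders rather than the exact settings used (those are given in Supplementary File 1).

```bash
# Build an empirical quality-score profile from a real sample's FASTQ files;
# art_profiler_illumina ships with the ART toolkit. "good_profile" and
# fastq_dir/ are hypothetical placeholders.
art_profiler_illumina good_profile fastq_dir/ fastq

# Simulate paired-end reads from the RSV reference at 100x fold coverage,
# emitting a SAM alignment alongside the simulated FASTQ files.
art_illumina -i KX510245.1.fasta -p -l 150 -f 100 -m 300 -s 10 \
  -1 good_profileR1.txt -2 good_profileR2.txt \
  -o rsv_sim_100x -sam

# Convert, coordinate-sort and index the alignment with SAMtools.
samtools view -bS rsv_sim_100x.sam > rsv_sim_100x.bam
samtools sort -o rsv_sim_100x.sorted.bam rsv_sim_100x.bam
samtools index rsv_sim_100x.sorted.bam
```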
A set of 166 randomly and uniformly generated nucleotide mutations at different frequencies (Supplementary Table 1) was inserted into each of the artificial datasets using BamSurgeon (Ewing et al., 2015), such that a base change was made amongst the reads at each target alignment position, as described in Supplementary File 1, section S1.2. This process was repeated with a separate set of 155 positions comprising mutations with frequencies below 0.5 (Supplementary Table 2).
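A minimal sketch of this spiking step, assuming BamSurgeon's addsnv.py and a BED-like variant file listing position, target allele frequency and alternate base (paths and example rows are hypothetical):

```bash
# variants.bed: one spike-in per line -- chromosome, start, end, desired
# variant allele frequency, alternate base (example rows are hypothetical).
#   KX510245.1  1204  1204  0.05  T
#   KX510245.1  3877  3877  0.30  A

# Spike the variants into the sorted alignment; BamSurgeon edits reads at
# each target position until the requested allele fraction is reached.
addsnv.py -v variants.bed \
  -f rsv_sim_100x.sorted.bam \
  -r KX510245.1.fasta \
  -o rsv_sim_100x.spiked.bam

# Re-sort and index the mutated BAM before variant calling.
samtools sort -o rsv_sim_100x.spiked.sorted.bam rsv_sim_100x.spiked.bam
samtools index rsv_sim_100x.spiked.sorted.bam
```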

Variant calling
The BAM files from each of the three datasets were used as input to each of the four variant callers (FreeBayes, LoFreq, VarDict and VarScan2). The default parameter options used for each tool are provided explicitly in Supplementary File 4. All output files were produced in the variant call format (VCF), or as a tabular file in the case of VarDict. The output from the VCF and tabular files was parsed and written to a comma-separated (CSV) file.
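For orientation, minimal invocations of the four callers are sketched below. The 1% minimum allele-fraction thresholds shown are illustrative placeholders only; the settings actually used are those listed in Supplementary File 4.

```bash
REF=KX510245.1.fasta
BAM=rsv_sim_100x.spiked.sorted.bam

# FreeBayes: lower the alternate-allele fraction/count thresholds so that
# sub-consensus variants are reported.
freebayes -f "$REF" --min-alternate-fraction 0.01 --min-alternate-count 2 \
  --pooled-continuous "$BAM" > freebayes.vcf

# LoFreq: a quality-aware low-frequency caller.
lofreq call -f "$REF" -o lofreq.vcf "$BAM"

# VarDict: requires a region file (regions.bed is a hypothetical BED file
# covering the RSV genome) and emits tabular output by default.
vardict -G "$REF" -f 0.01 -N rsv_sample -b "$BAM" \
  -c 1 -S 2 -E 3 regions.bed > vardict.tsv

# VarScan2: consumes a samtools mpileup stream.
samtools mpileup -f "$REF" "$BAM" | \
  java -jar VarScan.jar mpileup2snp --min-var-freq 0.01 --output-vcf 1 \
  > varscan.vcf
```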

Performance measures
To evaluate the performance of the variant calling algorithms, we compared the sequence generated by each variant caller $vc$, denoted $S_{vc}$, to the gold standard "spiked" sequence, denoted $S_{true}$, at each of $N = 15205$ nucleotide positions. The accuracy of each variant caller is its normalized agreement with the gold standard sequence (equivalently, one minus the normalized Hamming distance):

$$\mathrm{Acc}(vc) = \frac{1}{N} \sum_{i=1}^{N} \delta\big(S_{vc}(i),\, S_{true}(i)\big),$$

where $\delta(x, y)$ is the standard discrete metric giving 1 when $x = y$, and 0 otherwise. By distinguishing between the sets of positions where variants did and did not occur in the gold standard sequence, we calculated sensitivity, specificity, precision and accuracy (Table 1).
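For completeness, the conventional confusion-matrix formulas (assumed to match those summarised in Table 1) are:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN + FP},$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}.$$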

Concordance analysis
We defined two concordance metrics to present the level of agreement between the different callers in detecting the same variant positions in the sequence. The first is concordance accuracy, $C_{acc}$, which measures the combined accuracy of the variant callers at each true variant position $i$:

$$C_{acc}(i) = \frac{1}{|VC|} \sum_{vc \in VC} \delta\big(S_{vc}(i),\, S_{true}(i)\big),$$

where $VC$ is the set of evaluated variant callers, computed for each true variant position. The second is inter-caller concordance, which is the size of the largest set of variant callers that agree at each position $i$, without reference to any gold standard sequence. We used both bar plots and heat maps to visualize the effect of coverage on $C_{acc}$. Visualization of inter-caller concordance for variant sets was achieved using a bar plot and expounded with UpSet plots (Lex et al., 2014) in R version 3.4.2.

Results
We used three artificial datasets of varying coverage and error profile to assess the concordance accuracy and inter-caller concordance of four minority variant callers. The first dataset comprised artificial reads based on an RSV genome; the second comprised a similar simulated set of reads incorporating an error profile from the set of reads used to assemble the reference genome; and the third was generated using an error profile from a poorly sequenced sample. Overall, concordance accuracy improved with increasing sample coverage (Figure 2), and the proportion of positions that could not be identified by any variant caller decreased with increasing coverage (Supplementary Figure 1). For all three datasets, and at each coverage level, fully concordant variants were below 50% of the total variants, suggesting that considering only fully concordant positions eliminated a substantial number of variant positions. There were marginal improvements in the number of concordant variants in the second dataset compared to the first and the third. Across all datasets, there was little improvement in detecting fully concordant positions beyond a coverage of 2000 (Figure 2). We used UpSetR plots to provide a visual summary of the combinations of variant callers that contributed to the observed concordance accuracy (Supplementary Figure 4).
FreeBayes identified the majority of variants across all the datasets (Figure 3), although it was characterised by a substantial trade-off between sensitivity and precision in artificial dataset 1 (Figure 4), in addition to a high false positive rate, relative to the other minority variant callers, observed in datasets 1 and 3. FreeBayes nonetheless reported comparatively better sensitivity than the rest of the tools. LoFreq was the most conservative of the evaluated callers and missed the majority of variants across all three datasets. In addition, LoFreq's sensitivity did not differ substantially at coverages above 100 (Figure 4). VarDict's performance increased with read coverage, though not by a great magnitude compared to FreeBayes and VarScan2; its performance across the different datasets was more consistent than that of the other tools. Overall, the reported frequencies of called minority variants were lower than the spiked frequencies (Figure 5). This observation was consistent across all datasets and all variant callers.

Discussion
Detecting and reporting minority variant calls is challenging, given that low-frequency variants occur at frequencies comparable to the errors generated by sequencing and PCR reactions. Recent studies have linked the sharing of minority variants with transmission patterns, and hence it is important to distinguish actual minority variants from spurious variant calls. Minority variant callers use different detection algorithms and statistics, each of which attempts to optimize an aspect of the variant calling process. Consequently, different callers can report disparate results from datasets of varying sequencing depth and error profile.
This study aimed to identify the proportion of positions that were recognized as variants by a set of tools using three artificial datasets of varying coverage and error profiles. Concordance accuracy and inter-caller concordance measures were dependent on the sample coverage and error profile.
Sensitivity for the majority of the tools was positively correlated with depth of coverage, as observed previously (Spencer et al., 2014) in a study that investigated the performance of methods used to detect low-frequency variants. It is important to note that the tools provide different performance metrics depending on the variant threshold (Supplementary File 5). In the first artificial dataset, VarDict detected true positive variants with comparably good performance (sensitivity 43.4% - 72.3%), though it was marginally invariant to changes in average coverage above 20. VarDict has in-built features that could contribute to its efficient performance; it can activate an "amplicon calling mode" that filters out amplicon-biased variants and mispaired primers as PCR artefacts. A similar pattern was observed with LoFreq, where sensitivity was not significantly affected by depth of coverage. VarScan2 was more affected by coverage and maintained average sensitivity (6.6% - 58.4%). Applying filters in VarScan has been reported to improve sensitivity by reducing the number of false positives (Hofmann et al., 2017; Koboldt et al., 2013). FreeBayes' trade-off between sensitivity and precision has also been reported by other studies (Hwang et al., 2015; Sandmann et al., 2017). The use of caller-specific filters could have enhanced the sensitivity of some callers, but adopting default parameters allows an equivalent assessment of tool performance. Moreover, exploring all possible combinations of tuning parameters is challenging, time-consuming and sometimes impractical.
Based on artificial reads from the second dataset, FreeBayes performed comparatively better than the other tools, with a very low false positive rate and better sensitivity (46.4% - 94.6%). This suggests that FreeBayes is potentially useful in identifying minority variants when the sample data come from reads with a low error profile, and implies that the observed error rates are an outcome of tool performance on data of a given quality.
Specificity is a caller's ability to correctly predict the absence of a variant. Variant callers rely on high specificity to minimize the number of false positive calls, thereby reducing the need for post-call filtering; however, this can come at the cost of filtering out true low-frequency variants. Moreover, high accuracy measures demonstrate the reliability of a variant caller in correctly identifying true variants.
All the minority variant callers reported slightly lower frequencies for called variants compared to the frequencies of the original spiked variants (Figure 5). This could be explained by many of the callers being tuned to report conservative frequencies owing to stringent pre-processing criteria. A thorough investigation of this observation is therefore required.
In the absence of an explicit error model for samples of heterogeneous sequencing quality, combining at least three tools when identifying minority variants could potentially assist in filtering out errors from low-frequency variants. Given that there are no definitive data and next generation sequencing pipeline standards for virus-specific variant calling approaches, there are opportunities to develop robust methods and tools that strike a balance between detecting errors and true minority variants in field virus samples of differing sequencing quality.
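As an illustration of such a combination, the sketch below keeps only sites reported by at least three of the four callers, using bcftools. This ensemble filter is not part of the evaluated pipeline, and it assumes the VarDict tabular output has first been converted to VCF.

```bash
# Compress and index each caller's VCF; bcftools isec requires bgzip/tabix.
for caller in freebayes lofreq vardict varscan; do
  bgzip -c "${caller}.vcf" > "${caller}.vcf.gz"
  tabix -p vcf "${caller}.vcf.gz"
done

# Keep only the variant sites reported by at least 3 of the 4 callers.
bcftools isec -n+3 -p consensus_calls \
  freebayes.vcf.gz lofreq.vcf.gz vardict.vcf.gz varscan.vcf.gz
```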

Data availability
The data analysis scripts and datasets used in the analysis are available from our institutional repository.

The funders had no role in the study design, data generation and analysis, decision to publish, or preparation of the manuscript.

Supplementary File 1: A description of data generation methods and command line tools used to create the artificial datasets.

Supplementary File 2: FastQC metrics for sample used to generate simulated reads for the second dataset.

Supplementary File 3: FastQC quality profile for sample used to generate simulated reads for the third dataset.

Supplementary File 4: Tool-specific parameter settings for variant calling.

Supplementary Tables 1-4.
Supplementary Table 1: 166 artificially generated nucleotide mutations with frequencies from 0-1.
Supplementary Table 2: 156 artificially generated nucleotide mutations with frequencies below 0.5.

Referee report

Thank you for the detailed response and revision. The changes you've made greatly improve the paper and I especially appreciate the inclusion of run parameters for the tools and the additional stratification plots.

These revisions address all the previous suggestions. My only additional comment, from seeing Figure 5 and the run parameters, is that most of these tools produce a high percentage of low-frequency noise, which will result in poor precision when trying to identify variants in real samples. A useful follow-up would be to explore filtering or ensemble methods to help improve this specificity.
Thank you again for the work on this paper.
Competing Interests: No competing interests were disclosed.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee report

This is a comparative review of variant calling tools with a focus on the detection of minority variant calls, specifically with reference to virus data. The authors have provided data on four freely available tools that can be tuned to call minority variants, or are specifically designed for this purpose. The data used here were artificial reads generated with the ART-Illumina tool, spiked with variants at known locations. This was replicated eight times at levels of read coverage spanning four orders of magnitude. The outcome of the experiment was a low level of concordance between tools. None of the tools was able to identify all of the spiked-in variants.
The approach taken here is technically sound, as it is based on a strictly defined truth set, although I am slightly uncomfortable with the lack of real data (but equally I am aware of the methodological problems real data presents in this kind of scenario). The study is useful for the wider community as a) it reiterates the point that using a single tool for a given bioinformatics task comes with the risk of missing information and b) it provides concrete pointers as to which tools perform well in this scenario.
My main reservation is that there is no mention of the importance of parameter settings. Different parameter values can affect variant calling outcomes dramatically, even when just comparing different runs of the same tool. FreeBayes alone has over 70 command line parameters, many of these continuous variables. This makes for an incredibly large parameter space, and using different sets of parameter values can potentially have much more of an impact on the results than the choice of tool. This should at least be mentioned in the Discussion section.

Introduction
para 1 line 7: is the "and" redundant?
para 5 line 7: comma after "callers"

Methods
para 1 line 1: "we considered eleven published, open-source tools" - I counted six excluded tools in Table 1, plus four that have been taken through to evaluation = 10.
para 2 ("Artificial datasets"): The concept of the error profile has not been well explained. This section needs to be expanded. I would like to see some description of what the error profile consists of, how it has been derived from the FASTQC data, and how it has been applied to the reads in practice.
para 3 line 2: "at varying frequencies" - what were they? I presume we are talking about the proportion of reads at a spiked-in variant site that carried the alternate allele at this site. This information strikes me as absolutely critical for this paper - this is about minority variants after all. Please provide this data.
para 4 line 8: "written as a comma separated (CSV) files" should be "written as a comma separated (CSV) file"

Results
para 3 line 1: "FreeBayes identified majority" should be "FreeBayes identified the majority"
para 3 line 7: "it missed majority" should be "it missed the majority"
General comment: VarDict should get a mention in this section, as it actually had better recall rates than FreeBayes in 6 out of 8 samples.

Discussion
para 1 line 5: change "ascertain" to "distinguish"
para 3 line 1: "Sensitivity for majority" should be "Sensitivity for the majority"

Table 3
As far as I can tell, the counts for FN + TP should always add up to 166. This is not the case for the majority of the FreeBayes runs (except for FreeBayes with samples 1 and 3, which are ok).

Supplementary File S1
To ensure reproducibility, please include the BAMSurgeon command line statement for spiking in the variants.

Supplementary Figure 2
It is not apparent what the individual figures/pages in the PDF file represent. Please label these appropriately.

If applicable, is the statistical analysis and its interpretation appropriate? Yes

Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
Competing Interests: No competing interests were disclosed.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Author Response 13 Aug 2018
George Githinji, KEMRI-Wellcome Trust Research Programme, Kenya
We are grateful for the time taken to review this work and for providing useful comments that we have considered in this new draft of the manuscript.
We agree on the challenges of obtaining and using the appropriate actual data when conducting these types of analysis. We have done our best to generate reliable artificial datasets based on an actual RSV reference genome and error profiles from two sequenced samples using the methods described in this manuscript.
Regarding the importance of parameter settings and the lack of data on how individual tools were parameterized, we have explicitly stated the parameter settings in the variant calling section and in Supplementary File 4, and in the discussion section we highlight the challenge of exploring all possible parameter settings. The concept of the error profile is now explained in paragraph 1 of the artificial datasets section of the current manuscript. The actual frequencies used to spike the variants are now described and provided in Supplementary Table 1.

Referee report

... samples to model different error rates, and evaluated the impact of differing coverage on sensitivity and specificity.
My main suggestion is to stratify comparisons by variant frequency to assess how callers do with different low frequency events, and I provide more specific comments below.

Methods suggestions
The stratifications by coverage, error rates and callers are great for identifying the effect of different callers on detection. The one missing component is the allele frequency of the generated variants. The paper does not define the range of frequencies generated, and this will have a large impact on the ability of detection across callers. Looking through the list of generated variants (Githinji_2018_variant.list.tab) the range varies from high frequency (90%+) to very low frequency (<1%). Since the focus is on low frequency detection, I'd suggest stratifying the results into bins: standard (>25%), moderately low frequency (5-25%) and low frequency (<5%) and examine the caller and depth detection within these bins. Pending the results of this, the authors may want to generate additional low frequency variants to help differentiate caller methods.
A confusing aspect of evaluating the depth metrics is that the number of possible true positives differs between sample depths. In Table 3, the FreeBayes TP + FNs are 166, 165 and 166 for 20x, 50x and 100x. However, they change to 61, 55 and 55 for 500x, 1000x and 2000x. Other callers seem to be consistent. Why does FreeBayes have different total variant numbers across depths?

Table 1 should be compressed into a single paragraph saying that HaplotypeCaller, Platypus and mpileup target germline calling and not low frequency detection; it doesn't need to be a separate table. Table 2 should be replaced with a link to descriptions of true/false positives/negatives and does not need to be a separate figure in the paper.

Paper suggestions
The title should reflect that this paper describes viral variant calling (versus, say, low-frequency somatic variant detection).

Are sufficient details of methods and analysis provided to allow replication by others? Yes
If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes