ICR142 Benchmarker: evaluating, optimising and benchmarking variant calling using the ICR142 NGS validation series [version 1; peer review: 2 approved with reservations]

Evaluating, optimising and benchmarking next generation sequencing (NGS) variant calling performance are essential requirements for clinical, commercial and academic NGS pipelines. Such assessments should be performed in a consistent, transparent and reproducible fashion, using independently, orthogonally generated data. Here we present ICR142 Benchmarker, a tool to generate outputs for assessing variant calling performance using the ICR142 NGS validation series, a dataset of exome sequence data from 142 samples together with Sanger sequence data at 704 sites. ICR142 Benchmarker provides summary and detailed information on the sensitivity, specificity and false detection rates of variant callers. It also automatically generates a single-page report highlighting key performance metrics and how performance compares with widely-used open-source tools. We used ICR142 Benchmarker with VCF files outputted by GATK, OpEx and DeepVariant to create a benchmark for variant calling performance. This evaluation revealed pipeline-specific differences and shared challenges in variant calling, for example in detecting indels in short repeating sequence motifs. We next used ICR142 Benchmarker to perform regression testing with versions 0.5.2 and 0.6.1 of DeepVariant. This showed that v0.6.1 improves variant calling performance, but there was evidence of some minor changes in indel calling behaviour that may benefit from attention in future updates. The data also allowed us to evaluate filters to optimise DeepVariant calling, and we recommend using 30 as the QUAL threshold for base substitution calls when using DeepVariant v0.6.1.


Introduction
Variant calling from next generation sequencing (NGS) data is a highly active area of bioinformatics, important to many clinical, commercial and academic applications. Several open-source tools are available and have been integrated into variant calling pipelines by many laboratories [1][2][3][4] . Commercial solutions and/or in-house proprietary tools are also increasingly being used by NGS analysis providers. However, comparisons of the performance of different pipelines are limited 5,6 . This makes the evaluation, standardisation and regulation of NGS variant calling performance difficult 7 .
Assessment of variant calling performance is vital for improvement and optimisation of NGS variant calling. Comparative performance across pipelines is also of increasing importance, as the number of different analysis tools and providers continues to expand. The availability of benchmarking datasets with orthogonally confirmed positive and negative sites are required for optimal independent assessment of sensitivity, specificity and false detection rates (FDR).
We previously made available the ICR142 NGS validation series that includes NGS and Sanger data from 142 samples 8 . To construct the ICR142 NGS validation series we analysed exome sequence data from the 142 samples with multiple variant callers and undertook Sanger sequencing analysis at 704 sites to generate a dataset useful for systematic, transparent variant calling assessment and comparison 8 .
Here we present ICR142 Benchmarker 9 , a tool to generate outputs for assessing variant calling performance using the ICR142 NGS validation series. We used ICR142 Benchmarker with VCF files from GATK, OpEx and DeepVariant to provide guidance on expected variant caller performance compared to three open-source pipelines 1,10,11 . We then used ICR142 Benchmarker with VCF files from two commercial NGS variant calling providers, to provide comparison data to facilitate optimisation of their in-house pipelines, and to help give transparency of performance for their customers.

ICR142 NGS validation series
The ICR142 NGS validation series is a dataset that includes high-quality exome sequence data from 142 samples together with Sanger sequence data at 704 sites; 416 sites with variants and 288 sites at which variants were called by a variant caller, but no variant is present in the corresponding Sanger sequence 8 . In total, the ICR142 NGS validation series includes 704 sites, comprised of 123 base substitution variants, 293 insertion and/or deletion (indel) variants, 41 negative base substitution sites and 247 negative indel sites (Figure 1) 8 .
To determine if a variant was present we examined each Sanger sequence with Chromas software v2.13. For each site we selected an ENST from release 65 as the reference sequence. We analysed a region of interest of at least 100 base pairs (bp) of sequence flanking each variant site to allow for position/annotation errors. We considered a base substitution to be confirmed if the correct variant was called at the exact position and the variant base signal was accompanied by a corresponding reduction in the reference base signal. We considered an indel variant to be confirmed if an indel variant was present in the region of interest and the indel variant allele signal was present along the complete length of the region of interest.
We considered a site negative for a base substitution if the exact base substitution was not present. We considered a site negative for an indel if no indel was detected in the 200bp region of interest.

ICR142 Benchmarker
Implementation. ICR142 Benchmarker 9 is implemented as an easy-to-use tool for assessing variant calling performance using the ICR142 NGS validation series. The tool includes an analysis script and three supporting files: the Sanger data file, the Report template file and the descriptive ColumnHeadings.txt file. ICR142 Benchmarker provides a series of informative metrics with increasing levels of detail, from overall calling performance to per-site profiles, together with a one-page report summarising both standalone performance and comparative performance against widely-used open-source pipelines. ICR142 Benchmarker is implemented in R and is publicly available at https://github.com/RahmanTeamDevelopment/ICR142_Benchmarker/releases. ICR142 Benchmarker requires an input file containing the paths to VCF version 4.X files. The VCF files must each represent a single sample. The script expects the ALT column to contain only one call. Any base substitution calls are expected to have REF and ALT values of length one, e.g. REF / ALT of GTCA / ATCA should be trimmed to G / A. Multi-sample VCF or gVCF files should be parsed to fulfil the above criteria.
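The REF/ALT trimming expectation can be illustrated with a short sketch. This helper is hypothetical and not part of ICR142 Benchmarker (which is implemented in R and expects input already normalised this way); note that trimming a shared prefix in a real VCF would also require adjusting the POS field.

```python
def trim_alleles(ref, alt):
    """Trim shared trailing, then leading, bases from a REF/ALT pair,
    keeping at least one base on each side."""
    # Remove the shared suffix first
    while len(ref) > 1 and len(alt) > 1 and ref[-1] == alt[-1]:
        ref, alt = ref[:-1], alt[:-1]
    # Then remove the shared prefix (POS would need incrementing per base)
    while len(ref) > 1 and len(alt) > 1 and ref[0] == alt[0]:
        ref, alt = ref[1:], alt[1:]
    return ref, alt

# The example from the text: REF/ALT of GTCA / ATCA trims to G / A
print(trim_alleles("GTCA", "ATCA"))  # ('G', 'A')
```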
At each site, ICR142 Benchmarker assesses both variant detection and accuracy of variant representation, with missing genotypes allowed. Base substitution variants are both detected and accurately represented if the correct variant is called at the exact position. If an incorrect base substitution is called at that position it is considered a missed variant. For negative base substitution sites a false positive base substitution call is assigned if any base substitution call is made at the exact position. Due to the more complex nature of indel detection and representation, a stringent exact matching approach is not appropriate. We thus report indel detection as the number of indel calls within a 200bp window centred on the site position, for both true indel and negative indel sites. An indel variant is considered to be both detected and accurately represented if an exact match is found. For negative indel sites a false indel call is assigned if the indel detection value is greater than 0. Summary metrics are calculated from the detection values. Any missing values are treated as 'no call' in the metric calculations.
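The per-site indel logic described above can be sketched as follows. The field names and data structures here are assumptions for illustration; the real tool is implemented in R and reads VCF files directly.

```python
def evaluate_indel_site(site, calls, window=200):
    """Return (detected, exact): the number of indel calls falling in a
    `window` bp window centred on the site position, and whether any of
    them exactly matches the expected REF/ALT at the expected position."""
    half = window // 2
    nearby = [c for c in calls
              if c["chrom"] == site["chrom"]
              and abs(c["pos"] - site["pos"]) <= half]
    exact = any(c["pos"] == site["pos"]
                and c["ref"] == site["ref"]
                and c["alt"] == site["alt"]
                for c in nearby)
    return len(nearby), exact
```

For a negative indel site, a detected count greater than zero is scored as a false indel call; for a true indel site, the exact-match flag distinguishes accurate representation from mere detection.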
ICR142 Benchmarker generates five output files: four tab-separated .txt files and one Word .docx document. The Summary.txt file provides summary performance metrics for the evaluated method, specifically the overall sensitivity, specificity and false detection rate (FDR) values. These three metrics are also calculated separately for base substitutions and indels. The FullResults.txt file contains all of the Sanger validation information from the ICR142 dataset and information on site-specific performance at each of the 704 sites. The FalsePositives.txt and TruePositives.txt files contain the relevant lines of the input VCF files for false positive and true positive variant calls, respectively. A detailed description of all columns in the .txt files is provided in the ColumnHeadings.txt supporting file.
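The three summary metrics follow the standard definitions; a minimal sketch (the handling of 'no call' sites, described above, is omitted here):

```python
def summary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and false detection rate (FDR) from
    counts of true/false positive and negative sites."""
    sensitivity = tp / (tp + fn)   # fraction of true variants detected
    specificity = tn / (tn + fp)   # fraction of negative sites correctly uncalled
    fdr = fp / (tp + fp)           # fraction of calls that are false
    return sensitivity, specificity, fdr

print(summary_metrics(95, 5, 5, 95))  # (0.95, 0.95, 0.05)
```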
The Report.docx file provides a summary variant calling analysis report of performance using the ICR142 dataset. This single-page document is directly constructed from the Summary.txt and FullResults.txt files and is thus transparent and reproducible. Key points from the detailed outputs are highlighted to the user, including information about performance compared to widely-used open-source variant callers.
Operation. ICR142 Benchmarker can be installed by running a simple Bash script. Installation requires R version 3.1.2 or later and a capacity to build packages from source. ICR142 Benchmarker implements full version control using packrat 12 . This approach ensures ICR142 Benchmarker implementation will not be affected by future changes of incorporated packages or their dependencies. Once installed, the tool can be run from a Linux/Unix command line. The ICR142 Benchmarker documentation is available at GitHub: https://github.com/RahmanTeamDevelopment/ICR142_Benchmarker/.

Assessing variant calling performance
To evaluate the utility of the ICR142 NGS validation series and ICR142 Benchmarker we analysed data from three different open-source variant callers, GATK, OpEx and DeepVariant 1,10,11 and two commercial variant callers from Company A and Company B.
To generate BAM files for the GATK analysis, we aligned the ICR142 FASTQ files with BWA-MEM v0.7.12 and removed duplicates using Picard v1.129 13 . We ran a GATK v3.4-46 analysis on the 142 BAM files to create a multi-sample VCF file (Supplementary File 1). We then applied standard additional filters of AB > 0.2, DP ≥ 10 and GQ ≥ 20 and included the remaining variants as the GATK set.
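These additional thresholds amount to simple per-genotype cutoffs. A hypothetical illustration of the rule (the actual filtering was applied to the GATK multi-sample VCF; the function and variable names here are assumptions):

```python
def passes_gatk_filters(allele_balance, depth, genotype_quality):
    """Apply the AB > 0.2, DP >= 10 and GQ >= 20 thresholds described above."""
    return allele_balance > 0.2 and depth >= 10 and genotype_quality >= 20

print(passes_gatk_filters(0.45, 32, 99))  # True
print(passes_gatk_filters(0.15, 32, 99))  # False: allele balance too low
```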
We ran OpEx v1.0.0, which uses Platypus v0.1.5 as its variant caller, with default settings to generate 142 single sample VCF output files from the ICR142 FASTQ files 11 . Variants flagged as "high" by OpEx were included as the OpEx set.
Two commercial variant calling providers, referred to as Company A and Company B, supplied data for the ICR142 validation series: Company A supplied 142 individual VCF files and Company B supplied a multi-sample gVCF file.
We pre-processed all multi-sample and gVCF files to ensure compatibility with the script. Multi-sample files were split into 142 single sample files with the vcf-subset command in vcftools v0.1.14 14 . For each gVCF file, we analysed the variant call subset to generate an initial result for each site. The reference call subset was then used to assign a missing value at sites with no call.

Variant detection performance
We used the data from GATK, OpEx and DeepVariant v0.6.1 to provide a baseline for expected variant calling performance (Table 1 and Supplementary File 2). Comparison of the three pipelines showed concordance at 92% of sites, both positive and negative. Because the same alignment files were used for both OpEx and DeepVariant (BWA), with a related but different aligner used for GATK (BWA-MEM), one might have expected the OpEx and DeepVariant results to be more similar to one another than to GATK. However, the aligner used did not seem to have a strong impact, as GATK and DeepVariant showed similar performance, while OpEx had a better false detection rate but lower sensitivity. This was expected, as the OpEx "high" quality filter was designed to achieve exactly this balance for its first-pass exome sequence analysis 11 .
Sites where all three pipelines called false positives highlight common challenges in variant calling. False positive base substitutions in POTEH and CHEK2 are likely due to non-specific capture of sequences derived from homologous genomic sequences. For example, the region surrounding the false positive position on chromosome 22 in POTEH has 90-95% homology with other paralogs of the POTE gene, with the allele at the exact position varying between them. False positive indels called in MUC13 and SLC39A14 provide examples of the challenges of variant calling in short repeating sequence motifs in NGS data. Although it is possible that the Sanger result is a false negative at these sites, we consider this to be unlikely.
The discordant false positive values reveal pipeline-specific differences in variant calling performance. GATK had seven unique sites with false positives, i.e. not called by either OpEx or DeepVariant, all of which were indels. These included two sites with a long insertion (SiteIDs 31, 290) and one site with a cluster of multiple calls (SiteID 75) (Supplementary File 2). Within the cluster, there would be no overall change in length if one could assume that all calls occurred on the same allele. However, phasing information was not provided by GATK until v3.3, and is only run automatically under specific conditions in more recent versions. Users of GATK should be cautious when multiple indels are called in close proximity in the same sample and consider visual inspection of the BAM file to check phasing, if phasing was not performed automatically. DeepVariant had four unique sites with false positives, all of which were base substitutions with QUAL values ranging from 9.3 to 20.3 15 . OpEx did not have any unique sites with false positives, consistent with the priority of the high quality OpEx filter to limit the false detection rate.
Indel detection and representation accuracy

ICR142 Benchmarker makes a distinction between variant detection and accurate variant representation, since indels detected by NGS are often validated by an orthogonal technique such as Sanger sequencing, and the call amended if required. There were ten variants that were detected by all three pipelines but not correctly represented by at least one pipeline. Nine variants were incorrectly represented by all three pipelines. These were all complex indels, indicating the need for improvement or further standardisation in the representation of this important class of variant. The final variant, an inframe deletion of 24bp in GPRIN1 (SiteID 607), was correctly represented by OpEx, but both GATK and DeepVariant represented this variant as two separate frameshifting deletions of 13bp and 11bp. This is a crucial difference, as the functional impact of inframe and frameshifting variants is often markedly different. Excluding complex indels, all three methods had greater than 98% accuracy; only OpEx achieved 100% accuracy, with all of the 264 detected insertions or deletions correctly represented.
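The inframe/frameshift distinction in the GPRIN1 example comes down to whether the net length change of the calls is divisible by three. A sketch with hypothetical alleles (the real 13bp and 11bp deletion sequences are not reproduced here):

```python
def net_length_change(calls):
    """Net length change (sum of ALT minus REF lengths) if all indel
    calls in a cluster occurred on the same allele."""
    return sum(len(alt) - len(ref) for ref, alt in calls)

def is_inframe(calls):
    """A net length change divisible by three leaves the reading frame intact."""
    return net_length_change(calls) % 3 == 0

# Hypothetical alleles mimicking the GPRIN1 case: deletions of 13bp and
# 11bp, individually frameshifting but jointly equivalent to one 24bp
# inframe deletion.
del_13bp = ("G" * 14, "G")
del_11bp = ("C" * 12, "C")
print(is_inframe([del_13bp]))            # False
print(is_inframe([del_13bp, del_11bp]))  # True
```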
Utility in variant calling regression testing

Variant calling pipelines are frequently updated. Regression testing ensures previously developed and tested pipelines still perform as expected after changes.

Utility in creation of variant detection filters
Many variant callers apply filters of the raw calls to improve performance. We believe the ICR142 validation series and ICR142 Benchmarker can be used to inform optimal filter creation.
To evaluate this we investigated the performance of DeepVariant v0.6.1. We found that while sensitivity was excellent for both base substitutions and indels, specificity was surprisingly low for base substitutions, at 85% (Table 1). We examined the quality information returned by DeepVariant for all base substitution positive and negative sites in the ICR142 series 15 . Imposing a threshold of 30 on the QUAL column for base substitution calls reduced the false detection rate from 5% to 2% and increased the specificity to 95%, while only marginally reducing the sensitivity, to 99%: a single true variant, with a QUAL value of 29.3, was excluded. We thus recommend a QUAL threshold of 30 for base substitution calls when using DeepVariant v0.6.1.
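The recommended filter can be expressed as a simple per-call rule. A sketch (the function name is an assumption; only base substitutions, i.e. single-base REF and ALT, are filtered):

```python
def keep_call(ref, alt, qual, threshold=30.0):
    """Keep a call unless it is a base substitution below the QUAL threshold."""
    is_substitution = len(ref) == 1 and len(alt) == 1
    return qual >= threshold if is_substitution else True

print(keep_call("A", "G", 29.3))   # False: substitution below threshold
print(keep_call("A", "AT", 5.0))   # True: indels are not filtered here
```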

Benchmarking variant detection performance
Using the concordant data from the open-source pipelines allowed us to describe the expected baseline performance for variant calling (Table 1).

Conclusion
Evaluation, optimisation and benchmarking of performance, including comparison with current widely-used pipelines, are essential for clinical, commercial and academic NGS variant calling applications. We have developed a tool, ICR142 Benchmarker, to make such assessments straightforward, transparent and reproducible using the ICR142 NGS validation series.

Grant information
The work was supported by the Wellcome Trust [200990].

Acknowledgements
We acknowledge support from the NIHR RM/ICR Specialist Biomedical Research Centre for Cancer. This work was undertaken as part of the Transforming Genetic Medicine Initiative (www.thetgmi.org).

Supplementary File 1. GATK analysis commands

Supplementary File 2. Site-specific ICR142 Benchmarker results for GATK, OpEx and DeepVariant
The descriptions of the column headings are given below:

Gene - HGNC symbol
SangerCall - the most 3' representation annotated with CSN v1.0 16 , or "No" if no variant is present
Type - "bs", "del", "ins", "complex", or "indel" for base substitutions, simple deletions, simple insertions, complex indels, or negative indel sites, respectively

Minor points
The title suggests that variant calling in general is evaluated. However, only germline SNV and Indel variants are taken into account. Copy number variant calls or somatic/mosaic variants are not benchmarked. The types of variants evaluated should be specified more clearly.
Page 3: Introduction: we agree with the comment of reviewer 1 that evaluation of published pipelines is not limited.

Method:
The selection criteria of the 704 variant sites could be stated more clearly. Table 1 should include the aligner used as well as the variant caller in the column headers. As is stated correctly in the methods and results sections, the variant callers aren't isolated units and the aligner could influence the result (although the results show this is not the case here). A general remark: the data are not easy to read and understand.
Page 5: "regression testing" should be explained, also how to perform.
Page 5/6: It would be interesting to discuss whether the 4% FDR in Companies A and B and DeepVariant is constituted by the same variants/regions, and how this compares to the 2% and 5% FDR of OpEx and GATK.
Page 6: It is stated that the information of Companies A and B is shown in table 1. This is not the case.

Page 6: utility in creation of variant detection filters:
The filter setting to improve performance was tested for DeepVariant. What about the other callers? A more systematic approach is required.
We could not locate the three VCF files to reproduce the output.

Is the rationale for developing the new software tool clearly explained? Partly
Is the description of the software tool technically sound?

Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly

Competing Interests: No competing interests were disclosed.
We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however we have significant reservations, as outlined above.

Author Response 25 Oct 2018
Nazneen Rahman, The Institute of Cancer Research, London, UK

General Response
We thank the reviewers for their comments. We believe they may have a slight misunderstanding of our intentions in writing this paper and in making ICR142 Benchmarker. In 2016 we made the ICR142 dataset available (Ruark et al., 10.12688/f1000research.8219.2). We did this simply because we had been using the dataset in-house, had found it useful, and thought others might also find it useful. Many have - the paper has been read more than 1000 times and downloaded more than 200 times. Following discussions and feedback with ICR142 users we made ICR142 Benchmarker. We believe and hope it makes using the ICR142 dataset easier and more useful.
We have included some simple use cases in this paper to highlight potential uses, but naturally there will be situations where ICR142 and hence ICR142 Benchmarker will not be suitable.

Response to reviewers comments:
Major points

Table 1 and in general: it should be specified that ICR142 Benchmarker is only for Illumina data, enriched with a TruSeq kit. It is mentioned that the values are calculated using the ICR142 dataset. However, the use of the terms sensitivity and specificity implies that these numbers are generalizable to the entire exome, whereas the sites were selected using data from the Illumina TruSeq Exome enrichment procedure in combination with Illumina HiSeq 2000 sequencing. Difficult positions could differ for different enrichment procedures or sequencing platforms, even between Illumina machines. A variant caller may, for instance, perform well on an Illumina dataset but less well on an IonTorrent dataset.

Response: Thank you. We have added to the abstract, paper and Table 1 legend that these are Illumina data. In the paper, we have added 'The exome sequence data was generated using the Illumina TruSeq Exome and a HiSeq2000 sequencer. Full details of the ICR142 series are given in Ruark et al'. ICR142 data can, and has, been used with data from other capture kits. All the ICR142 sites have been orthogonally validated, so the results are not dependent on the enrichment process or sequencer. Indeed, we believe that the ICR142 dataset and ICR142 Benchmarker have particular utility in helping to uncover this type of performance variability.
The title suggests that variant calling is optimised. However, the manuscript does not discuss optimization of variant calling. It only states that it can be done. In my opinion this is not enough to warrant the statement in the title.

Response: Thank you. We had inadvertently omitted the word 'performance' from the title, which we have now added.

It would be interesting to further discuss reasons for lower sensitivity and specificity. Only in the DeepVariant tool were settings changed. A more systematic approach for all the tools would be useful. Are there clusters of (types of) variants being missed or called together at different settings? Such a discussion could warrant the optimisation claim.
Response: Thank you. These are exactly the types of questions we hope people will find ICR142 Benchmarker useful in highlighting, evaluating and optimizing. We have extensively used the ICR142 dataset in the optimization of OpEx, as described in the OpEx paper.

It would be interesting to discuss how the benchmarking results for one or more of the aligner/variant caller combinations compare to other benchmarking datasets, for instance on high-confidence variants in a Genome in a Bottle dataset.

Response: Thank you. We agree these types of comparisons would be interesting. We hope that people will be inclined to do them, and as more people use (and hopefully make available) their ICR142 Benchmarker outputs we believe there may be several opportunities for interesting comparisons.

Response:
We made ICR142 Benchmarker available following interactions with users of the ICR142 dataset, to enhance its usability and potential utility. We are pleased you can see possible advantages compared with other datasets/frameworks and we would be delighted if someone wanted to evaluate this more formally, but as you say that wasn't the purpose of this paper. Overall, we personally favour using as many as possible - one almost always gains something from such evaluations.

I am not sure I quite understand how indels are handled. Does Benchmarker compare just the size of the insertion/deletion (at a given position)? Are indels left-aligned as part of the VCF parsing step, or is this task left to the user?

Response: We considered detection and annotation/representation of indels separately, because there remains considerable variation in how the same indel can be described. If an indel was called within a 200bp window centred on the site position, we considered an indel to have been detected and it is counted as a positive indel call. It is possible that the annotation/representation of the indel might differ.

Based on the GitHub repository it looks like the assessment is limited to hg19. Could benchmark data for GRCh37 and, more importantly, GRCh38 be made available? If not, this is worth mentioning in the paper.

Response: Thank you. Great point. We have updated ICR142 Benchmarker so this is possible. In the paper we have added 'It can be used with hg19/GRCh37 or hg38/GRCh38 data.'

Could DeepVariant's low base substitution specificity be due to working with Stampy-generated alignments rather than bwa-mem alignments? I realize the assessment of callers is not the focus of this paper, but this might still be worth testing.

Response: Thank you. This is exactly the type of question we hoped ICR142 Benchmarker might stimulate, though it is not a focus of our work to take it any further. In this paper we simply wanted to highlight some possible use cases, to give an idea of how people might find the ICR142 series and ICR142 Benchmarker useful.

It might be helpful to have a sample (Word) report in the GitHub repository.
Response: Thank you. We have updated the GitHub repository to include a sample (Word) report.

Competing Interests:
No competing interests were disclosed.