Unaccounted uncertainty from qPCR efficiency estimates entails uncontrolled false positive rates

Accurate adjustment for the amplification efficiency (AE) is an important part of real-time quantitative polymerase chain reaction (qPCR) experiments. The most commonly used correction strategy is to estimate the AE by dilution experiments and use this as a plug-in when efficiency correcting the ΔΔCq. Currently, it is recommended to determine the AE with high precision as this plug-in approach does not account for the AE uncertainty, implicitly assuming an infinitely precise AE estimate. Determining the AE with such precision, however, requires tedious laboratory work and vast amounts of biological material. Violation of the assumption leads to overly optimistic standard errors of the ΔΔCq, confidence intervals, and p-values which ultimately increase the type I error rate beyond the expected significance level. As qPCR is often used for validation, it should be a high priority to account for the uncertainty of the AE estimate and thereby properly bound the type I error rate and achieve the desired significance level. We suggest and benchmark different methods to obtain the standard error of the efficiency adjusted ΔΔCq using the statistical delta method, Monte Carlo integration, or bootstrapping. Our suggested methods are founded in a linear mixed effects model (LMM) framework, but the problem and ideas apply to all qPCR experiments. The methods and the impact of the AE uncertainty are illustrated in three qPCR applications and a simulation study. In addition, we validate findings suggesting that MGST1 is differentially expressed between multiple myeloma cell lines with high and low abundance of culture initiating cells and that microRNA-127 is differentially expressed between testicular and nodal lymphomas. We conclude that the commonly used efficiency corrected quantities disregard the uncertainty of the AE, which can drastically impact the standard error and lead to increased false positive rates.
Our suggestions show that it is possible to easily perform statistical inference of ΔΔCq, whilst properly accounting for the AE uncertainty and better controlling the false positive rate.


Background
Despite being an aging technique, real-time quantitative polymerase chain reaction (qPCR), arguably one of the most significant biotech discoveries of all time, is still heavily used in molecular biology [1]. qPCR is an extremely sensitive and cost-effective technique to amplify and quantitate the abundance of DNA and RNA using a Taq polymerase; for RNA analysis, amplification is preceded by reverse transcription into template DNA. In the life sciences, qPCR is typically applied to quantify candidate gene transcripts that are biomarkers of diagnostic, prognostic, and even predictive value in e.g. infectious diseases and cancer. In the slipstream of high-volume omics data, another very important application of qPCR has arisen. Here, qPCR is the gold-standard validation tool for the most promising gene transcripts generated by high-throughput screening studies such as microarrays or sequencing. For validation experiments in particular, the ability to control the type I error rate is very important. Unfortunately, important statistical details are often omitted, resulting in a failure to obtain the desired type I error probability. Validation without such control cannot be considered very meaningful and therefore conservative approaches should be taken.
The so-called ΔΔCq quantity is the normalized relative expression of a target gene of interest between treated (case) and untreated (control) samples, accounting for undesired variations using one or more endogenous reference genes (also called housekeeping genes) assumed to be approximately unchanged by the treatment. The ΔΔCq-value is usually based on the assumption of perfect AEs for both the target and reference gene. However, the target and reference genes might be subject to different AEs, which yields biased ΔΔCq-values. In turn, the ΔΔCq has been modified to AE corrected versions [2][3][4].
Despite the tremendous success of qPCR, 'statistical inference considerations are still not accorded high enough priority' [5,6]. We find this particularly true for the estimation of the AE. Although efficiency calibration has been treated extensively by [2] and in the more generalized model by [7], there seems to be a lack of systematic studies of the unavoidable influence of the uncertainty of the AE estimate on the conclusions of qPCR experiments based on formal statistical inference. The current AE adjusted ΔΔCq methods do not account for the uncertainty of the estimated AE and thus effectively assume the AE to be estimated with infinite precision. This assumption entails a systematic underestimation of the standard error of ΔΔCq, leading to too narrow confidence intervals, too small p-values, and thereby increased type I error rates. If the AE is poorly determined, properly accounting for its uncertainty can drastically increase the standard error of ΔΔCq and similar quantities.
Recently, some effort has been devoted to studying error propagation in qPCR [8][9][10][11]. Nordgaard et al. [8] studied error propagation primarily for the Cq-values, including the effect of the AE uncertainty. This study was, however, statistically informal and made no attempt to quantify the effect on the ΔΔCq and inference hereof. Furthermore, they [8] considered AE estimation from the amplification curve (thus for each well) and not from separate dilution experiments. Tellinghuisen and Spiess [9][10][11] stressed and discussed the importance and negative impact of improper error handling, including AE estimation, although again with emphasis on determining Cq-values and the fluorescence level at the hypothetical cycle zero using different methods. In this paper, we explicitly discuss only AE estimation from dilution curves, which assumes a constant AE across certain genes. While this assumption has been contested and alternatives based on branching processes suggested [12][13][14][15], the problem still exists, as AE estimates from individual amplification curves also have an associated error which affects all 'downstream' estimated quantities and inference. The numerous estimated well-specific AEs arguably amplify the problem, as even more errors, one for each well, are propagated further on.
The work by Svec et al. [16] also recently assessed the impact of AE uncertainty as a function of the number of technical replicates at each concentration and the qPCR instrument. They conclude that a minimum of 3-4 replicates at each concentration are needed and that a significant inter-instrument effect is present. However, they do not gauge the effect of the number of concentrations used, an important variable as additional technical replicates rarely contribute much information to determine the AE. Nonetheless, Svec et al. [16] also do not address the impact of AE uncertainty on formal statistical inference on the ΔΔCq, as this paper intends.

Aims
Primarily, we aim to highlight the common problem of disregarding the uncertainty of the AE estimate in statistical inference on the ΔΔCq-value in qPCR experiments. We also propose and benchmark different off-the-shelf and novel solutions to this problem.
To this end, we employ a statistical model which allows such formal inference. This covers statistical model formulation, confidence intervals, hypothesis testing, and power calculation, with special emphasis on false positive rates. Simultaneous estimation of the uncertainty of the AE estimate and the mean Cq-values by linear mixed effects models (LMMs), which allow a more appropriate handling of the technical and sample errors, is described. We investigate the use of the statistical delta method, Monte Carlo integration, and bootstrapping to correctly perform inference on the value of ΔΔCq.
Note two important observations: First, the problem exists for all statistical models and methods which incorrectly disregard the uncertainty of the AE estimate; it is not limited to LMMs. Secondly, the problem exists not only for ΔΔCq-values, but also for all similar quantities, e.g. the Cq and ΔCq, and the statistical inferences based on these. The idea of using LMMs for qPCR experiments is not new [17][18][19][20][21]. For example, [17] and [18] have used mixed effects modeling to identify candidate normalizing genes. The work by [19] applied the related generalized estimating equations to handle intra- and inter-group variation. However, the usage of LMMs combined with the statistical delta method, Monte Carlo integration, or bootstrapping to handle uncertainty stemming from the efficiency estimation seems to be novel and provides a general statistical framework for qPCR experiments; it may be considered an extension of the strategy by [7]. Others use mixed models primarily for Cq-value estimation [20,21].
We demonstrate that considering the uncertainty of the AE is, unsurprisingly, highly important when the AE is determined with inadequate precision and vice versa. We do so by three application examples and a simulation experiment. In the first two applications, the consideration of the AE uncertainty is largely unimportant for ΔΔCq inference due to a large number of dilution steps and well-determined AEs. In the last application, we see that the AE uncertainties have a drastically different impact on ΔΔCq inference. In a simulation study, we show that the proposed methods indeed control the false positive rate better than the conventional approach, and we provide further insight into the problem.
In the first application, we also verify that multiple myeloma cancer cell lines differentially express the MGST1 gene depending on the abundance of culture initiating cells. In the second application, the approaches are also used to design and analyze a study whose results turned out to support the hypothesis of [22] that miR-127 is differentially expressed between testicular and nodal DLBCL.

Observational model
In order to approximate the standard error of the AE adjusted ΔΔCq, we model the amplification process as

F_Cq = κ N₀ 2^(α Cq),   (1)

where F_Cq is the fluorescence measurement at the Cq'th cycle, κ is a sample-specific proportionality constant, N₀ is the number of transcripts of interest in the initial sample before amplification, and 2^α is the AE, from which α is interpreted as the percentage growth on the log scale. In practice, the transcript abundance level is determined by the cycle Cq at which a given fluorescence level F_Cq is reached. Please note, by sample-specific we mean inter-sample variations, like pipetting error, which should be accounted for by the reference genes. We rearrange (1) and notice that Cq can be expressed through α Cq = log₂ F_Cq − log₂(κ N₀). In order to estimate the relative amount of target (tgt) gene transcripts between case and control (ctrl) samples, we assume the amount of the reference (ref) gene template is the same in both the case and the control, N₀,ref,case = N₀,ref,ctrl, and that the AE only varies between the target and reference gene. We then arrive at the following expression for the log₂ fold change of the target gene template between cases and controls:

log₂(N₀,tgt,case / N₀,tgt,ctrl) = −[α_tgt (Cq,tgt,case − Cq,tgt,ctrl) − α_ref (Cq,ref,case − Cq,ref,ctrl)] = −ΔΔCq,   (2)

assuming that the Cq-values have been determined at a common fluorescence level F_Cq. This method of estimating the log relative abundance between cases and controls is often referred to as the ΔΔCq-method [23], after the double difference appearing in the expression. Thus we have 2^(−ΔΔCq) as the relative abundance of the original target transcript corrected for the AE.
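As a concrete illustration of (2), the following minimal sketch computes the efficiency corrected ΔΔCq and the corresponding fold change. The function name ddcq and all Cq readings and AEs are invented for illustration; α = 1 corresponds to a perfect AE of 2¹ = 2.

```python
def ddcq(alpha_tgt, alpha_ref,
         cq_tgt_case, cq_tgt_ctrl, cq_ref_case, cq_ref_ctrl):
    """Efficiency corrected delta-delta-Cq of equation (2):
    the negative log2 fold change of the target between case and control."""
    return (alpha_tgt * (cq_tgt_case - cq_tgt_ctrl)
            - alpha_ref * (cq_ref_case - cq_ref_ctrl))

# Invented Cq readings; a lower Cq in the case samples means up-regulation.
d = ddcq(alpha_tgt=0.95, alpha_ref=1.0,
         cq_tgt_case=24.0, cq_tgt_ctrl=26.0,
         cq_ref_case=20.0, cq_ref_ctrl=20.5)
fold_change = 2.0 ** (-d)   # relative abundance of the target transcript
```
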

Statistical model
We study the problematic aspects of ignoring the uncertainty of the AE estimate. Note, however, that this problem persists for all statistical models and methods which naïvely 'plug in' the AE estimate from dilution curves into formulae concerning the ΔΔCq. For ease of notation we use the abbreviations i ∈ {tgt, ref} for gene types target and reference; j ∈ {case, ctrl, std} for sample types case, control, and standard curve; s ∈ {1, ..., n_ij} for samples in the ij'th group; and k ∈ {0, ..., K_ijs} for dilution steps for each sample. To estimate the ΔΔCq of (2), estimates of α_i are needed. A popular way of estimating the AE is by ordinary linear regression, i.e. by regressing the Cq-values against a series of increasing values 0 = x_1 < ... < x_K, defined by N₀,ijk = N₀,ij 2^(−x_k), and naïvely plugging α̂_i into (2), thus disregarding its uncertainty. Here, k denotes the dilution step and x_k the number of 2-fold dilutions (e.g. x_k = 1 means the concentration is halved relative to the original). The group means and α_i can then be estimated simultaneously when formulated as a LMM [24]:

Cq,ijsk = μ_ij + γ_i x_k + A_js + ε_ijsk,   (3)

where μ_ij are the group means, A_js is a random effect of sample s under the j'th sample type, and γ_i = α_i^(−1) is the inverse log₂-AE. The random effects A_js of (3) are N(0, σ²_S)-distributed and the error terms ε_ijsk are independent and N(0, σ²_j)-distributed with a sample-type specific variance σ²_j. The random effects account for the paired samples across tgt/ref for each j. LMMs provide a more correct quantification of the sources of variation and thereby a more correct estimate of the uncertainty of μ_ij and derived quantities.
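The plug-in estimation of γ_i from a standard curve is simply an ordinary least-squares slope. A minimal sketch, using an invented 2-fold dilution series (all numbers are hypothetical):

```python
import math

def slope_and_se(x, y):
    """Ordinary least-squares slope of y on x and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(rss / (n - 2) / sxx)

# Invented 2-fold dilution series: Cq rises by roughly gamma per step.
x = [0, 1, 2, 3, 4, 5, 6]                  # number of 2-fold dilutions
cq = [20.1, 21.0, 22.2, 23.1, 24.0, 25.2, 26.1]
gamma_hat, se_gamma = slope_and_se(x, cq)  # gamma = 1 / alpha
ae_hat = 2.0 ** (1.0 / gamma_hat)          # estimated amplification efficiency
```

The point of the paper is that se_gamma, however small, should not simply be discarded when gamma_hat is plugged into (2).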
In one application we shall relax the assumption that the AE is independent of j and consider group-specific AEs α_ij = γ_ij^(−1). Although variation due to technical replicates should be modeled in (3) as an additional random effect term, we average out technical replicates for simplicity. For further simplicity, we refrained from using multiple reference genes simultaneously in the ΔΔCq estimation, although the framework and methods easily extend to this case.

Inference for ΔΔCq by the delta method and Monte Carlo integration
We first consider hypothesis testing and confidence intervals for ΔΔCq by the statistical delta method. Let θ̂ denote the maximum likelihood estimate of the fixed effects θ = (μ_tgt,case, μ_tgt,ctrl, μ_ref,case, μ_ref,ctrl, γ_tgt, γ_ref). We wish to test the hypothesis H₀ : c(θ) = 0, where c is the continuously differentiable function of the fixed effects given by

c(θ) = (μ_tgt,case − μ_tgt,ctrl)/γ_tgt − (μ_ref,case − μ_ref,ctrl)/γ_ref = ΔΔCq.   (4)

The main task of this paper is to approximate the standard error of c(θ̂) and thereby account for the uncertainty of ΔΔCq. That is, the standard error

se(θ̂) = sqrt(Var[c(θ̂)])   (5)

is of central interest. The standard error is used in the statistic for testing H₀ given by t = c(θ̂)/se(θ̂). From a first-order Taylor expansion of c around θ̂, the variance can be obtained by

Var[c(θ̂)] ≈ ∇c(θ̂)ᵀ Var[θ̂] ∇c(θ̂),   (6)

where Var[θ̂] is the variance-covariance matrix and ∇c(θ̂) is the gradient vector. Notice, this expression coincides with the formula for error propagation in Tellinghuisen and Spiess ([10], p. 95). Hence we approximate t by t ≈ c(θ̂)/sqrt(∇c(θ̂)ᵀ Var[θ̂] ∇c(θ̂)). According to ([24], Section 2.4.2), t is approximately t-distributed with η degrees of freedom. The degrees of freedom of multilevel mixed effects models are non-trivial to obtain in general. We do not pursue this further and restrict ourselves to the case of balanced experimental designs where η is obtained relatively straightforwardly.
On the basis of (6), an approximate (1 − α)100 % confidence interval for c(θ) is given by

c(θ̂) ± t_(1−α/2,η) se(θ̂).   (7)

Likewise, p-values can be obtained by computing p = 2(1 − F_η(|t|)), where F_η denotes the cumulative distribution function of the t-distribution with η degrees of freedom.
Alternatively to (6), the variance Var[c(θ̂)] can be evaluated by Monte Carlo integration. One way is to simulate a large number N of parameters θ₁, ..., θ_N from a multivariate normal distribution using the estimated parameters, N₆(θ̂, Var[θ̂]), and compute the sample variance of c(θ₁), ..., c(θ_N).
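The Monte Carlo evaluation can be sketched as follows, assuming, for brevity, a diagonal Var[θ̂] so that the components can be drawn independently; all parameter values are invented for illustration:

```python
import random
import statistics

random.seed(1)

def c(theta):
    """The ddCq function of the fixed effects, cf. (4)."""
    mu_tc, mu_to, mu_rc, mu_ro, g_t, g_r = theta
    return (mu_tc - mu_to) / g_t - (mu_rc - mu_ro) / g_r

theta_hat = (24.0, 26.0, 20.0, 20.5, 1.05, 1.0)  # invented estimates
ses = (0.2, 0.2, 0.2, 0.2, 0.05, 0.05)           # invented standard errors

# Draw N parameter vectors and take the sample SD of c over the draws.
draws = [c([random.gauss(m, s) for m, s in zip(theta_hat, ses)])
         for _ in range(100_000)]
se_mc = statistics.stdev(draws)   # Monte Carlo standard error of ddCq
```
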
Both maximum likelihood (ML) and restricted maximum likelihood (REML) estimation of LMMs are implemented in the R packages lme4 and nlme [24,25]. The packages readily provide the estimates θ̂ and Var[θ̂], and we use these in the construction of tests and confidence intervals for the ΔΔCq as described above. The needed gradient in (6) is computed straightforwardly from (4).
We note that the divisions by γ_tgt and γ_ref in (4) are problematic, as γ̂_i is normally distributed and values near zero can increase the variance dramatically. In practice, this is only problematic if the standard error of γ̂_i is sufficiently large. One way to avoid the problem is to use the log₂ concentration as the response and the Cq-values as the explanatory variable in a regression model of the standard curve, estimating α_i directly. This approach is not without conceptual problems, as it puts the errors on the explanatory variables. To this end, note that the hypothesis H₀ : c(θ) = 0 can be equivalently tested as H₀ : γ_tgt γ_ref c(θ) = 0, i.e. γ_ref (μ_tgt,case − μ_tgt,ctrl) − γ_tgt (μ_ref,case − μ_ref,ctrl) = 0, for which the standard error of the test statistic can be worked out exactly.
If γ_tgt^(−1) and γ_ref^(−1) are assumed to be one (or otherwise known), then (4) becomes a simple linear hypothesis for which the standard error is easily calculated. This corresponds to leaving out the terms in (3) involving these parameters and thus ignoring the dilution data. If γ_tgt^(−1) = γ_ref^(−1) = 1 is assumed, we refer to the obtained estimate as the naïve LMM estimate. If γ_tgt^(−1) and γ_ref^(−1) are assumed known (i.e. disregarding their standard errors), we refer to the obtained estimate as the efficiency corrected (EC) estimate. The estimates where the uncertainty of the AE is considered are referred to as efficiency corrected and variance adjusted by either the delta method (EC&VA1) or Monte Carlo integration (EC&VA2).
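A sketch of the EC&VA1 computation: the gradient of c in (4) is hard-coded and the quadratic form (6) is evaluated. Zeroing the entries of Var[θ̂] belonging to γ_tgt and γ_ref recovers the plug-in (EC) standard error, illustrating the underestimation. All numbers are invented for illustration:

```python
import math

def delta_method_se(theta, cov):
    """Delta-method standard error of c(theta) in (4), with
    theta = (mu_tc, mu_to, mu_rc, mu_ro, g_tgt, g_ref)."""
    mu_tc, mu_to, mu_rc, mu_ro, g_t, g_r = theta
    grad = [1 / g_t, -1 / g_t, -1 / g_r, 1 / g_r,
            -(mu_tc - mu_to) / g_t ** 2, (mu_rc - mu_ro) / g_r ** 2]
    var = sum(grad[i] * cov[i][j] * grad[j]
              for i in range(6) for j in range(6))
    return math.sqrt(var)

theta_hat = (24.0, 26.0, 20.0, 20.5, 1.05, 1.0)    # invented estimates
cov = [[0.0] * 6 for _ in range(6)]
for i, v in enumerate([0.04, 0.04, 0.04, 0.04, 0.0025, 0.0025]):
    cov[i][i] = v                                  # invented diagonal Var

se_adjusted = delta_method_se(theta_hat, cov)      # EC&VA1-style
cov_plugin = [row[:] for row in cov]
cov_plugin[4][4] = cov_plugin[5][5] = 0.0          # pretend the AEs are exact
se_plugin = delta_method_se(theta_hat, cov_plugin) # EC-style, too small
```
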

Inference for ΔΔCq by the bootstrap method
We now consider hypothesis testing and confidence intervals for ΔΔCq by bootstrapping as an alternative approach. The bootstrap, which avoids calculating gradients, is often cited to perform better in small-sample situations [26].
The basic idea of the bootstrap is that inference on ΔΔCq can be conducted by re-sampling the sample data with replacement to obtain an approximate sampling distribution of the statistic and thereby its properties. In the usual qPCR setup with paired samples and dilution data, straightforward bootstrapping will quickly fail. We propose non-parametric block bootstrap samples for the case-control data, generated by sampling matched pairs of tgt/ref genes with replacement for cases and controls, respectively. However, as we only have a single observation for each dilution step, we chose to re-sample residuals from a simple linear regression model of the standard curve and subsequently add the residuals to the fitted values from the regression. Hence the B bootstrapped datasets consist of the re-sampled matched pairs and the residual-bootstrapped standard curve. For each dataset, the estimate of ΔΔCq is computed to obtain the bootstrap distribution, from which confidence intervals and p-values can be obtained. The standard error of ΔΔCq is estimated by the sample standard deviation of the bootstrap distribution.
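The combined block/residual bootstrap can be sketched as follows, assuming, for brevity, simple group means instead of a refitted LMM and a known reference AE of 2; the data are invented for illustration:

```python
import random
import statistics

random.seed(7)

def ols(x, y):
    """Ordinary least-squares intercept and slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented paired (tgt, ref) Cq values and a 2-fold dilution curve for tgt.
case = [(24.1, 20.0), (23.8, 19.9), (24.3, 20.2), (24.0, 20.1)]
ctrl = [(26.0, 20.5), (25.7, 20.4), (26.2, 20.6), (25.9, 20.5)]
xs = [0, 1, 2, 3, 4, 5, 6]
cq_std = [20.1, 21.0, 22.2, 23.1, 24.0, 25.2, 26.1]

a0, b0 = ols(xs, cq_std)
fitted = [a0 + b0 * x for x in xs]
resid = [y - f for y, f in zip(cq_std, fitted)]

boot = []
for _ in range(2000):
    # block bootstrap: resample matched tgt/ref pairs within each group
    b_case = [random.choice(case) for _ in case]
    b_ctrl = [random.choice(ctrl) for _ in ctrl]
    # residual bootstrap of the standard curve
    ys = [f + random.choice(resid) for f in fitted]
    _, gamma = ols(xs, ys)
    alpha_tgt = 1.0 / gamma
    d_tgt = (statistics.mean(t for t, _ in b_case)
             - statistics.mean(t for t, _ in b_ctrl))
    d_ref = (statistics.mean(r for _, r in b_case)
             - statistics.mean(r for _, r in b_ctrl))
    boot.append(alpha_tgt * d_tgt - 1.0 * d_ref)  # known ref AE assumed
se_boot = statistics.stdev(boot)   # bootstrap standard error of ddCq
```
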
While the bootstrap is an intuitive and excellent method for estimating the standard error, it quickly becomes computationally heavy. The rather complicated designs of qPCR experiments, with paired samples, dilution data, and other random effects, also make the bootstrap less attractive as good bootstrap sampling schemes are hard to produce.
Alternatively, a parametric bootstrap can be used by simulating datasets from the fitted model. Here, both new random effects and noise terms are realized and added to the fitted values to generate new datasets.
Re-sampling methods for qPCR data have previously been proposed by [27] to infer the relative expression directly by permutation testing. Unlike the permutation testing of [27], the bootstrap is here used to estimate the mean and standard error of ΔΔCq and not to directly test the stated hypothesis. The bootstrap approach suggested here also allows for constructing confidence intervals.

Applications
We applied the described approaches to two qPCR validation experiments regarding culture initiating cells (CICs) in multiple myeloma (MM) and non-coding microRNAs in diffuse large B-cell lymphoma (DLBCL). In both experiments, the Cq-values were extracted for both the reference and target transcripts with automatic baseline and threshold selection [28]. We also illustrate the methods on a publicly available qPCR dataset concerning differential gene expression in Arabidopsis thaliana grown under different conditions. In order to gauge the performance of the methods we subsequently performed a simulation study.

CIC study
Introduction
A cell is culture initiating if it can initiate a sustained production of cells when cultured in vitro. The viability potential of a cell population can be assessed by measuring the number of culture initiating cells (CICs). This number can be estimated by a dilution experiment where cells are seeded in decreasing numbers. The ratio of CICs can then be estimated by e.g. Poisson regression [29]. CICs are of particular interest in cancer research as cancers with high culture initiating potential seemingly have stem cell like properties making them resistant towards chemotherapy [30].
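Under the single-hit Poisson model underlying limiting dilution analysis, the CIC frequency can be sketched from a single seeding dose; the well counts below are hypothetical, and in practice one would use Poisson regression over several doses as in [29]:

```python
import math

def cic_frequency(cells_per_well, n_negative, n_total):
    """Single-hit Poisson estimate of the CIC frequency from one dose:
    P(no growth) = exp(-f * cells), hence f = -ln(neg/total) / cells."""
    return -math.log(n_negative / n_total) / cells_per_well

# Hypothetical readout: 100 cells seeded per well, 60 of 96 wells negative.
f_hat = cic_frequency(100, 60, 96)   # estimated fraction of CICs
```
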
In search for genes associated with a high culture initiating potential in MM we made limiting dilution experiments of 14 MM cell lines and divided them into 7 cell lines with low and 7 cell lines with high culture initiating potential. Gene expression profiling by microarrays identified the genes MGST1 and MMSET as differentially expressed between cell lines with high and low abundance of CICs. As gene expression detection by microarrays can be hampered by high false positive rates, the purpose of this experiment was to validate the association of MGST1 and MMSET with culture initiating potential by qPCR.

Sample and data preparation
Seven cell lines with a high fraction of CICs and seven cell lines (KAS-6-1, LP-1, MOLP-2, NCI-H929, OPM-2, SK-MM-2, U-266) with < 1 % CICs were used. The fraction of CICs was determined by the limiting dilution method, see [29]. Total RNA was isolated from frozen cell culture pellets using a combined method of Trizol (Invitrogen) and Mirvana spin columns (Ambion). Isolated RNA was reverse transcribed into complementary DNA (cDNA) using SuperScript III First-Strand Synthesis Supermix (Invitrogen); 250 ng total RNA was used as input for the cDNA synthesis. Equal amounts of random hexamers and oligo(dT) were used as primers. Quantitative real-time reverse transcriptase polymerase chain reaction was performed on a Mx3000p qPCR system (Agilent Technologies/Stratagene) using the TaqMan Universal PCR Master Mix, No AmpErase UNG, and TaqMan Gene Expression Assays (Applied Biosystems). The following TaqMan Gene Expression Assays were used (Assay ID numbers in parentheses): MGST1 (Hs00220393_m1) and MMSET (Hs00983716_m1). The two reference genes beta-actin (ACTB) and GAPDH were used as endogenous controls, assay IDs 4333762-0912030 and 4333764-1111036, respectively. For each target and reference transcript a standard curve based on seven 2-fold dilutions was constructed on a reference sample consisting of material from the AMO-1 cell line.

DLBCL study
Introduction
The association between oncogenesis and microRNAs (miRNAs), short non-coding RNA transcripts with regulatory capabilities, has recently prompted immense research activity. The possibility of changing treatment strategies by transfecting antisense oligonucleotides to control abnormally up-regulated miRNAs in malignant tissue is of particular interest [31]. In that respect, up-regulated miR-127 and miR-143 in testicular DLBCL have shown treatment-changing potential [22]. However, as the number of screened miRNAs was high and the sample size low in Robertus et al.'s work, invoking a high risk of false discoveries, we set out to validate the differential expression of miR-127 and miR-143 in tissues from our own laboratory using our improved qPCR analysis workflow.

Sample and data preparation
For this study, DLBCL samples were collected from 8 testicular (case) and 8 nodal (control) paraffin-embedded lymphomas at Aalborg University Hospital. The lymphoma tissues were collected during the diagnostic procedure in accordance with the research protocol accepted by the Health Research Ethics Committee for North Denmark Region (Approval N-20100059). Total RNA was isolated using a combined method of Trizol (Invitrogen) and Mirvana spin columns (Ambion). An amount of 10 ng total RNA was synthesized into first-strand cDNA in a 15 μL reaction using the TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems) according to the manufacturer's instructions. In total 1.33 μL cDNA was used as template in the real-time PCR amplification performed on the Mx3000p qPCR system (Agilent Technologies/Stratagene) with sequence-specific TaqMan primers (Applied Biosystems). As reference transcripts we chose RNU-6B and RNU-24, which were less variable and equally expressed across nodal and extra-nodal samples among a larger list of candidate reference genes. For each target and reference transcript a standard curve based on seven 2-fold dilutions was constructed on a reference sample consisting of pooled material from all 16 lymphomas.

Arabidopsis thaliana study
Introduction
In order to illustrate the effect of applying variance approximations in a dataset with a limited number of dilution steps and samples, we considered the Arabidopsis thaliana dataset published by [7]. The dataset contains one gene of interest, MT7, potentially differentially expressed under two growth conditions of the plant Arabidopsis thaliana, and two reference genes, ubiquitin (UBQ) and tubulin.

Sample and data preparation
The Arabidopsis thaliana plant growth, RNA extraction, and qPCR experiments were carried out as described in [32]. The cDNA was diluted into 1-to-4 and 1-to-16 serial dilutions. Real-time PCR experiments were performed in duplicate for each concentration [7].
Due to the study design, we naturally fitted group-specific efficiencies γ_ij = α_ij^(−1). Because of the few samples, we omitted the random sample effect of the LMM, which is meaningless in this case.

Simulation study
In order to properly benchmark statistical test procedures one needs to have an idea of the false positive rate (FPR), or type I error rate, as well as the true positive rate (TPR), or sensitivity. As ground truth is usually not available in non-synthetic data, we use simulation experiments to determine the error rates of the discussed statistical procedures.
In our setting, the FPR of a statistical test is the probability that the test incorrectly declares a result statistically significant given a vanishing effect size or difference of c(θ) = 0 between cases and controls; i.e. FPR = P(|t| > t_(1−α/2,η) | c(θ) = 0). On the other hand, the TPR of the statistical test is the probability that the test correctly declares a result statistically significant given a non-zero effect size δ = c(θ) between cases and controls; i.e. TPR = P(|t| > t_(1−α/2,η) | c(θ) = δ).
A straightforward way to obtain an estimate of the TPR is to simulate a large number n of datasets under the alternative hypothesis of c(θ) = δ, fit the model for each dataset, and compute t-values t₁, ..., t_n. From these t-scores the TPR can be estimated by TPR̂ = (1/n) Σᵢ 1(|tᵢ| > t_(1−α/2,η)), where 1(·) is the indicator function. Hence, the estimated TPR is the fraction of tests correctly declared significant.
Likewise, an estimate of the FPR is obtained by simulating n datasets under the null hypothesis of c(θ) = 0 and obtaining t-values t₁, ..., t_n, from which the FPR is estimated by FPR̂ = (1/n) Σᵢ 1(|tᵢ| > t_(1−α/2,η)), i.e. the fraction of tests incorrectly declared significant.
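The FPR estimator can be illustrated by a crude simulation using an ordinary two-sample t statistic in place of the full LMM fit; the number of simulations, the group size, and the critical value t_(0.975,14) ≈ 2.145 are all illustrative choices:

```python
import math
import random
import statistics

random.seed(11)

def simulate_fpr(n_sim=2000, n=8, crit=2.145):
    """Fraction of two-sample t tests (pooled SD, df = 2n - 2 = 14) that
    reject a true null hypothesis; crit approximates t_(0.975,14)."""
    rejections = 0
    for _ in range(n_sim):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(0.0, 1.0) for _ in range(n)]
        sp = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
        t = (statistics.mean(a) - statistics.mean(b)) / (sp * math.sqrt(2 / n))
        rejections += abs(t) > crit
    return rejections / n_sim

fpr_hat = simulate_fpr()   # should land near the nominal 5 % level
```
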
Simulating under the log-linear statistical model described above, we estimate the FPR and the TPR for each discussed method under different choices of sample sizes and numbers of dilutions whilst fixing the size of the sample and experimental variations. These constant sample and experimental variations correspond to homoscedastic errors on the log scale. No technical replicates are simulated.

CIC study
The Cq-values and dilution curves for the CIC study are depicted in Fig. 1, panels a-b, respectively. The simple linear regressions show well-determined standard curves with small standard errors on the estimated slopes.
The values of the considered estimators for ΔΔCq are seen in Table 1. The table also shows results of tests for difference in gene expression assessed by the ΔΔCq for both target genes MGST1 and MMSET normalized to each of the reference genes GAPDH and ACTB. We used four different methods to estimate and perform inference: (1) EC: Efficiency corrected LMM estimate ignoring the uncertainty of the efficiency estimates. (2) EC&VA1: EC and variance adjusted LMM estimate using a first-order approximation. (3) EC&VA2: EC and variance adjusted LMM estimate using Monte Carlo integration. (4) Bootstrap: Estimate by the bootstrap described in Section "Inference for ΔΔCq by the bootstrap method", fitting the LMM and using the EC estimate; 2000 bootstrap samples were used.
Consider the first section of Table 1 where the target MGST1 is normalized against the reference GAPDH. The tests for a vanishing ΔΔCq are all highly significant with comparable 95 % CIs. As expected, the efficiency corrected estimates are unchanged by the variance adjustment, and only the standard deviation of the estimate is increased. The increase of the standard error is very small, resulting in small but unimportant decreases of the absolute t-values and increases of the p-values. The results remain significant for the MGST1 gene. Very similar results are obtained if ACTB is used as reference. In conclusion, there is good evidence that MGST1 is differentially expressed between cell lines with high and low abundance of CICs.
For the target gene MMSET normalized with respect to both reference genes, the estimates are not significantly different from zero. Again, the various methods agree; no substantial inter-method differences are seen, and we find no evidence for differential expression of MMSET between cell lines with high and low abundance of CICs.
In all instances, the bootstrap distribution mean agrees well with the estimates obtained using the delta or Monte Carlo methods, while the bootstrap seems to provide a larger standard error. This tendency has one or more probable explanations. The first-order delta method and Monte Carlo approximations may underestimate the standard error, and the bootstrap, corresponding to a higher-order method, may more correctly quantify it. More likely, the data deviate slightly from the model assumptions and the bootstrap is sensitive to this slight misspecification.
We see that the large number of dilution steps, as recommended and expected, ensures a low impact of the AE uncertainty on the standard error and thus on the inference of the ΔΔCq.

DLBCL study
The Cq-values and dilution curves for the DLBCL study are depicted in Fig. 2, panels a-b, respectively. Analogous to the previous section, the differences in gene expression assessed by the ΔΔCq for the target genes miR-127 and miR-143 with respect to each reference gene, RNU-6B and RNU-24, were estimated using the four different methods.
Again, 2000 bootstrap samples were used. The results are seen in Table 2.
We notice the efficiency corrected estimates are exactly equal with and without variance adjustment, while the standard deviation of the estimate and the p-values are higher for the adjusted values, as expected. The size of the increase is again undramatic, hinting at well-determined AEs from the dilution curves.
For all combinations of reference genes, the estimates for miR-127 are significantly different from zero at the usual 5 % significance level, but not at the 1 % significance level. The miR-143 estimates are not significantly different from zero. Despite the very small increase in standard error, the p-values increase at the second digit.
The bootstrap method provides a standard deviation similar to the delta method and Monte Carlo integration for both miR-127 and miR-143.
Regarding the biological question, we conclude there is evidence for a difference in miR-127 expression between testicular and nodal DLBCL, whilst the data are not compatible with a difference in miR-143 expression. While the AE uncertainty had no influence on the conclusions in these cases, a change in significance is easily imagined in other cases.

Arabidopsis thaliana data
The Cq-values and dilution data for the Arabidopsis thaliana data are shown in Fig. 3.
The estimated difference in gene expression between case and control for the target gene MT7, normalized to either reference gene (Tublin or UBQ), is seen in Table 3. The table shows the efficiency corrected method with and without variance adjustment by the delta method. In both cases, we see a dramatic increase in the standard error, the p-values, and the size of the confidence intervals. With variance adjustment there is no longer a highly statistically significant difference in MT7 expression between case and control growth conditions.

Fig. 3 Overview of Arabidopsis thaliana data [7]. Cq-values against the dilution step for case and control samples. Dilution data are present for both the target (MT7) and reference genes (Tublin, UBQ).

The results may be surprising at first sight when considering the relatively small standard errors of the slopes in the simple linear regressions shown in Fig. 3. One might imagine that the uncertainty of the AE is negligible and thus perform the usual analysis. However, we see the contrary for several reasons. First, using only 3 dilution steps leaves very few degrees of freedom in each group, as we have few samples and a high number of parameters to estimate. Secondly, as dilution curves are used for each group, the four group-specific AE estimates all contribute to increasing the standard error of the ΔΔCq. While this example was selected as a worst-case scenario, it illustrates that although the standard curves are seemingly well determined, it is hard to intuitively predetermine the combined effect on the standard error of the ΔΔCq. We note that no pre-averaging of the technical replicates for each concentration was done; instead, the technical replicates were modeled as a random effect.

Simulation study
First, we present results of a simulation study for a two-sided test of the null hypothesis of a vanishing ΔΔCq at a 5 % significance level. We simulated 2000 datasets under both the null and alternative hypotheses with 6 samples in each of the case and control groups and standard curves with 6 dilution steps. The effect size under the alternative was set to δ = 10/9. The sample and experimental standard deviations were set to σS = 1 and σ = 1, respectively. The AEs for the target and reference genes were set to 0.80 and 0.95, respectively. The parameters were primarily chosen to conveniently yield estimates and error rates on a sensible scale whilst secondarily being comparable to the estimated quantities in the applications.
The four discussed methods were applied to the 2 × 2000 datasets and the p-values testing the null hypothesis were computed. The results of these tests are summarized in Table 4, from which the FPR and TPR can be computed at the 5 % cutoff. From Table 4, we see the estimated FPRs are 0.073, 0.053, and 0.083 for the efficiency corrected LMM (EC), the efficiency corrected LMM with variance adjustment using the delta method (EC&VA1), and the bootstrap, respectively. We omitted EC&VA by Monte Carlo integration here due to its computational cost and its similarity to EC&VA1 in the previous applications. As expected, the EC method does not control the FPR at the 5 % level. The variance adjusted estimator is consistent with controlling the FPR at the 5 % level. By construction, the variance adjusted estimator will always perform at least as well as the EC in terms of FPR. Surprisingly, the bootstrap has the worst performance in terms of FPR.
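The mechanics of estimating an FPR from simulated p-values can be sketched in a few lines (a toy illustration under invented numbers, not the paper's simulation design): a well-calibrated procedure yields uniform p-values under the null, so its FPR at cutoff α is close to α, whereas an anti-conservative procedure, mimicked here by shrinking the p-values, exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Under the null, valid p-values are uniform on [0, 1].
n_sim = 2000
p_valid = rng.uniform(0, 1, n_sim)  # well-calibrated procedure
p_anti = p_valid * 0.7              # overly optimistic procedure (too-small SE)

alpha = 0.05
fpr_valid = np.mean(p_valid < alpha)  # should be close to alpha
fpr_anti = np.mean(p_anti < alpha)    # inflated beyond alpha
```

Shrinking every p-value can only add rejections, so the optimistic procedure's FPR is never below the calibrated one, matching the inflation seen for the EC method.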
Secondly, the TPRs are estimated to be 0.3825, 0.3175, and 0.366 for the three methods EC, EC&VA1, and bootstrap, respectively. As expected, we notice that an improved FPR comes at the cost of a decreased TPR for a given statistical procedure.
The above simulations were repeated for sample sizes 4 or 8 in both case and control groups in combination with 4 or 8 dilution steps, using the same simulation parameters. Figure 4 shows the performance of the methods in terms of FPR and TPR. Each panel corresponds to a given number of samples and dilutions. In each panel the p-value cut-off is varied between 0.01, 0.05, and 0.1.

Table 4 Contingency tables for the different estimators at a 5 % p-value threshold.

Overall, we see that the EC&VA estimate is the only procedure consistent with controlling the FPR at the chosen nominal significance level. Likewise, for many dilutions, the difference between the EC and EC&VA procedures diminishes as the uncertainty of the AE is relatively low. As expected, a decrease in FPR corresponds to a decrease in TPR.
To gauge when the standard error of (5) is determined with adequate precision, we simulated 2 × 2000 datasets and computed the mean standard error of the ΔΔCq for the EC and EC&VA procedures as a function of the number of dilutions and samples. We varied the number of dilutions in the range 4-9 and the number of samples in the range 4-10 with the same settings as above. Figure 5 shows these results. As expected, increasing the number of samples or the number of dilutions yields a smaller standard error. Also unsurprisingly, and as already seen in the applications, the differences in the standard error between the EC and EC&VA methods are substantial for a small number of dilutions and vanish as the number of dilution steps increases. The differences in the standard error seem to be larger under the alternative than under the null hypothesis. Similar figures might also aid in designing qPCR experiments and help determine whether investing in additional dilutions or samples is preferable, obviously with properly chosen simulation parameters in the given context.
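A rough, assumption-laden shortcut to such design considerations (a back-of-the-envelope sketch, not the paper's LMM-based simulation) is the classical formula for the standard error of a simple-regression slope: with d equally spaced dilution steps, r replicate wells per step, and residual standard deviation σ, the dilution-curve slope has standard error σ/√(r·Sxx), where Sxx is the spread of the step values. This shows directly why adding dilution steps shrinks the AE uncertainty so quickly:

```python
import numpy as np

def slope_se(n_dilutions, n_reps, sigma=1.0):
    """Std. error of the dilution-curve slope under a simple linear
    regression with equally spaced steps (an illustrative approximation)."""
    x = np.arange(n_dilutions, dtype=float)
    sxx = ((x - x.mean()) ** 2).sum()  # spread of the dilution steps
    return sigma / np.sqrt(n_reps * sxx)

# AE standard error shrinks rapidly with added dilution steps:
ses = {d: slope_se(d, 3) for d in range(4, 10)}
```

Each added step both adds wells and widens the design spread Sxx, so the gain is faster than the usual 1/√n rate from adding replicates alone.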

Discussion and conclusion
The commonly used efficiency corrected ΔΔCq and many other approaches to the analysis of qPCR data disregard the uncertainty of the estimated AE, leading to increased false positive rates. As qPCR experiments are often used for validation, this is highly undesirable. Our primary approach, based on the statistical delta method to approximate the variance of the efficiency adjusted ΔΔCq, shows that it is possible to perform statistical inference for qPCR experiments whilst properly accounting for the AE uncertainty. We stress that the problem is not limited to the ΔΔCq statistic, but applies to all statistics that depend on the AE.
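The first-order delta method underlying our primary approach can be sketched generically (the parameterization below is a simplified stand-in, not the paper's exact LMM): for an estimator f(θ̂) with estimated covariance Σ of θ̂, Var f(θ̂) ≈ ∇f(θ̂)ᵀ Σ ∇f(θ̂). Setting the AE entries of Σ to zero recovers the plug-in (EC) variance, while keeping them gives the variance adjusted (EC&VA) one. All numbers are hypothetical:

```python
import numpy as np

# theta = (dCq_target, dCq_ref, gamma_target, gamma_ref): Cq differences and
# reciprocal AEs; f converts them to the efficiency corrected ddCq scale.
def ddcq(theta):
    d_tgt, d_ref, g_tgt, g_ref = theta
    return d_tgt / g_tgt - d_ref / g_ref

def delta_method_var(f, theta, cov, h=1e-6):
    """Var f(theta_hat) ~= grad' Cov grad, using a central-difference gradient."""
    theta = np.asarray(theta, dtype=float)
    grad = np.empty(theta.size)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * h)
    return grad @ cov @ grad

theta_hat = np.array([3.0, 1.0, 1.05, 1.02])  # hypothetical estimates
cov_plugin = np.diag([0.04, 0.04, 0.0, 0.0])  # AE treated as known (EC)
cov_full = np.diag([0.04, 0.04, 0.01, 0.01])  # AE uncertainty included (EC&VA)

var_ec = delta_method_var(ddcq, theta_hat, cov_plugin)
var_va = delta_method_var(ddcq, theta_hat, cov_full)
```

Because the extra covariance terms enter through squared gradients, var_va ≥ var_ec always holds, mirroring the paper's observation that variance adjustment can only widen, never narrow, the intervals.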
In this paper, we focus on the widely used dilution curve approach as it is complex enough to capture the most important variation but also simple enough to be easily interpretable. This is probably also the reason dilution curves, when carefully used, are still very popular in qPCR experiments. However, this approach has been criticized in the literature as it relies on a number of assumptions [12][13][14][15]. First, the dilution curve approach assumes the AE to be constant up to the Cq'th cycle. A way to check this is to assess whether the dilution curves are reasonably linear, as non-constant behaviour up to the Cq'th cycle in the dynamic range would cause the dilution curves to level off. In our data examples, we do not see any violation of the linear behaviour. Secondly, one also assumes that the AE does not vary between wells. However, if we assume that the actual reciprocal AE γA varies around a true reciprocal AE value γ by a random variation ε, say, which could be assumed to be Gaussian distributed with mean 0 and variance σε², one arrives at the following model for Cq:

Cq = μ + (γ + ε)N0 + e = μ + γN0 + (e + εN0),

where (e + εN0) is the error term. From this it follows that the proposed well-to-well variation in the AE is captured by the error term in the LMM. We note, however, that due to the multiplication with N0, variance heterogeneity could be present. We have therefore assessed the LMMs by plotting the residuals against the fitted values and noticed no variance heterogeneity. In conclusion, we find our model sufficient to capture the variation in the problems at hand.
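The variance-heterogeneity remark can be made explicit. Assuming, as above, that the well-level error e and the AE perturbation ε are independent with variances σ² and σε², the conditional variance of Cq is

```latex
\[
\operatorname{Var}(C_q \mid N_0)
  = \operatorname{Var}\!\left(e + \varepsilon N_0\right)
  = \sigma^2 + \sigma_\varepsilon^2 N_0^2 ,
\]
```

so the error variance grows quadratically in N0. A residual-versus-fitted plot, as used above, is a simple screen for this pattern: if σε² is non-negligible, residual spread should fan out with the fitted values.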
In practice, the approach was used to: (1) validate that MGST1 is differentially expressed between MM cell lines of high and low abundance of CICs, (2) analyze and study the hypothesis that miRNA-127 is differentially expressed between testicular and nodal DLBCL, and (3) illustrate the effect of a small number of dilution steps.
In the latter application, we saw a dramatic increase in the standard error of the estimate when the variance approximation was introduced, potentially leading to a change of significance for the presented dataset depending on the desired significance level. This illustrates the importance of considering the AE uncertainty when conducting AE correction of qPCR experiments.
Although not in the context of ΔΔCq estimation, Tellinghuisen and Spiess [10] concluded that the uncertainty as well as the bias of the AE estimate substantially impact subsequent quantities. They highlight that some methods achieve seemingly very good performance, as measured by low standard errors, by tacitly assuming the AE known and fixed to 2. This is an unsurprising consequence, as it is essentially the same as disregarding the AE uncertainty. We also note, as seen in this paper, that a low standard error is not in itself a proper benchmark for comparing procedures.
Problems with uncertainty in the AE estimates should be handled by establishing well-estimated dilution curves, as argued elsewhere [6]. However, even in this case, the presented method allows for design guidelines, power calculations, and assessment of the influence of the estimated dilution curves.
It is also noteworthy that model-based estimation of the ΔΔCq allows for model checking by e.g. residual plots.
Lastly, we note that the algorithm [28] we used for threshold selection and Cq-value extraction in the CIC and DLBCL studies may not be optimal (cf. [9, 33] and the improvements by [34]), as it can be affected by the AE.
Nonetheless, this has no bearing on the problem stated in this paper. The estimated standard error of the ΔΔCq is still affected in a similar manner by the uncertainty of the AE and is thus too optimistic.
Despite the extensive use of qPCR, more statistical research is needed to establish qPCR more firmly as a gold standard to reliably quantify abundances of nucleic acids. Researchers analyzing qPCR experiments need to model their experiments in detail, e.g. via linear or nonlinear (mixed) models, as the propagation of uncertainty needs to be carefully assessed and accounted for. This is necessary for making valid inferences and upholding the common statistical guarantees often erroneously assumed to be automatically fulfilled. We recommend the conservative and proper approach of always accounting for the uncertainty of the AE.