Finite sample bounds for expected number of false rejections under martingale dependence with applications to FDR

Much effort has been made to improve the famous step up test of Benjamini and Hochberg given by the linear critical values $\frac{i\alpha}{n}$. It was pointed out by Gavrilov, Benjamini and Sarkar that step down multiple tests based on the critical values $\beta_i=\frac{i\alpha}{n+1-i(1-\alpha)}$ still control the false discovery rate (FDR) at the upper bound $\alpha$ under basic independence assumptions. Since this result is no longer true for step up tests or for dependent individual tests, a substantial discussion about the corresponding FDR started in the literature. The present paper establishes finite sample formulas and bounds for the FDR and the expected number of false rejections for multiple tests using the critical values $\beta_i$ under martingale and reverse martingale dependence models. It is pointed out that martingale methods are natural tools for the treatment of local FDR estimators, which are closely connected to the present coefficients $\beta_i$. The martingale approach also yields new results and further insight for the special basic independence model.


Introduction
Multiple tests are nowadays well established procedures for judging high dimensional data. The famous Benjamini and Hochberg [2] step up multiple test given by linear critical values controls the false discovery rate FDR for various dependence models. The FDR is the expectation of the ratio of the number of false rejections divided by the number of all rejected hypotheses. For these reasons the linear step up test is frequently applied in practice. Gavrilov et al. [11] pointed out that for step down tests the linear critical values can be substituted by β_i = iα/(n+1-i(1-α)), (1.1) and the FDR control (i.e. FDR ≤ α) remains true for the basic independence model of the underlying p-values. Note that the present critical values β_i are closely related to critical values given by the asymptotically optimal rejection curve obtained by Finner et al. [8]. In that asymptotic setup they derived step up tests with asymptotic FDR control under various conditions. However, step up multiple tests given by the β_i's do not control the FDR at the desired level α at finite sample size, see for instance Dickhaus [7], Gontscharuk [12]. The intention of the present paper is twofold.
• We aim to calculate the FDR of step down and step up tests more precisely using martingale and reverse martingale arguments. Here we also obtain new results under the basic independence model.
• On the other hand we extend the results to dependent p-values which are martingale or reverse martingale dependent. As an application, finite sample FDR formulas for step down and step up tests based on (1.1) are derived. We refer to the Appendix for a collection of examples of martingale models.
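For orientation, the two families of critical values can be compared numerically. The following sketch (plain Python, function names are ours) contrasts the β_i of (1.1) with the linear Benjamini and Hochberg values:

```python
# Sketch comparing the Gavrilov-Benjamini-Sarkar step down critical values
# beta_i = i*alpha / (n + 1 - i*(1 - alpha)) of (1.1) with the linear
# Benjamini-Hochberg values i*alpha / n.
def gbs_critical_values(n, alpha):
    """beta_1 <= ... <= beta_n from (1.1)."""
    return [i * alpha / (n + 1 - i * (1 - alpha)) for i in range(1, n + 1)]

def linear_critical_values(n, alpha):
    """Linear Benjamini-Hochberg critical values i*alpha/n."""
    return [i * alpha / n for i in range(1, n + 1)]

n, alpha = 10, 0.05
beta = gbs_critical_values(n, alpha)
lin = linear_critical_values(n, alpha)
# beta_1 = alpha/(n + alpha) lies slightly below the linear value alpha/n,
# while beta_n = n*alpha/(1 + n*alpha) is considerably larger than alpha.
```

The β_i start slightly below the linear values but end well above them, which is one source of the power gain of the corresponding tests.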
Martingale arguments were used earlier in Storey et al. [22], Pena et al. [17], Heesen and Janssen [14] for step up and in Benditkis [1] for step down multiple tests. This paper is organized as follows. Below the basic notation is introduced. Section 2 presents our results for step down tests. A counterexample, Example 4, motivates the study of specific dependence concepts which allow FDR control, namely our martingale dependence model. The FDR formula, see (1.7) below, consists of two terms. In particular, it relies on the expected number of false rejections, which is studied in Sections 2.1 and 2.2. Note that the results of Lemma 6 naturally motivate the consideration of martingale methods. Section 2.3 is devoted to FDR control under dependence, which extends the results of Gavrilov et al. [11]. Within the class of step down tests the first coefficient β_1 is often responsible for the quality of the multiple test. In Section 2.4 we propose an improvement of the power of SD procedures by increasing the first critical values without losing FDR control.
Step up multiple tests corresponding to the β's from (1.1) are studied in Sections 3 and 4. We obtain a lower bound for the present FDR which can be greater than α. A couple of examples for martingale models can be found in the Appendix. The proofs and additional material are collected in Section 5.
Basics. Let us consider a multiple testing problem which consists of n null hypotheses H_1, ..., H_n with associated p-values p_i, i = 1, ..., n. Assume that all p-values arise from the same experiment given by one data set, where each p_i can be used for testing the traditional null H_i. The p-value vector p = (p_1, ..., p_n) ∈ [0, 1]^n is a random variable based on an unknown distribution P. Recall that simultaneous inference can be established by so-called multiple tests φ = φ(p), φ = (φ_1, ..., φ_n) : [0, 1]^n → {0, 1}^n, which reject the null H_i iff, i.e. if and only if, φ_i(p) = 1 holds. The set of hypotheses can be divided into the disjoint union I_0 ∪ I_1 = {1, ..., n} of unknown portions of true null I_0 and false null I_1, respectively. We denote the number of true null by n_0 = |I_0| and the number of false ones by n_1 = |I_1| = n − n_0, where n_0 > 0 is assumed. Widely used multiple testing procedures can be represented as φ_τ = (I(p_1 ≤ τ), ..., I(p_n ≤ τ)) via the indicator function I(·), where τ ∈ [0, 1] is a random critical boundary variable. Thus all null hypotheses with related p-values that are not larger than the threshold τ are rejected. Let p_{1:n} ≤ p_{2:n} ≤ ··· ≤ p_{n:n} denote the ordered values of the p-values p. Definition 1. Let α_1 ≤ α_2 ≤ ··· ≤ α_n be a deterministic sequence of critical values. Set for convenience max{∅} = 0.
(a) The step down (SD) critical boundary variable is given by τ_SD = max{α_i : p_{j:n} ≤ α_j for all j ≤ i}.
(b) The step up (SU) critical boundary variable is given by τ_SU = max{α_i : p_{i:n} ≤ α_i}. The appertaining multiple tests φ_SD = φ_{τ_SD} and φ_SU = φ_{τ_SU} are called step down (SD) test and step up (SU) test, respectively.
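Definition 1 can be sketched in a few lines (function names are ours; the convention max{∅} = 0 is encoded by the default value 0.0):

```python
# Step down and step up critical boundary variables of Definition 1
# for critical values crit[0] <= ... <= crit[n-1].
def tau_sd(pvals, crit):
    """tau_SD = max{alpha_i : p_{j:n} <= alpha_j for all j <= i}, max{} = 0."""
    tau = 0.0
    for p, a in zip(sorted(pvals), crit):
        if p <= a:
            tau = a          # all smaller order statistics passed as well
        else:
            break            # first violation stops the step down search
    return tau

def tau_su(pvals, crit):
    """tau_SU = max{alpha_i : p_{i:n} <= alpha_i}, max{} = 0."""
    p = sorted(pvals)
    for i in range(len(p) - 1, -1, -1):   # scan from the largest index down
        if p[i] <= crit[i]:
            return crit[i]
    return 0.0
```

Since the SU search starts from the top, τ_SU ≥ τ_SD always holds, so the SU test rejects at least as many hypotheses as the SD test with the same critical values.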
Let F̂_n denote the empirical distribution function of the p-values and let R(τ) = Σ_{i≤n} I(p_i ≤ τ) = n F̂_n(τ) be the number of all rejections w.r.t. τ; denote by V(τ) and S(τ) the number of false and true rejections, respectively. The False Discovery Rate (FDR) of a procedure with critical boundary variable τ is defined as FDR = E[V(τ)/R(τ)] with the convention 0/0 = 0. The FDR is often chosen as an error rate control criterion. There is another useful equivalent description of step down tests.
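The definitions above can be combined into a small Monte Carlo sketch of the FDR of the β_i-based SD test under a Dirac-uniform configuration (our illustration, not a construction from the paper):

```python
import random

def fdr_sd_monte_carlo(n, n0, alpha, reps=2000, seed=1):
    """Monte Carlo estimate of FDR = E[V(tau)/R(tau)] (0/0 = 0) for the
    step down test with critical values beta_i from (1.1), where the
    n - n0 false p-values are fixed at 0 and the n0 true p-values are
    i.i.d. U(0,1)."""
    rng = random.Random(seed)
    beta = [i * alpha / (n + 1 - i * (1 - alpha)) for i in range(1, n + 1)]
    total = 0.0
    for _ in range(reps):
        true_p = [rng.random() for _ in range(n0)]
        pvals = [0.0] * (n - n0) + true_p
        tau = 0.0
        for p, b in zip(sorted(pvals), beta):
            if p <= b:
                tau = b
            else:
                break
        V = sum(1 for p in true_p if p <= tau)   # false rejections
        R = sum(1 for p in pvals if p <= tau)    # all rejections
        total += V / R if R > 0 else 0.0
    return total / reps
```

Under the basic independence assumptions the estimate stays below α, in line with the FDR control result of Gavrilov et al. for the SD test.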

Remark 2. Introduce the random variable
σ = min{α_i : p_{i:n} > α_i} ∧ α_n (with min{∅} = ∞),
where a ∧ b = min(a, b) denotes the minimum of two real numbers a and b.
Then we have τ_SD ≤ σ but the step down tests coincide, φ_SD = φ_σ. The reason for this is that no p-value falls in the interval (τ_SD, σ] and R(τ_SD) = R(σ) is valid.
There is much interest in multiple tests such that the FDR is controlled by a prespecified acceptable level α ∈ (0, 1), i.e. in bounding the expectation of the portion of false rejections. The well known Benjamini and Hochberg multiple tests with linear critical values α_i = iα/n lead to the FDR bound FDR ≤ α n_0/n for SD and SU tests under positive dependence, more precisely under positive regression dependence on a subset (PRDS). There are several proposals to exhaust the level α more accurately by an enlarged choice of critical values. A proper choice for SD tests are the critical values α_i = β_i, 1 ≤ i ≤ n, which allow the control FDR ≤ α under the basic independence assumption of the p-values, see Gavrilov et al. [11]. Note that f_α(β_i) = i/(n+1) for i = 1, ..., n, where the finite sample rejection curve g_α belonging to the β_i is close to the asymptotically optimal rejection curve f_α(t) = t/(α + (1−α)t), see Finner et al. [8]. It is known that SU tests given by the β_i do not control the FDR for the independence model in general, see Gontscharuk [12], Heesen and Janssen [14]. If the p-values are dependent then FDR control of the SD tests based on β_i, i ≤ n, cannot be expected in general (see Example 4 of Section 2). Gavrilov et al. [11], Theorem 1A, propose to reduce the critical values β_i in order to get FDR control of SD tests under positive regression dependence on a subset. Unfortunately, the procedure based on these new reduced critical values may be too conservative. Below we keep the critical values α_i = β_i, i ≤ n, of (1.5) and introduce dependence assumptions for the p-values which ensure FDR control for the underlying SD tests.
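The relation between the β_i and the rejection curve can be checked numerically: with f_α(t) = t/(α + (1 − α)t), a short computation gives f_α(β_i) = i/(n + 1) exactly. A sketch of this check:

```python
# Check that the critical values beta_i of (1.1) lie exactly on the
# rejection curve f_alpha(t) = t / (alpha + (1 - alpha) * t) of
# Finner et al. in the sense f_alpha(beta_i) = i / (n + 1).
def f_alpha(t, alpha):
    return t / (alpha + (1 - alpha) * t)

n, alpha = 50, 0.1
beta = [i * alpha / (n + 1 - i * (1 - alpha)) for i in range(1, n + 1)]
max_err = max(abs(f_alpha(b, alpha) - i / (n + 1))
              for i, b in enumerate(beta, start=1))
# max_err vanishes up to rounding: beta_i = f_alpha^{-1}(i/(n+1)).
```

Equivalently, β_i = f_α^{-1}(i/(n + 1)), so the β_i are the inverse images of the equidistant grid i/(n + 1) under f_α.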
The main idea of this paper can be outlined as follows. The FDR of SD and SU tests based on the critical values β_i equals the two-term expression (1.7). A monotonicity argument implies the next Lemma.
Lemma 3. Consider an SD or SU test with critical values (α_i)_i given by (1.5). Then (1.8) and (1.9) ensure FDR control, i.e. FDR ≤ α.
Whereas the FDR is hard to bound under dependence, the inequality (1.8) is known under PRDS and equality holds under a reverse martingale structure (including the basic independence model), see Heesen and Janssen [14] for SU tests. Then it remains to bound the expected number of false rejections E[V], which is at least possible for SD tests under certain martingale dependence assumptions. In the following we always use the general assumption that the p-values for the true null (H_i)_{i∈I_0} fulfil
Σ_{i∈I_0} P(p_i ≤ t) ≤ n_0 t for all t ∈ [0, 1), (1.10)
which can be interpreted as a "stochastically larger" condition, in the mean over I_0, compared with the uniform distribution.
Now, we define the basic independence assumptions (BIA) that are often used in the FDR control framework.
(BIA) We say that the p-values fulfil the basic independence model if the vectors of p-values (p_i)_{i∈I_0} and (p_i)_{i∈I_1} are independent, and an arbitrary dependence structure is allowed for the "false" p-values within (p_i)_{i∈I_1}. Under the true null hypotheses the p-values (p_i)_{i∈I_0} are independent and stochastically larger than (or equal to) the uniform distribution on [0, 1], i.e., P(p_i ≤ x) ≤ x for all x ∈ [0, 1] and i ∈ I_0.
If in addition the p-values p_i, i ∈ I_0, are i.i.d. uniformly distributed on [0, 1] then we speak of the BIA model with uniform true p-values.

Results for step down procedures
In this section we consider a step down procedure with critical values β_i, i ≤ n, from (1.1). It is well known that this procedure controls the FDR if the p-values fulfil the basic independence assumptions (BIA) (cf. Gavrilov et al. [11]). However, in practice the independence of the single tests corresponding to the present p-values is rarely satisfied. For generally dependent p-values the FDR of the SD test may exceed the level α. The next counterexample motivates the consideration of special kinds of dependence in order to establish FDR control.
For such p-values the FDR exceeds α. We will start with the expected number of false rejections (ENFR), which was studied earlier by Finner and Roters [10] and Scheer [19].

Control of the expected number of false rejections E [V ]
The present martingale approach relies on the empirical distribution function F̂_n of the p-values and on an adapted stochastic process α̂(t) w.r.t. the filtration F_t^T = σ{I(p_i ≤ s), s ≤ t, s, t ∈ T, i ≤ n} of the p-values. (2.1) Thereby, T ⊂ [0, 1) is a parameter space with 0 ∈ T. The value α̂(t) is frequently used as a conservative estimator for the FDR at the constant critical boundary value τ = t. Storey et al. [21] use a similar estimator for the FDR(t) of SU tests if the p-values are independent. A similar estimator is also used by Benjamini, Krieger and Yekutieli [3], Heesen and Janssen [15] and Heesen [13]. It is easy to see that for β_i, i ≤ n, we get the corresponding relations from (1.5). The consequences of these useful relations are summarized below. It is quite obvious that the maximal coefficients β_i, i ≤ n, of the α's in (1.5) and the extreme p-values p_i = 0, i ∈ I_1, for all false null are least favourable for bounding E[V]. First, we focus on the β_i-based SD procedure. An important role is played by the process M_{I_0}(t) = Σ_{i∈I_0} (I(p_i ≤ t) − t)/(1 − t). The probability P(R(τ_SD) = n) is typically very small. Note that we will show below by martingale arguments that E[M_{I_0}(τ_SD)] ≤ 0, which implies the crucial condition (1.9).
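The estimator α̂(t) is of Storey type. A minimal sketch of the common plug-in variant (our simplified version with n_0 replaced by the conservative choice n; not necessarily the exact definition behind (2.1)):

```python
# Conservative Storey-type FDR estimate at a fixed threshold t:
# alpha_hat(t) = n * t / (n * Fhat_n(t) v 1), where Fhat_n is the
# empirical distribution function of the p-values.
def alpha_hat(pvals, t):
    n = len(pvals)
    rejections = sum(1 for p in pvals if p <= t)   # n * Fhat_n(t)
    return n * t / max(rejections, 1)
```

Replacing n_0 by n makes the estimator conservative, i.e. it overestimates the expected portion of false rejections among the rejections at threshold t.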
Next, we introduce a dependence assumption which allows the control of the expected number of false rejections of the SD procedure with critical values β_i, i ≤ n.
Note that the super-martingale model (D1) includes BIA if J = I 0 . This is well known, see Shorack and Wellner [20] (p. 133), Benditkis [1]. Some examples of martingale dependent random variables can be found in a separate Appendix.
Recall that the general condition (1.10) implies E [M I0 (0)] = 0, which is always assumed. The next remark shows that under (D1) we can assume that the p-values which belong to true null are stochastically larger compared with the uniform distribution on (0, 1) (cf. Heesen and Janssen [14], Benditkis [1]). Let U (0, 1) denote the uniform distribution on the unit interval.
i ∈ I_0, are stochastically larger than U(0, 1). (c) As long as the critical boundary value τ only depends on the order statistics, the multiple test φ_τ remains unchanged if (p_i)_{i≤n} is substituted by ((p_{σ(i)})_{i∈I_0}, (p_i)_{i∈I_1}). (d) It can be shown that under (D1) the (super-)martingale assumption also holds under the filtration given by the exchangeable (Y_i)_{i∈I_0} and (p_i)_{i∈I_1}. Note that the exchangeability of the p_{σ(i)}, i ∈ I_0, is only needed in the proofs in connection with the PRDS assumption introduced in Section 2.3.
Proof of (a) and (b). First, note that the random variables Y_i, i ∈ I_0, are exchangeable, since σ is an independent random permutation; this implies the assertions (a) and (b). Now, we formulate the main result of this subsection under the super-martingale assumption, which will be applied to our equality (1.7). Theorem 8. Consider the SD procedure with critical values β_i, i ≤ n, given in (1.5). Suppose that the super-martingale assumption (D1) holds with J = I_0 and T = {0, β_1, ..., β_n}. We get the crucial bound (1.9) for the ENFR.

Consequences under Dirac-Martingale configurations
In this subsection we consider the following assumptions: (i) the p-values (p_i)_{i∈I_0} fulfil the martingale assumption of (D1) with equality, and (ii) p_i = 0 for all i ∈ I_1 (Dirac configuration). Structures that fulfil the assumptions (i) and (ii) are called Dirac martingale configurations DM(n_1). Part (a) of the next lemma provides exact formulas for the ENFR for step down tests with critical values β_i. Part (b) derives a lower bound for the ENFR if the (p_i)_{i∈I_1} are instead uniformly distributed, which is another example of extreme ordering compared with (ii).
Lemma 9 (Some exact formulas for the ENFR). Suppose that the martingale assumption (i) holds. Let τ_SD be the critical boundary value which corresponds to the critical values β_i. (a) Assume additionally (ii). (b) Assume instead that the (p_i)_{i∈I_1} are i.i.d. uniformly distributed. Then each p_i, i ≤ n, is uniformly distributed.

Control of the FDR
As mentioned in Lemma 3 the control (1.9) of the ENFR is not enough for FDR control. In addition, the term in (1.8), which involves n_0, has to be bounded. To do this we need further assumptions.
The dependence assumption (D2) is well known in the FDR framework. Benjamini and Yekutieli [5] proved that the Benjamini and Hochberg linear step up test controls the FDR under this kind of positive dependence. Gavrilov et al. [11] have shown that in this case the FDR of the step down procedure using the critical values β_i, i ≤ n, may exceed the pre-chosen level α. Theorem 11 proves FDR control of that SD test under the additional super-martingale assumption.
Theorem 11. Let (p_i)_{i≤n} fulfil the super-martingale assumption (D1) with T = {0, β_1, ..., β_n} and the PRDS assumption (D2) on I_0. Then we have FDR ≤ α for the β_i-based SD procedure. The next lemma is a technical tool for the proof of Theorem 11.
Remark 13. (a) Lemma 12 remains true for any random variable τ = τ(p) which is a nonincreasing function of p_i, i ∈ I_0, and has a finite range of values {a_1, ..., a_m}, 0 < a_1 ≤ a_2 ≤ ... ≤ a_m for some m ∈ N. That means that the corresponding expectation can always be bounded under PRDS. The exact structure of the random variable τ is not important. The inequality remains true for SD as well as for SU tests. (b) Theorem 11 remains true for any F_T-stopping time τ̃ with values in T. The next theorem shows that we can relinquish the PRDS assumption (D2) under some modification of the assumption (D1).
Observe that the filtrations F_T and F_T^f are different. The martingale condition w.r.t. F_T^f holds if M_{I_0} is a martingale conditionally on the outcomes (p_i)_{i∈I_1}, which is weaker than BIA. Although the presented bound α/(1−α) is slightly larger than α, the inequality can be a gain if the PRDS assumption cannot be verified.

Improvement of the power
In this subsection we concentrate on the power of FDR-controlling procedures, which can be characterized by the value E[S(τ)]/n_1 for n_1 > 0. Let us consider an SD procedure with arbitrary critical values α_i, i ≤ n, which controls the FDR. Then we can increase the corresponding critical boundary value τ_SD and improve the power of this procedure without loss of FDR control under the PRDS assumption. Note that the result seems to be new also for the BIA model. Assume that
1. the random variables (p_i)_{i∈{1,...,n}} satisfy (D2) on I_0,
2. (p_i)_{i∈I_0} and (p_i)_{i∈I_1} are stochastically independent,
3. each p_i, i ∈ I_0, is stochastically larger than U(0, 1),
4. the SD procedure using critical values α_i, i ≤ n, controls the FDR at level α under 1.-3.
Then an SD procedure using the correspondingly enlarged critical values still controls the FDR at level α. Note that 1 − (1 − α)^{1/n} is the smallest critical value of the SD procedure which was proposed by Benjamini and Liu [4]. The procedure of Benjamini and Liu controls the FDR under BIA, see Benjamini and Liu [4], and under the PRDS assumption, see Sarkar [18].
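The possible gain from enlarging the first critical value can be quantified numerically; a sketch of this comparison, assuming the standard form 1 − (1 − α)^{1/n} of the smallest Benjamini and Liu critical value:

```python
# Compare the first critical value beta_1 = alpha/(n + alpha) of (1.1)
# with the smallest Benjamini-Liu step down critical value
# 1 - (1 - alpha)^(1/n) (standard form, assumed here for illustration).
def beta_1(n, alpha):
    return alpha / (n + alpha)

def benjamini_liu_first(n, alpha):
    return 1.0 - (1.0 - alpha) ** (1.0 / n)

n, alpha = 100, 0.05
b1, bl1 = beta_1(n, alpha), benjamini_liu_first(n, alpha)
# b1 < alpha/n < bl1: the Benjamini-Liu value strictly dominates beta_1,
# so replacing beta_1 by it enlarges the rejection region.
```

Since β_1 < α/n < 1 − (1 − α)^{1/n}, raising the first critical value strictly enlarges the rejection region of the SD test.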

Results for SU Procedures
It is well known that the FDR of SU multiple tests with critical values β_i, i ≤ n, see (1.1), may exceed the prespecified level α. In particular, by Lemma 3.25 of Gontscharuk [12] the worst case FDR is greater than α in the limit n → ∞. The reason for this is that E[V(τ_SU)] may exceed the bound α/(1−α)(n_1 + 1) under some Dirac uniform configurations. Below the critical values β_i are slightly modified in order to get finite sample FDR control. The main tools for the proofs are reverse martingale arguments, which were already applied by Heesen and Janssen [14] for step up tests and which extend results for BIA models. Introduce the reverse filtration G_t^T = σ(I(p_i ≤ s), 1 ≤ i ≤ n, s ≥ t, s, t ∈ T) given by the p-values.
(R) Let T ⊂ (0, 1] be a set with 1 ∈ T. We say that the p-values p_1, ..., p_n are G_t^T-reverse super-martingale dependent if (3.1) holds, with equality "=" if the reverse super-martingale is a reverse martingale. Remark 19. The inequality (3.1) is also fulfilled under the so-called "dependency control condition" which was proposed by Blanchard and Roquain [6]. Note that the assumption (R) and the dependency control condition do not imply each other.
Lemma 18 applies to various SU tests.
(a) (SU tests given by (a_i)_i.) Introduce τ_0 = max{t ∈ [0, 1] : F̂_n(t) ≥ k/n} and the corresponding reverse stopping time given by a Dirac uniform configuration, where FDR_{DU(n_1)}(n, δ) denotes the step up FDR under DU(n_1) with uniformly distributed p-values p_i for i ∈ I_0. Recall from Heesen and Janssen [14] that there exists a unique parameter κ_n = δ ∈ (0, 1 − α) for the coefficients (3.5), with larger (smaller) worst case FDR for δ > κ_n (δ < κ_n, respectively). The solution κ_n can be found by checking the maximum (3.6) over a finite number of constellations. The next theorem introduces the asymptotics of the present SU tests under the basic independence model. (b) Assume that δ_n → δ ∈ (0, 1 − α) and let the portion n_0/n ≤ c be bounded by some constant α < c < 1. Then the asserted bound holds for lim sup_{n→∞} sup_{P∈BI(n), n_0 ≤ cn}. Remark 22. Theorem 21 together with the finite sample adjusted SU tests at parameter κ_n, see (3.7), can be viewed as a finite sample contribution to the program of Finner et al. [8], who obtained the asymptotically optimal rejection curve for SU tests.

Finite results for SU tests using critical values of Gavrilov et al.
Consider below an SU test using the critical values β_i = iα/(n+1−i(1−α)), i ≤ n. As mentioned above, this SU test may exceed the FDR level α under some Dirac uniform configurations (cf. Gontscharuk [12], Heesen and Janssen [14]). As we can see from Figure 1, the FDR of the SU test may be larger than the prechosen level α, in contrast to the SD test based on the same critical values β_i, i ≤ n. The next theorem gives an explanation in terms of the ENFR.
Theorem 23. Let (p_i)_{i∈I_0} fulfil the reverse martingale dependence assumption (R) and p_j = 0 for all j ∈ I_1. Moreover, we have ">" in (a) and (b) if n_0 > f(n).
In the concrete situation of Figure 1 we observe f(50) = 9.8, which is visible in the first graphic.

The proofs and technical results
The proof of Lemma 3 is obvious. Proof of Lemma 5. (a) First, consider the case {i : p_{i:n} > β_i, i ≤ n} ≠ ∅ and define j* = min{i : p_{i:n} > β_i, i ≤ n}. Then we get σ = β_{j*} due to the definition of σ. This implies β_{j*} < p_{j*:n} and β_i ≥ p_{i:n} for all i ≤ j* − 1.
Since τ_SD is not a stopping time w.r.t. F_T we turn to the critical boundary σ in order to apply Lemma 5.
1. The proof of the next Lemma 12 uses the technique which was proposed by Finner and Roters [9] for the proof of FDR control of the Benjamini and Hochberg test under PRDS. 2. As long as we are concerned with the super-martingale assumption (D1) we may assume w.l.o.g. that (p_i)_{i∈I_0} are identically distributed and stochastically larger than U(0, 1), cf. Remark 7. These technical tools are only used for the subsequent proofs of Sections 2.2-2.4 in connection with PRDS. The reference to Remark 7 is not cited again in each step of the proofs.
Proof of Lemma 12. Let us define β_0 = 0 for technical reasons and denote (U_j)_{j≤n_0} := (p_i)_{i∈I_0}. First, note that τ_SD = β_R holds obviously. Thereby, R = R(τ_SD) is the number of rejections of the SD procedure with deterministic critical values β_i, i ≤ n. We obtain the following sequence of (in)equalities. The inequality in (5.10) is valid since U_1, ..., U_{n_0} are stochastically larger than U(0, 1). The inequality in (5.11) holds because the relevant indicator is an increasing function of x for all i ∈ {1, ..., n_0} and since U_1, ..., U_{n_0} are assumed to be PRDS. Consequently, using the telescoping sum we obtain the first equality in (5.12). The proof is completed because β_{R(p)} ≤ β_n by definition of β_R = τ_SD.
Proof of Theorem 11. Combining Lemma 3, Theorem 8 and Lemma 12 yields the statement.
To prove Theorem 14 we need the following technical result. Proof of Lemma 25. First, note that the process M is a centered martingale. Define β_0 = 0. Now, we can continue the chain of equalities (5.13) as follows.
because the first term in (5.14) is equal to zero due to the telescoping sum since E[M(β_n)] = 0. Now, we will show that the asserted inequality holds for k ≥ 2. The last inequality follows from (5.21) and Lemma 26. Now, we are able to prove Theorem 14.
Proof of Theorem 14. Recall (5.1). Dividing by S(σ) + 1, taking the expectation E[·] and using Lemma 25 delivers the result. To prove Lemma 15 we need the following technical result.
Lemma 28. Let (b_i)_{i∈I_0} be a set of real numbers with b_i ∈ [0, 1], i ∈ I_0. If the p-values (p_i)_{i≤n} are PRDS on I_0 then the asserted inequality holds. Proof. W.l.o.g. let us assume that I_0 = {1, ..., n_0}. For any other subset I_0 the proof works in the same way. First, we show the inequality (5.24). To do this let F denote the relevant marginal distribution function. Then (5.24) is equivalent to (5.25). From the mean value theorem for Riemann-Stieltjes integrals we can deduce that there exist some values ξ_1 and ξ_2 with (5.26). Since f is an increasing function of u, (5.26) yields ξ_1 ≤ ξ_2, hence we get (5.25). Further, we obtain the next chain of (in)equalities. The inequality in (5.29) holds due to the PRDS assumption since, according to the law of total probability, we have (5.24).

(5.30)
For the two following proofs we define the p-values which belong to the true null by (U_i)_{i∈{1,...,n_0}} = (p_j)_{j∈I_0}. Now, we are able to prove Lemma 15 by conditioning on the portion f belonging to the false null.
Proof of Lemma 15. We consider an arbitrary FDR-controlling SD procedure that uses critical values α_i, i ≤ n. Let us define j* := max{i : f_j ≤ 1 − (1 − α)^{1/n} for all j ≤ i} and consider two possible cases.
1. Let j* = 0. In this case we have, under the PRDS assumption, the following estimate, where the last inequality is valid due to Lemma 28. 2. Let j* > 0. Define the vector f*_0 = (0, ..., 0, f_{j*+1}, ..., f_{n_1}), where the first j* coordinates are replaced by 0. We get the next relation. Thereby, τ_SD is the critical boundary value corresponding to the SD procedure with critical values α_i, i ≤ n.
Proof of Lemma 17. Note that E[V/R] = E[I(f_1 ≤ d_1) V/R + I(f_1 > d_1) V/R] is always valid and let us consider two different cases: (a) f_1 ≤ d_1, (b) f_1 > d_1.
(a) Since the hypothesis belonging to f_1 will be rejected, the equality E_{f_0}[V/R] = E_f[V/R] holds. Therefore the statement of the lemma is proved in this case.
(b) If f_1 > d_1 holds, we have due to the PRDS assumption the displayed inequality, where U_{1:n_0} is the smallest true p-value. Hence, we get E_f[V/R] ≤ α for all possible vectors f, which completes the proof.
Proof of Lemma 18. The optional stopping theorem for reverse martingales implies the assertion. It is quite obvious that the variables (3.2) and (3.4) are stopping times and Lemma 18 can be applied. The proof of Theorem 21 requires some preparations. Consider a wider class of rejection curves given by positive parameters b and α, δ ≥ 0, and g(t) = bt/(δt + α). Note that the condition g(1) > 1 is necessary for proper SU tests with critical values a_i = g^{-1}(i/n), i ≤ n. By the choice b = (n+1)/n and δ = 1 − α the coefficients β_i are included. Thus we arrive at the following equation for the FDR of (5.34) for each multiple test. In contrast to SD tests the term E[V/R] can be bounded under the R-super-martingale condition, cf. Heesen and Janssen [14].
Remark 29. Consider SU tests for parameters (δ, b, α) under R-super-martingale models with fixed portion n 1 < n of false p-values.
as n → ∞. To obtain the upper bound we can first exclude all coefficients with δ_n = 0, which correspond to a Benjamini and Hochberg test at level nα/(n+1). Fix some value γ with lim sup_{n→∞} δ_n < γ < 1 − α and introduce the rejection curve g_γ(t) = t/(γt + α). For all δ_n < γ the FDR(n, δ_n) of the a_i's can now be compared with the FDR of the SU test with critical values g_γ^{-1}(i/n). By (5.35) and Remark 29 we have for each regime FDR(n, δ_n) ≤ FDR_n(g_γ^{-1}) using obvious notation. The worst case asymptotics is given by Theorem 5.1 of Heesen and Janssen [14]. (b) Similarly as above the FDR(n, δ_n) is bounded below and above by the FDR of SU tests given by rejection curves. Choose constants 0 < γ_1 < δ < γ_2 < 1 − α and b > 1 and consider δ_n ∈ (γ_1, γ_2) and large n with (n+1)/n ≤ b. Introduce g_{γ_2}(t) = t/(γ_2 t + α) and g_{γ_1,b}(t) = tb/(γ_1 t + α). Again we have FDR_n(g_{γ_1,b}) ≤ FDR(n, δ_n) ≤ FDR_n(g_{γ_2}).
Proof of Theorem 23. Define the process M(t), which is obviously a centered reverse martingale w.r.t. G_t^T due to the reverse martingale assumption. Now, recall that for the step-wise procedure using critical values (β_i)_{i≤n} the following equality is valid by (5.2) under the Dirac distribution of the "false" p-values (p_i)_{i∈I_1}. Thus, we have to show the displayed bound to prove part (a). First note that, because of V(τ_SU) = V(τ_SU ∨ β_1), it is enough to prove the statement of the theorem for the reverse stopping time τ̃_SU = τ_SU ∨ β_1, where τ̃_SU ∈ {β_1, ..., β_n}.

This completes the proof of part (a). (b) The second part follows immediately from (a) and from the formula for the FDR of the SU procedure based on the critical values (β_i)_{i≤n} under the reverse martingale model.
Appendix. Examples of martingale models.
The family of (super-)martingales is a rich class of models which is briefly reviewed below. In this section we present a couple of examples. Further examples can be found in Heesen and Janssen [14] and Benditkis [1]. For convenience let us describe the models in this section by distributions P on [0, 1]^n, where the coordinates (p_1, ..., p_n) ∈ [0, 1]^n represent p-values. We restrict ourselves to martingale models ([0, 1]^n, P, (F_t^T)_{t∈T}). (5.42) Let M_{I_0,I_1}^T denote the set of martingale models P on [0, 1]^n for a fixed partition I_0 ≠ ∅, I_1 of {1, ..., n} and {0} ⊂ T ⊂ [0, η] for some 0 < η < 1. Obviously, there is a one to one correspondence between martingales and reverse martingales via the transformation p̃_i = 1 − p_i, T̃ = {1 − t : t ∈ T}, G̃_t^T := σ(I(p̃_i ≤ s), s ≥ t, s, t ∈ T̃) (5.43) of (5.42). Note also that (p_i)_{i∈I_0} follow special copula models if each p_i, i ∈ I_0, is uniformly distributed on (0, 1). To warm up, consider first some useful elementary examples which will be combined below.
Example 30. (a) (Marshall and Olkin type dependence, see Marshall and Olkin [16].) Let X_1, ..., X_n be i.i.d. continuously distributed real random variables and let Y be a continuously distributed real random variable independent of X_1, ..., X_n. Consider Z_i := min(X_i, Y). (b) (Block dependence.) Suppose that {1, ..., n} splits into k disjoint blocks I_{0,j} ∪ I_{1,j} of true null portions I_{0,j} and false null portions I_{1,j}. Let U_1, ..., U_k denote i.i.d. uniformly distributed random variables on (0, 1). Suppose that (U_1, (p_i)_{i∈I_{1,1}}), (U_2, (p_i)_{i∈I_{1,2}}), ..., (U_k, (p_i)_{i∈I_{1,k}}) are independent martingale models of dimension |I_{1,j}| + 1 for j ≤ k. The U's can be duplicated by the definition on T. (c) (Price processes.) For some constant K ≥ max{s/(1−s) : s ∈ T} the process t → X_t can be viewed as a discounted price process for time points t ∈ T. The existence of martingale measures for (X_t, F_t)_{t∈T} on the domain [0, 1]^n is well studied in mathematical finance, see Shiryaev (1999). When the parameter set T is finite it turns out that the space of probability measures on [0, 1]^n making that process a martingale is infinite dimensional. (d) (Super-martingales.) It is well known that the process
Σ_{i∈I_0} (I(p_i ≤ t) − t)/(1 − t) = M_t + A_t, t ∈ T (5.44)
admits a Doob-Meyer decomposition given by an (F_t)_{t∈T} martingale t → M_t and a compensator t → A_t which is predictable with A_0 = 0. Note that (5.44) is a super-martingale if t → A_t is non-increasing.
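The Marshall and Olkin construction of Example 30(a) is easy to simulate; a sketch with exponential margins (our choice for illustration, function names are ours):

```python
import random

def marshall_olkin_sample(n, rate_x=1.0, rate_y=0.5, seed=7):
    """Z_i = min(X_i, Y): i.i.d. X_1, ..., X_n coupled through one
    independent common shock Y, a Marshall-Olkin type dependence.
    Returns the sample and the shock for inspection."""
    rng = random.Random(seed)
    y = rng.expovariate(rate_y)                       # common shock Y
    z = [min(rng.expovariate(rate_x), y) for _ in range(n)]
    return z, y
```

All Z_i share the common upper bound Y, which is the source of the positive dependence; p-values are then obtained by applying a monotone transform such as the survival function of Z_i.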