A Simple Optimal Contention Resolution Scheme for Uniform Matroids

Contention resolution schemes (or CR schemes), introduced by Chekuri, Vondrak and Zenklusen, are a class of randomized rounding algorithms for converting a fractional solution to a relaxation for a down-closed constraint family into an integer solution. A CR scheme takes a fractional point $x$ in a relaxation polytope, rounds each coordinate $x_i$ independently to get a possibly non-feasible set, and then drops some elements in order to satisfy the constraints. Intuitively, a contention resolution scheme is $c$-balanced if every element $i$ is selected with probability at least $c \cdot x_i$. It is known that general matroids admit a $(1-1/e)$-balanced CR scheme, and that this is (asymptotically) optimal. This is in particular true for the special case of uniform matroids of rank one. In this work, we provide a simple and explicit monotone CR scheme for uniform matroids of rank $k$ on $n$ elements with a balancedness of $1 - \binom{n}{k}\:\left(1-\frac{k}{n}\right)^{n+1-k}\:\left(\frac{k}{n}\right)^k$, and show that this is optimal. As $n$ grows, this expression converges from above to $1 - e^{-k}k^k/k!$. While this asymptotic bound can be obtained by combining previously known results, these require defining an exponential-sized linear program, as well as using random sampling and the ellipsoid algorithm. Our procedure, on the other hand, has the advantage of being simple and explicit. This scheme extends naturally into an optimal CR scheme for partition matroids.


Introduction
Contention resolution schemes were introduced by Chekuri, Vondrak, and Zenklusen [7] as a tool for submodular maximization under various types of constraints. A set function $f : 2^N \to \mathbb{R}$ is submodular if for any two sets $A \subseteq B \subseteq N$ and any element $v \notin B$, the corresponding marginal gains satisfy $f(A \cup \{v\}) - f(A) \ge f(B \cup \{v\}) - f(B)$. Submodular functions are a classical object in combinatorial optimization and operations research [15]. A family of subsets $I \subseteq 2^N$ is called an independence family if $\emptyset \in I$ and $A \subseteq B \in I$ implies $A \in I$. Given a finite ground set $N$, an independence family $I \subseteq 2^N$, and a submodular set function $f : 2^N \to \mathbb{R}$, the problem consists of (approximately) solving $\max_{S \in I} f(S)$.
A successful technique to tackle this problem in recent years has been the relaxation and rounding approach. It consists of first relaxing the discrete problem into a continuous version $\max_{x \in P_I} F(x)$, where $F : [0,1]^N \to \mathbb{R}$ is a suitable continuous extension of $f$, and $P_I$ is a relaxation polytope of the independence family $I$. The first step of the relaxation and rounding approach then approximately solves $\max_{x \in P_I} F(x)$ to obtain a fractional point $x \in P_I$.
In order to get a feasible solution to the original problem, we then need to round this fractional point into an integral and feasible (i.e., independent) one while keeping the objective value as high as possible. Contention resolution schemes are a powerful tool to tackle this problem, and have found other applications outside of submodular maximization [1,10,13].
At a high level, given a fractional point $x$, the procedure first generates a random set $R(x)$ by independently including each element $i$ with probability $x_i$. Since $R(x)$ might not necessarily belong to $I$, the contention resolution scheme then removes some elements from it in order to get an independent set. We denote the support of a point $x$ by $\operatorname{supp}(x) := \{i \in N \mid x_i > 0\}$. A CR scheme is then formally defined as follows.
Definition 1.1 (CR scheme). $\pi = (\pi_x)_{x \in P_I}$ is a $c$-balanced contention resolution scheme for the polytope $P_I$ if for every $x \in P_I$, $\pi_x$ is an algorithm that takes as input a set $A \subseteq \operatorname{supp}(x)$ and outputs an independent set $\pi_x(A) \in I$ contained in $A$ such that $\Pr[i \in \pi_x(R(x)) \mid i \in R(x)] \ge c$ for every $i \in \operatorname{supp}(x)$. Moreover, a contention resolution scheme is monotone if for any $x \in P_I$ and any sets $A \subseteq B \subseteq \operatorname{supp}(x)$, we have $\Pr[i \in \pi_x(A)] \ge \Pr[i \in \pi_x(B)]$ for every $i \in A$. Notice that a contention resolution scheme can have a different algorithm for each $x \in P_I$. A $c$-balanced contention resolution scheme ensures that every element in the random set $R(x)$ is kept with probability at least $c$. The goal when designing CR schemes is thus to maximize such $c$, known as the balancedness.
Moreover, monotonicity is a desirable property for a c-balanced CR scheme to have, since one can then get approximation guarantees for the constrained submodular maximization problem via the relaxation and rounding approach (see [7] for more details).
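Every CR scheme starts from the same independent-rounding step: sample $R(x)$ by flipping a biased coin for each coordinate. A minimal Python sketch of this step (our own illustration; the function name and representation of $x$ are ours):

```python
import random

def sample_R(x, rng=random.Random(0)):
    """Sample the random set R(x): include each element i independently
    with probability x[i].  Here x is a dict mapping elements to [0, 1]."""
    return {i for i, xi in x.items() if rng.random() < xi}

# A fractional point in the rank-2 uniform matroid polytope on 4 elements.
x = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
R = sample_R(x)  # may have |R| > 2; the CR scheme must then drop elements
```

Passing an explicit `rng` makes runs reproducible.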
A closely related notion to contention resolution schemes is the correlation gap, originally introduced in [2] and extended to constraint families $I \subseteq 2^N$ in [7]. Formally, the correlation gap of a family $I \subseteq 2^N$ is given by
$$\kappa(I) = \inf_{x \in \operatorname{conv}(I),\, w \ge 0} \frac{\mathbb{E}\big[\max\{w(T) : T \subseteq R(x),\ T \in I\}\big]}{\sum_{i \in N} w_i x_i},$$
where $\operatorname{conv}(I)$ denotes the convex hull of the set $\{1_S : S \in I\}$. This definition is the one given in [7], while the original definition from [2] uses the inverse ratio. The connection between CR schemes and the correlation gap is then summarized by the following result of [7, Theorem 4.6]: the correlation gap of $I$ is equal to the maximum $c$ such that $I$ admits a $c$-balanced (but not necessarily monotone) CR scheme. By presenting a variety of CR schemes for different constraints, the work in [7] gives improved approximation algorithms for linear and submodular maximization problems under matroid, knapsack, and matchoid constraints, as well as their intersections. CR schemes have also been studied for other types of independence families [6,11], or by having the elements of the random set $R(x)$ arrive in an online fashion [3,10,13,1,14,4,12]. In this work, we restrict our attention to matroid constraints and the offline setting (i.e., we know the full set $R(x)$ in advance).
A monotone CR scheme with a balancedness of $1 - (1 - 1/n)^n$ for the uniform matroid of rank one is given in [8,9], where $n$ denotes the size of the ground set. It is also shown that this is optimal: there is no $c$-balanced CR scheme for the uniform matroid of rank one with $c > 1 - (1 - 1/n)^n$. The work of [7] extends this result by proving the existence of a monotone $(1 - (1 - 1/n)^n)$-balanced CR scheme for any matroid. This requires defining an exponential-sized linear program and using its dual. The existence argument can then be turned into an efficient procedure by using random sampling and the ellipsoid algorithm, yielding a CR scheme with a balancedness of $1 - (1 - 1/n)^n - \varepsilon$ that runs in time polynomial in the input size and $1/\varepsilon$, for any fixed $\varepsilon > 0$. Since $1 - (1 - 1/n)^n$ converges to $1 - 1/e$, this corresponds to an efficient asymptotically optimal CR scheme with a balancedness of $1 - 1/e$.
For the uniform matroid of rank $k$ (i.e., cardinality constraints), the work of [18] establishes a correlation gap of $1 - e^{-k}k^k/k!$. Combining this with a reduction from [7] proves the existence of a $(1 - e^{-k}k^k/k!)$-balanced CR scheme, and the asymptotic optimality of this bound. The existence of such a scheme can also be obtained by combining the same reduction from [7] with a result of [5]. However, the main drawback of these approaches is their lack of simplicity.
In the setting where the elements of $R(x)$ arrive in an online adversarial fashion, the work of [3] gives a procedure with a balancedness of $1 - 1/\sqrt{k+3}$ for uniform matroids of rank $k$. It remained unknown whether this bound was tight until the recent work of [12] settled the question: they show that the optimal balancedness in this setting is strictly better than $1 - 1/\sqrt{k+3}$, and strictly worse than $1 - e^{-k}k^k/k!$. In contrast, for the case where the elements arrive in random order, it has recently been shown that the optimal balancedness is $1 - e^{-k}k^k/k!$ [4].

Our contributions
Our main result is to provide a simple, explicit, and optimal monotone CR scheme for the uniform matroid of rank $k$ on $n$ elements, with a balancedness of $c(k, n) := 1 - \binom{n}{k}\left(1-\frac{k}{n}\right)^{n+1-k}\left(\frac{k}{n}\right)^k$. This result is encapsulated in Theorem 2.1 (balancedness), Theorem 2.4 (optimality), and Theorem 2.5 (monotonicity). It generalizes the balancedness factor of $1 - (1 - 1/n)^n$ given in [8,9] for the rank one (i.e., $k = 1$) case. Moreover, for a fixed value of $k$, the quantity $c(k, n)$ converges from above to $1 - e^{-k}k^k/k!$. While it is possible to prove the existence of a $(1 - e^{-k}k^k/k!)$-balanced CR scheme by combining results from [7,18], these require defining an exponential-sized linear program and using its dual; in addition, turning this existence proof into an actual algorithm requires random sampling and the ellipsoid method. The advantage of our CR scheme is thus that it is a very simple and explicit procedure. Moreover, our balancedness is an explicit formula which depends on $n$ (the number of elements) in addition to $k$, and $c(k, n) > 1 - e^{-k}k^k/k!$ for every fixed $n$. We also discuss how the above CR scheme for uniform matroids naturally generalizes to partition matroids.
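The balancedness formula is straightforward to evaluate numerically. The snippet below (our own illustration, not part of the paper) computes $c(k, n)$ and checks it against the rank-one case and the asymptotic value $1 - e^{-k}k^k/k!$:

```python
import math

def c(k: int, n: int) -> float:
    """Balancedness c(k, n) = 1 - C(n, k) (1 - k/n)^(n+1-k) (k/n)^k."""
    return 1 - math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

def c_limit(k: int) -> float:
    """Asymptotic balancedness 1 - e^{-k} k^k / k! as n tends to infinity."""
    return 1 - math.exp(-k) * k ** k / math.factorial(k)

# k = 1 recovers the classical rank-one bound 1 - (1 - 1/n)^n.
print(c(1, 10), 1 - (1 - 1 / 10) ** 10)
# c(k, n) approaches 1 - e^{-k} k^k / k! from above as n grows.
print(c(3, 10_000), c_limit(3))
```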

Preliminaries on matroids
This section provides a brief background on matroids. A matroid $M$ is a pair $(N, I)$ consisting of a ground set $N$ and a non-empty family of independent sets $I \subseteq 2^N$ which satisfy:
• If $A \in I$ and $B \subseteq A$, then $B \in I$.
• If $A \in I$ and $B \in I$ with $|A| > |B|$, then there exists $i \in A \setminus B$ such that $B \cup \{i\} \in I$.
Given a matroid $M = (N, I)$, its rank function $r : 2^N \to \mathbb{R}_{\ge 0}$ is defined as $r(A) = \max\{|S| : S \subseteq A,\ S \in I\}$. Its matroid polytope is given by $P_I = \{x \in \mathbb{R}^N_{\ge 0} : \sum_{i \in A} x_i \le r(A) \text{ for all } A \subseteq N\}$. The next two classes of matroids are of special interest for this work.
Example 1.1 (Uniform matroid). The uniform matroid of rank $k$ on $n$ elements $U^k_n := (N, I)$ is the matroid whose independent sets are all the subsets of the ground set of cardinality at most $k$. That is, $I := \{A \subseteq N : |A| \le k\}$. Its matroid polytope is $P_I = \{x \in [0,1]^N : \sum_{i \in N} x_i \le k\}$.
Example 1.2 (Partition matroid). Partition matroids are a generalization of uniform matroids, where the ground set is partitioned into $k$ blocks $N = D_1 \cup \cdots \cup D_k$ (a disjoint union) and each block $D_i$ has a certain capacity $d_i \in \mathbb{Z}_{\ge 0}$. The independent sets are then defined to be $I := \{A \subseteq N : |A \cap D_i| \le d_i \text{ for every } i \in [k]\}$. The uniform matroid $U^k_n$ is simply a partition matroid with one block $N$ and one capacity $k$. Moreover, the restriction of a partition matroid to each block $D_i$ is a uniform matroid of rank $d_i$ on the ground set $D_i$.
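As a concrete illustration of these definitions (a helper of our own, not from the paper), an independence oracle for a partition matroid only needs to count elements per block:

```python
def is_independent(A, blocks, caps):
    """Check |A ∩ D_i| <= d_i for every block i.

    blocks: list of sets D_1, ..., D_k partitioning the ground set.
    caps:   list of capacities d_1, ..., d_k.
    """
    return all(len(A & D) <= d for D, d in zip(blocks, caps))

# A uniform matroid U_n^k is the special case of one block with capacity k.
blocks, caps = [{0, 1, 2, 3}], [2]
print(is_independent({0, 3}, blocks, caps))     # True: within the bound
print(is_independent({0, 1, 3}, blocks, caps))  # False: three elements > k = 2
```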
2 An optimal monotone contention resolution scheme for uniform matroids

We assume throughout this whole section that $n \ge 2$ and that $k \in \{1, \dots, n-1\}$. We denote by $P_I$ the matroid polytope of $U^k_n$. For any point $x \in P_I$, let $R(x)$ be the random set satisfying $\Pr[i \in R(x)] = x_i$ independently for each coordinate. If the size of $R(x)$ is at most $k$, then $R(x)$ is already an independent set and the CR scheme returns it. If however $|R(x)| > k$, then the CR scheme returns a random subset of $k$ elements by making the probabilities of each subset of $k$ elements depend linearly on the coordinates of the original point $x \in P_I$. More precisely, given an arbitrary $x \in P_I$, for any set $A \subseteq \operatorname{supp}(x)$ with $|A| > k$ and any subset $B \subseteq A$ of size $k$, we define the selection probability $q_A(B)$ in (2.1), where we use the notation $x(A) := \sum_{i \in A} x_i$. We then define a randomized CR scheme $\pi$ for $U^k_n$ as follows.
Algorithm 2.1 (CR scheme $\pi$ for $U^k_n$). We are given a point $x \in P_I$ and a set $A \subseteq \operatorname{supp}(x)$.
The above procedure can be implemented in $O(n^k)$ time in the worst case, and hence gives a polynomial time algorithm for constant values of $k$. In Section 2.6 we discuss an alternative viewpoint of the scheme, which yields a more efficient implementation and a polynomial time algorithm even for non-constant $k$.
We next show that the above CR scheme is well-defined, i.e., that $q_A$ is a valid probability distribution.
Lemma 2.1. The above procedure $\pi$ is a well-defined CR scheme. That is, for all $x \in P_I$ and $A \subseteq \operatorname{supp}(x)$, we have $q_A(B) \ge 0$ and $\sum_{B \subseteq A, |B|=k} q_A(B) = 1$.
Proof. Since the coordinates of $x$ lie in $[0,1]$, it directly follows from the definition (2.1) that $q_A(B) \ge 0$. In order to prove the second claim, we need the equality (2.2), namely $\sum_{B \subseteq A, |B|=k} x(B) = \binom{|A|-1}{k-1}\, x(A)$, which holds because every element of $A$ appears in exactly $\binom{|A|-1}{k-1}$ of the subsets $B \subseteq A$ with $|B| = k$. Summing the definition of $q_A(B)$ over all such $B$ and applying this identity gives $\sum_{B \subseteq A, |B|=k} q_A(B) = 1$.
We now state our main result. Since we use the balancedness expression often throughout this section, we denote it by $c(k, n) := 1 - \binom{n}{k}\left(1-\frac{k}{n}\right)^{n+1-k}\left(\frac{k}{n}\right)^k$.
Theorem 2.1. The CR scheme $\pi$ defined in Algorithm 2.1 is a $c(k, n)$-balanced CR scheme for $U^k_n$.
We note that setting $k = 1$ gives $c(1, n) = 1 - (1 - 1/n)^n$, which matches the optimal balancedness for $U^1_n$ provided in [8,9]. This converges to $1 - 1/e$ when $n$ gets large.
Proposition 2.1. For a fixed $k$, the limit of $c(k, n)$ as $n$ tends to infinity is $1 - e^{-k}k^k/k!$. Moreover, $c(k, n)$ is monotonically decreasing in $n$ for a fixed $k$.
Proof. We use Stirling's approximation, which states that $n! \sim \sqrt{2\pi n}\,(n/e)^n$; that is, the two quantities are asymptotic, meaning their ratio tends to $1$ as $n$ tends to infinity. By (2.3), we obtain an asymptotic expression for $\binom{n}{k}$, and using it leads to the desired limit $1 - e^{-k}k^k/k!$. In order to prove that $c(k, n)$ is monotonically decreasing in $n$, we now show inequality (2.4). By expanding this expression, we obtain a condition involving the function $g(x) := (x/(x-1))^x$. In order for (2.4) to hold, it thus suffices to show that $g(x)$ is monotonically decreasing for $x > 1$. We do that by showing that the derivative of its logarithm is strictly negative,
where the last inequality follows from the fact that $\log(1 + y) < y$ for any $y > 0$. This shows that (2.4) holds and hence that $c(k, n)$ is monotonically decreasing in $n$.
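Proposition 2.1 can be sanity-checked numerically. The snippet below (ours, for illustration) verifies that $c(k, n)$ decreases in $n$ and that $g(x) = (x/(x-1))^x$ decreases for $x > 1$, at least along small integer ranges:

```python
import math

def c(k, n):
    # c(k, n) = 1 - C(n, k) (1 - k/n)^{n+1-k} (k/n)^k
    return 1 - math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

def g(x):
    # g(x) = (x / (x - 1))^x, shown to be decreasing for x > 1 in the proof
    return (x / (x - 1)) ** x

# c(k, n) is decreasing in n for each fixed k ...
assert all(c(k, n) > c(k, n + 1) for k in range(1, 5) for n in range(k + 1, 40))
# ... and g is decreasing, e.g. along the integers (it tends to e).
assert all(g(x) > g(x + 1) for x in range(2, 50))
```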

Outline of the proof of Theorem 2.1
Throughout this whole section on uniform matroids, we fix an arbitrary element $e \in N$. In order to prove Theorem 2.1, we need to show that for every $x \in P_I$ with $x_e > 0$ we have $\Pr[e \in \pi_x(R(x)) \mid e \in R(x)] \ge c(k, n)$. This is equivalent to showing that for every $x \in P_I$ with $x_e > 0$ we have
$$\Pr[e \notin \pi_x(R(x)) \mid e \in R(x)] \le 1 - c(k, n). \qquad (2.5)$$
We now introduce some definitions and notation that will be needed. For any subset $A \subseteq N$, $R(A)$ is the random set obtained by rounding each coordinate of $x|_A$ in the reduced ground set $A$ to one independently with probability $x_i$, and for $A \subseteq S$ we write $p_S(A) := \Pr[R(S) = A] = \prod_{i \in A} x_i \prod_{i \in S \setminus A} (1 - x_i)$. We do not write the dependence on $x \in P_I$ for simplicity of notation. We mainly work on the set $N \setminus \{e\}$. For this reason, we define $S := N \setminus \{e\}$. Note that $|S| = n - 1$; we use this often in our arguments.
With the above notation we can rewrite the probability in (2.5) in a more convenient form: for any $x \in P_I$ satisfying $x_e > 0$, we expand $\Pr[e \notin \pi_x(R(x)) \mid e \in R(x)]$ by conditioning on the realization of $R(S)$. The obtained expression is a multivariable function of the variables $x_1, \dots, x_n$, since $p_S(A)$ and $q_{A \cup e}(B)$ depend on those variables as well. We denote it by $G(x)$.
One then has that for proving Theorem 2.1 it suffices to show the following.
Theorem 2.2. For every $x \in P_I$, we have $G(x) \le 1 - c(k, n)$, with equality holding at the point $x = (k/n, \dots, k/n)$.
Indeed, Theorem 2.2 implies that for every $x \in P_I$ we have $G(x) \le 1 - c(k, n)$, with equality holding if $x = (k/n, \dots, k/n)$. In particular, for any $x \in P_I$ satisfying $x_e > 0$, we get $\Pr[e \notin \pi_x(R(x)) \mid e \in R(x)] = G(x) \le 1 - c(k, n)$, which proves Theorem 2.1 by (2.5).
Notice that for the conditional probability to be well defined, we need the assumption that x e > 0. However, in our case G(x) is simply a multivariable polynomial function of the n variables x 1 , . . . , x n and is thus also defined when x e = 0. We may therefore forget the conditional probability and simply treat Theorem 2.2 as a multivariable maximization problem over a bounded domain. We now state the outline of the proof for Theorem 2.2.
We first maximize $G(x)$ over the variable $x_e$, and get an expression depending only on the $x$-variables in $S$. This is done in Section 2.2. We then maximize the above expression over the unit hypercube $[0,1]^S$ (see Section 2.3). Finally, we combine the above two results to show that the maximum in Theorem 2.2 is attained at the point $x_i = k/n$ for every $i \in N$; this is done in Section 2.4.

Maximizing over the variable x e
The matroid polytope of $U^k_n$ is given by $P_I = \{x \in [0,1]^N : x(N) \le k\}$. We define a new polytope $P_I'$ by removing the constraint $x_e \le 1$ from $P_I$: $P_I' := \{x \in \mathbb{R}^N_{\ge 0} : x(N) \le k,\ x_i \le 1 \text{ for all } i \in S\}$. Clearly, $P_I \subseteq P_I'$. We now present the main result of this section (Lemma 2.2): considering the maximization problem $\max\{G(x) \mid x \in P_I'\}$ and maximizing $G(x)$ over the variable $x_e$, while keeping all the other variables ($x_i$ for every $i \in S$) fixed, yields the upper bound (2.7), which depends only on the $x$-variables in $S$. Moreover, equality in (2.7) holds when $x_e = k - x(S)$.
Proof.
We now maximize this expression with respect to the variable $x_e$ over the relaxed polytope, while keeping all the other variables fixed. Since this is a linear function of $x_e$ and the coefficient of $x_e$ is positive, the maximum is attained at $x_e = k - x(S)$, the largest value allowed by the constraint $x(N) \le k$. Note that this was the reason for defining the relaxed polytope, since $k - x(S)$ might not necessarily be smaller than $1$. We thus plug $x_e = k - x(S)$ into (2.8), and write the result as the inequality (2.9) to emphasize that the derivation holds for any point of the relaxed polytope.
Notice that the only part which depends on $B$ in the last summation is $x(B)$. By using Equation (2.2) and noticing that $\sum_{B \subseteq A, |B|=k} 1 = \binom{|A|}{k}$, we get (2.10). Now, note that by definition of the term $p_S(A)$, we have the identity (2.11). We compute the middle term in (2.10) by plugging in (2.11) and using the change of variable $B := A \cup i$.
We finally plug (2.12) into (2.10) and use $\sum_{A \subseteq S, |A| \ge k} = \sum_{A \subseteq S, |A| \ge k+1} + \sum_{A \subseteq S, |A| = k}$ to obtain the desired bound. Notice that the only place where we used an inequality was from (2.8) to (2.9). Hence equality holds when $x_e = k - x(S)$.

Maximizing over the unit hypercube
In this section, we turn our attention to maximizing the right-hand side expression in (2.7) over the unit hypercube $[0,1]^S$. In fact, we work with the function $h^k_S(x)$ instead, defined as the right-hand side of (2.7) multiplied by $k$; maximizing one or the other is therefore equivalent. A plot of $h^k_S(x)$ for $S = \{1, 2\}$ and $k = 1, 2$ is presented in Figure 1.
Theorem 2.3. Let $n \ge 2$, so that $|S| = n - 1 \ge 1$, and let $k \in \{1, \dots, n-1\}$. Then the function $h^k_S(x)$ attains its maximum over the unit hypercube $[0,1]^S$ at the point $(k/n, \dots, k/n)$, with value $h^k_S(k/n, \dots, k/n) = k\binom{n}{k}\left(1-\frac{k}{n}\right)^{n+1-k}\left(\frac{k}{n}\right)^k$. For simplicity, we denote this maximum value by $\alpha(k, n)$.
Notice that $h^0_S(x) = h^n_S(x) = 0$ for any $x \in [0,1]^S$; hence Theorem 2.3 holds for $k = 0$ and $k = n$ as well. Moreover, the function $h^k_S(x)$ also satisfies an interesting duality property relating $h^k_S$ to $h^{n-k}_S$; in particular, $\alpha(k, n) = \alpha(n - k, n)$. In order to prove Theorem 2.3, we first show that $h^k_S$ has a unique extremum (in particular a local maximum) in the interior of $[0,1]^S$, at the point $(k/n, \dots, k/n)$ (see Proposition 2.2). We then use induction on $n$ to show that any point on the boundary of $[0,1]^S$ has a lower function value than $h^k_S(k/n, \dots, k/n)$. Since our function is continuous over a compact domain, it attains a maximum, which by the two arguments above has to be attained at $(k/n, \dots, k/n)$. That is, the unique extremum cannot be a local minimum or a saddle point: otherwise, since there are no other extrema in the interior and the function is continuous, the function would increase in some direction, leading to a point on the boundary with a higher value. For completeness, we present in the appendix another proof of local maximality that relies on the Hessian matrix. For proving Proposition 2.2 we need the following lemma; we leave its proof to the appendix.
The above formula actually holds for $h^k_A$ with any $A \subseteq N$. We use this in Section 2.5 with $A = N$. We are now able to prove Proposition 2.2.
Proof of Proposition 2.2. We compute the partial derivative $\partial h^k_S / \partial x_i$ by splitting the defining sum into sets $A \subseteq S$ with $i \in A$ and sets $A \subseteq S$ with $i \notin A$, and apply Lemma 2.3 in the last step. From this it follows that $\nabla h^k_S(x) = 0$ if and only if condition (2.13) holds. One family of solutions of (2.13) lies on the boundary of $[0,1]^S$, since it forces an index $j \in S$ with $x_j = 0$ or $x_j = 1$; as we are focusing on extrema in the interior, we may disregard it. Hence, by (2.13), all the $x_i$ are equal, and by setting $x_i = t$ for every $i \in S$ we obtain $t = k/n$. Therefore, $h^k_S(x)$ has a unique extremum in the interior of $[0,1]^S$, at the point $(k/n, \dots, k/n)$.
In order to prove Theorem 2.3, we need one additional lemma; we leave its proof to the appendix.
Proof of Theorem 2.3. We prove the statement by induction on $n \ge 2$. The base case corresponds to $n = 2$ and $k = 1$. In this case, we get $S = \{1\}$ and $h^k_S(x) = x_1(1 - x_1)$. This is a parabola which attains its maximum over the unit interval $[0,1]$ at the point $x_1 = 1/2$, and the function value at that point is $1/4 = \alpha(1, 2)$.
By Proposition 2.2, $h^k_S(x)$ has a unique extremum in the interior of $[0,1]^S$ at the point $(k/n, \dots, k/n)$. We first show that the function $h^k_S(x)$ evaluated at that point is indeed equal to $\alpha(k, n)$.
We next show that any point on the boundary of $[0,1]^S$ has a function value lower than $\alpha(k, n)$. A point $x \in [0,1]^S$ lies on the boundary if there exists $i \in S$ such that $x_i = 0$ or $x_i = 1$.
• Suppose there exists $i \in S$ such that $x_i = 0$. For any set $A \subseteq S$ containing $i$, we get $p_S(A) = 0$, and hence $h^k_S(x) = h^k_{S \setminus i}(x)$. If $k = n - 1$, then $h^k_{S \setminus i}(x) = 0$, and we clearly get $h^k_S(x) = h^k_{S \setminus i}(x) = 0 < \alpha(k, n)$. If $k < n - 1$, then by the induction hypothesis and Lemma 2.4, $h^k_{S \setminus i}(x) \le \alpha(k, n - 1) < \alpha(k, n)$.
• Suppose there exists $i \in S$ such that $x_i = 1$. For any set $A \subseteq S$ not containing $i$, we get $p_S(A) = 0$. In this case as well, the value $h^k_S(x)$ reduces to that of a smaller instance, and by the induction hypothesis and Lemma 2.4 it is strictly smaller than $\alpha(k, n)$.
Since our function is continuous over a compact domain, it attains a maximum. By continuity, together with the facts that $(k/n, \dots, k/n)$ is the unique extremum in the interior and that it has a higher function value than any point on the boundary, it follows that $(k/n, \dots, k/n)$ must be a global maximum. This completes the proof.

Proof of Theorem 2.1
We now have all the ingredients to prove Theorem 2.2 and, therefore, Theorem 2.1. The two main building blocks for the proof are Lemma 2.2 and Theorem 2.3.
Proof of Theorem 2.2. By Lemma 2.2, inequality (2.16) holds for any $x \in P_I$ (since $P_I$ is contained in the relaxed polytope of Section 2.2). Moreover, for every $x \in P_I$ satisfying $x_e = k - x(S)$, equality holds in (2.16). By Theorem 2.3, inequality (2.17) holds for any $x \in P_I$, with equality if $x_i = k/n$ for every $i \in S$. This holds because the right-hand side of (2.7) does not depend on $x_e$, and the projection of the polytope $P_I$ to the $S$ coordinates is included in the unit hypercube $[0,1]^S$. Therefore, by combining (2.16) and (2.17), we get that $G(x) \le 1 - c(k, n)$ for every $x \in P_I$. Moreover, for the point with $x_i = k/n$ for every $i \in N$, equality holds: $G(k/n, \dots, k/n) = 1 - c(k, n)$. Indeed, (2.16) holds with equality because $x_e = k - x(S)$ is satisfied (since $k - x(S) = k - (n-1)k/n = k/n$), and (2.17) also holds with equality because $x_i = k/n$ for every $i \in S$.

Optimality
In this section, we argue that a balancedness of $c(k, n)$ is in fact optimal for $U^k_n$.
Theorem 2.4. There is no $c$-balanced CR scheme for $U^k_n$ with $c > c(k, n)$.
Our bound is more refined than the one given in [18], in the sense that it depends on both $k$ and $n$. The proof uses a similar argument to the one used for $U^1_n$ in [7]. It relies on computing the value $\mathbb{E}[r(R(x))]$, i.e., the expected rank of the random set $R(x)$. However, for values of $k > 1$, the argument becomes more involved than the one presented in [7]. Our proof uses Lemma 2.3.
Proof of Theorem 2.4. Let $\pi$ be an arbitrary $c$-balanced CR scheme for $U^k_n$, and fix the point $x$ with $x_i = k/n$ for every $i \in N$. Clearly, $x \in P_I = \{x \in [0,1]^N : x_1 + \cdots + x_n \le k\}$. Let $R(x)$ be the random set satisfying $\Pr[i \in R(x)] = x_i$ for each $i$ independently, and denote by $I := \pi_x(R(x))$ the set returned by the CR scheme $\pi$. By definition of a CR scheme, we have $\mathbb{E}[|I|] \le \mathbb{E}[r(R(x))]$ and $\mathbb{E}[|I|] = \sum_{i \in N} \Pr[i \in I] \ge \sum_{i \in N} c\, x_i = n \cdot c \cdot \frac{k}{n} = ck$. It follows that $c \le \mathbb{E}[r(R(x))]/k$. Moreover, recall that $r(R(x)) = \min\{|R(x)|, k\}$. We then compute $\mathbb{E}[r(R(x))]$, where the last two equalities in the third line of the computation follow by (2.18) and Corollary 2.1 respectively, and obtain $\mathbb{E}[r(R(x))] = c(k, n) \cdot k$. Combining this with the bound $c \le \mathbb{E}[r(R(x))]/k$ leads to the desired result.
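The key quantity in this argument, $\mathbb{E}[r(R(x))]$ at $x = (k/n, \dots, k/n)$, is simply $\mathbb{E}[\min(X, k)]$ for $X \sim \mathrm{Bin}(n, k/n)$, so the bound can be checked numerically. The snippet below (a sanity check we wrote, not part of the paper) confirms that $\mathbb{E}[r(R(x))]/k$ evaluates exactly to $c(k, n)$:

```python
import math

def c(k, n):
    return 1 - math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

def expected_rank(k, n):
    """E[min(X, k)] for X ~ Binomial(n, k/n): expected rank of R(x) at x_i = k/n."""
    p = k / n
    return sum(min(j, k) * math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(n + 1))

# The upper bound c <= E[r(R(x))]/k is tight: it equals c(k, n).
for k, n in [(1, 2), (2, 5), (3, 7)]:
    print(k, n, expected_rank(k, n) / k, c(k, n))
```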

Marginal viewpoint of CR schemes and efficient implementation
An important object in the design of CR schemes is the set of marginals. Given a CR scheme $\pi$, a vector $x \in P_I$, and a set $A \subseteq \operatorname{supp}(x)$, the CR scheme returns a (potentially random) set $\pi_x(A) \in I$ with $\pi_x(A) \subseteq A$. This defines a probability distribution over subsets of $A$, and hence also a set of marginals. More precisely, the marginals of $\pi_x(A)$ are given by the vector $y^A_x$ with $(y^A_x)_i := \Pr[i \in \pi_x(A)]$ for every $i \in A$ (and zero otherwise). It is immediate that $y^A_x \in P_I$, since $\pi_x(A) \in I$. Marginals are heavily used in [6] to design an optimal CR scheme for bipartite matchings. Our next result provides an explicit formula for the marginals of the CR scheme described in Algorithm 2.1. We postpone its proof to the appendix.
Lemma 2.5. The marginals $y^A_x$ of the CR scheme defined in Algorithm 2.1 admit an explicit closed-form expression.
We now discuss an alternative viewpoint of the scheme provided in Algorithm 2.1, which yields a more efficient implementation and a polynomial time algorithm for any value of $k$. We use the marginal viewpoint of CR schemes described in [6, see Section 2 and Proposition 1]. As pointed out there, while the marginals carry less information than the CR scheme, the balancedness and monotonicity properties can be determined solely from the marginals (see Definition 1.1). In particular, if two CR schemes have the same set of marginals, then they also have the same balancedness, and one scheme is monotone if and only if the other one is. Lemma 2.5 gives an explicit formula for the marginals $y^A_x$ of the CR scheme defined in Algorithm 2.1. Using this, one can design an efficient CR scheme with the same marginals as follows. First decompose $y^A_x \in P_I$ into a convex combination of vertices of the polytope: $y^A_x = \sum_{i \in [m]} \lambda_i 1_{A_i}$ with $A_i \in I$, $\lambda_i \ge 0$ for every $i \in [m]$, and $\sum_{i \in [m]} \lambda_i = 1$. Then, output the set $A_i$ with probability $\lambda_i$. Since the $\lambda_i$'s form a convex combination, the above sampling procedure is a well-defined probability distribution. Moreover, the convex combination $y^A_x = \sum_{i \in [m]} \lambda_i 1_{A_i}$ can be computed efficiently via standard methods; see for instance [17, Corollary 40.4a] or [16, Corollary 14.1f and 14.1g].
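To illustrate that a marginal vector in the uniform matroid polytope is all one needs to sample from, here is a sketch of systematic (pivotal) sampling, a standard technique that produces a random set of size at most $k$ whose marginals are exactly a given $y \in P_I$. This is an illustrative alternative of our own, not the decomposition procedure referenced above:

```python
import math
import random

def systematic_sample(y, k, rng):
    """Sample a set S with |S| <= k and P[i in S] = y[i] exactly, for any y in
    the uniform matroid polytope (0 <= y_i <= 1 and sum(y) <= k).

    Systematic sampling sketch; NOT the paper's scheme, only an illustration
    that prescribed marginals in P_I can be realized by sets of size <= k.
    """
    items = list(y.items())
    # Pad with dummy mass (each dummy of weight <= 1) so the total is exactly k.
    slack = k - sum(w for _, w in items)
    while slack > 1e-12:
        w = min(1.0, slack)
        items.append((None, w))
        slack -= w
    # Drop the k points u, u+1, ..., u+k-1 onto [0, k); element i is selected
    # iff one of them lands in its interval of length y_i.  Since y_i <= 1 and
    # the points are spaced 1 apart, at most one point can land per interval,
    # so P[i selected] = y_i, and at most k real elements are selected.
    u = rng.random()
    picked, cum = set(), 0.0
    for i, w in items:
        lo, hi = cum, cum + w
        cum = hi
        t = math.ceil(lo - u)  # smallest integer t with u + t >= lo
        if u + t < hi and i is not None:
            picked.add(i)
    return picked
```

The sizes are always at most $k$, and the marginals match $y$ exactly in distribution; only the correlations between elements differ from those of a scheme built from an explicit convex decomposition.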

Monotonicity
We next argue that Algorithm 2.1 is a monotone CR scheme. This is a desirable property for CR schemes, since they can then be used to derive approximation guarantees for constrained submodular maximization problems. We need Lemma 2.5, i.e., the marginals of the CR scheme.
Theorem 2.5. Algorithm 2.1 is a monotone CR scheme for $U^k_n$. That is, for every $x \in P_I$ and $e \in A \subseteq B \subseteq \operatorname{supp}(x)$, we have $\Pr[e \in \pi_x(A)] \ge \Pr[e \in \pi_x(B)]$.
Proof. Let $A \subseteq \operatorname{supp}(x)$ and $e \in A$. If $|A| \le k$, then $\Pr[e \in \pi_x(A)] = 1$, and the theorem trivially holds. We therefore suppose that $|A| > k$. In order to prove the theorem, it is clearly enough to show that for any $f \in \operatorname{supp}(x) \setminus A$,
$$\Pr[e \in \pi_x(A)] \ge \Pr[e \in \pi_x(A \cup \{f\})]. \qquad (2.19)$$
We show that the difference of those two terms is nonnegative by using Lemma 2.5 for both terms. The last inequality in the resulting computation holds because the remaining term is nonnegative and all the other terms are positive. We have thus shown (2.19), which is enough to prove the theorem.

Extension to partition matroids
A CR scheme for uniform matroids can be naturally extended to a CR scheme for partition matroids. This is not surprising, since partition matroids can be seen as a direct sum of uniform matroids (see Example 1.2). For completeness, in this section we discuss how the results from Section 2 lead to an optimal CR scheme for partition matroids. This is encapsulated in the following two results: the first shows that running a scheme independently in each block yields a balancedness of $\min_{i \in [k]} \alpha(d_i, |D_i|)$, where $\alpha(d_i, |D_i|)$ denotes the optimal balancedness for the uniform matroid $U^{d_i}_{|D_i|}$, and the second shows that no better balancedness is possible.
Proof. For each $i \in [k]$, let $\pi^i$ be an $\alpha(d_i, |D_i|)$-balanced CR scheme for the uniform matroid $U^{d_i}_{|D_i|}$. Let $P^i$ denote the matroid polytope of the uniform matroid $U^{d_i}_{|D_i|}$, and $P_I$ denote the matroid polytope of the partition matroid $M = (N, I)$. Given any $x \in P_I$, let $x^i \in [0,1]^{D_i}$ denote the restriction of $x$ to $D_i$. Since $x \in P_I$, it is clear that $x^i \in P^i$. Consider the CR scheme $\pi$ defined as follows: $\pi_x(A) = \bigcup_{i \in [k]} \pi^i_{x^i}(A \cap D_i)$. That is, we run the CR schemes $\pi^i$ independently in each block $D_i$ and take the (disjoint) union of their outputs. Let $e \in N$ be such that $e \in D_i$. Then, $\Pr[e \in \pi_x(R(x))] = \Pr[e \in \pi^i_{x^i}(R(x) \cap D_i)] \ge \alpha(d_i, |D_i|) \cdot x_e$. Hence, it follows that $\pi$ is an $\alpha$-balanced CR scheme for $M$, where $\alpha = \min_{i \in [k]} \alpha(d_i, |D_i|)$.
For the monotonicity part, assume that the CR schemes $\pi^i$ defined above are all monotone. Let $x \in P_I$, $e \in A \subseteq B \subseteq \operatorname{supp}(x)$, and let $i \in [k]$ be the unique index such that $e \in D_i$. Then, $\Pr[e \in \pi_x(A)] = \Pr[e \in \pi^i_{x^i}(A \cap D_i)] \ge \Pr[e \in \pi^i_{x^i}(B \cap D_i)] = \Pr[e \in \pi_x(B)]$, where the inequality uses the monotonicity of $\pi^i$ together with $A \cap D_i \subseteq B \cap D_i$. Hence $\pi$ is also monotone.
Proof. Assume that a $c$-balanced CR scheme $\pi$ for $M$ exists with $c > \min_{i \in [k]} \alpha(d_i, |D_i|)$, and let $j \in \operatorname{argmin}_{i \in [k]} \alpha(d_i, |D_i|)$. Let $P$ denote the matroid polytope of the uniform matroid $U^{d_j}_{|D_j|}$, and let $P_I$ denote the matroid polytope of the partition matroid $M = (N, I)$. For any $\tilde{x} \in P$, let $x \in [0,1]^N$ be defined as $x_e = \tilde{x}_e$ if $e \in D_j$, and $x_e = 0$ otherwise. Clearly, $x \in P_I$ since $\tilde{x} \in P$. Restricting $\pi$ to the block $D_j$ then yields a CR scheme for $U^{d_j}_{|D_j|}$ satisfying $\Pr[e \in \pi_x(R(x)) \mid e \in R(x)] \ge c > \alpha(d_j, |D_j|)$ for every $e \in D_j$. But this contradicts the fact that there is no CR scheme for the uniform matroid $U^{d_j}_{|D_j|}$ with balancedness greater than $\alpha(d_j, |D_j|)$.
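The direct-sum construction used in this proof is easy to express in code. Below is a sketch of ours: `compose_partition_scheme` runs one scheme per block and unions the outputs. The placeholder block scheme shown is deliberately naive (it keeps the $d$ elements of largest $x$-value); it produces independent sets but is not the paper's balanced scheme.

```python
def compose_partition_scheme(block_schemes):
    """Combine per-block CR schemes into one for the partition matroid.

    block_schemes: list of (D_i, scheme_i) pairs, where scheme_i(x_i, A_i)
    returns an independent subset of A_i for the uniform matroid on block D_i.
    The combined scheme runs each scheme_i on A ∩ D_i and unions the results.
    """
    def pi(x, A):
        out = set()
        for D, scheme in block_schemes:
            restricted = {i: x[i] for i in D}
            out |= scheme(restricted, A & D)
        return out
    return pi

# Placeholder block scheme (NOT the paper's): keep the d largest-x elements.
def truncate_scheme(d):
    return lambda x, A: set(sorted(A, key=lambda i: -x[i])[:d])

pi = compose_partition_scheme([({0, 1, 2}, truncate_scheme(1)),
                               ({3, 4}, truncate_scheme(2))])
x = {0: 0.3, 1: 0.5, 2: 0.2, 3: 0.9, 4: 0.4}
print(pi(x, {0, 1, 2, 3}))
```

Replacing `truncate_scheme` with a balanced scheme for each block gives exactly the construction of the proof: the combined balancedness is the minimum over the blocks.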

Conclusion
Contention resolution schemes are a general and powerful tool for rounding a fractional point in a relaxation polytope. It is known that matroids admit $(1 - 1/e)$-balanced CR schemes, and that this is the best possible. This impossibility result is in particular true for uniform matroids of rank one. For uniform matroids of rank $k$ (i.e., cardinality constraints), one can get a $(1 - e^{-k}k^k/k!)$-balanced CR scheme by combining a reduction from [7] and a result from [18]. The main drawback of this approach, however, is its lack of simplicity. In this work, we provide an explicit and much simpler scheme with a balancedness of $c(k, n) := 1 - \binom{n}{k}\left(1-\frac{k}{n}\right)^{n+1-k}\left(\frac{k}{n}\right)^k$. In particular, $c(k, n) > 1 - e^{-k}k^k/k!$ for every $n$, and $c(k, n)$ converges to $1 - e^{-k}k^k/k!$ as $n$ goes to infinity. Our balancedness is therefore better for every fixed $n$, and achieves $1 - e^{-k}k^k/k!$ asymptotically. We also show optimality and monotonicity of our scheme, and discuss how it naturally extends to an optimal CR scheme for partition matroids. We believe that finding other classes of matroids where the $1 - 1/e$ balancedness factor can be improved is an interesting direction for future work. Moreover, while this work focused on the offline setting, this question can also be studied in the context where the elements of $R(x)$ arrive in an online fashion (e.g., in the case of random or online contention resolution schemes). Finally, it would also be interesting to see whether a simpler proof of the optimality of our algorithm can be obtained.
We can rewrite this recursive formula in telescoping form; by summing both sides from $0$ to $k - 1$ and noticing that $h^0_S(x) = 0$, we get the desired result.

Proof of Lemma 2.4
Proof of Lemma 2.4. First, notice that the function $g(x) := \left(\frac{x-1}{x}\right)^x$ is strictly increasing for $x \ge 1$.
• If $\mu = 1$, the corresponding eigenspace is $\{v \in \mathbb{R}^{n-1} \mid e^T v = 0\}$. This eigenspace is a hyperplane of dimension $n - 2$, which means that there exist $n - 2$ linearly independent eigenvectors corresponding to the eigenvalue $\mu = 1$.
Hence, the spectrum of $A$ is equal to $\{1, n\}$, where the multiplicity of the eigenvalue $1$ is $n - 2$ and the multiplicity of the eigenvalue $n$ is $1$. We have therefore shown that $A$ is positive definite, which, by (A.8), implies that $H(k/n, \dots, k/n)$ is negative definite and concludes the proof.