Equitable orientations of sparse uniform hypergraphs

Caro, West, and Yuster studied how $r$-uniform hypergraphs can be oriented in such a way that (generalizations of) indegree and outdegree are as close to each other as can be hoped. They conjectured that such orientations exist for sparse hypergraphs, and we present a proof of this conjecture.


Introduction
In [1], Caro, West, and Yuster presented a generalization to hypergraphs of the notion of orientation defined for graphs. Their stated purpose is to study how hypergraphs can be oriented in such a way that minimum and maximum degree are close to each other, knowing that an additive difference of 1 is always achievable in the case of graphs. Identifying an orientation of an edge with a total ordering of its elements, they define a notion of degree on oriented $r$-uniform hypergraphs.

Definition 1. Let $H$ be an $r$-uniform hypergraph, and let every $S \in H$ define a total order on its elements as a bijection $\sigma_S : S \to [r]$. The degree $d_P(U)$ of a set of vertices $U \subseteq V(H)$ with respect to a set of positions $P \subseteq [r]$ (where $|U| = |P|$) is equal to:
$$d_P(U) = |\{S \in H : U \subseteq S \text{ and } \sigma_S(U) = P\}|.$$

From there they define equitable orientations:

Definition 2. The orientation of an $r$-uniform hypergraph $H$ is said to be $p$-equitable if $|d_P(U) - d_{P'}(U)| \le 1$ for any choice of $U \subseteq V(H)$ and $P, P' \subseteq [r]$ of cardinality $p$. It is said to be nearly $p$-equitable if the looser requirement $|d_P(U) - d_{P'}(U)| \le 2$ holds.
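To make Definition 1 concrete, here is a minimal Python sketch. The encoding is an illustrative assumption, not the paper's notation: an oriented edge is stored as a tuple whose entry at index $i-1$ is the vertex that $\sigma_S$ sends to position $i$, and $d_P(U)$ counts the oriented edges placing exactly the vertex set $U$ at the position set $P$.

```python
def degree(oriented_edges, U, P):
    """d_P(U): number of oriented edges S with U contained in S and sigma_S(U) = P.

    oriented_edges: list of tuples; entry i-1 of a tuple is the vertex
    that sigma_S sends to position i (a toy encoding of sigma_S).
    """
    U, P = set(U), set(P)
    return sum(1 for S in oriented_edges
               if U <= set(S) and {S[i - 1] for i in P} == U)

# A 1-equitable toy orientation of two 3-edges: every vertex occupies
# each position at most once among the edges containing it.
edges = [(1, 2, 3), (2, 1, 4)]
```

On this toy orientation, every degree $d_P(U)$ with $|U| = |P| = 1$ is 0 or 1, which is the sparse regime discussed below.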
They proved that every hypergraph admits a 1-equitable as well as an $(r-1)$-equitable orientation, but also that some hypergraphs do not admit a $p$-equitable orientation for other values of $p$. Additionally, they parameterized the notion of maximum degree in order to focus on hypergraphs which are sparse with respect to the problem at hand: $\Delta_p(H)$ denotes the maximum number of edges of $H$ containing a common set of $p$ vertices. They then proved that for any fixed values of $p$ and $k$, and for every sufficiently large integer $r$, every $r$-uniform hypergraph $H$ with $\Delta_p(H) \le k$ admits a nearly $p$-equitable orientation. They conjectured that this setting actually ensures the existence of a $p$-equitable orientation, which we prove here.
Theorem 3. Let $p, k$ be fixed integers. There exists $r_0$ such that for every $r \ge r_0$, every $r$-uniform hypergraph $H$ with $\Delta_p(H) \le k$ admits a $p$-equitable orientation.
Note that, as $r$ is large compared to $\Delta_p(H)$, a $p$-equitable orientation means that $d_P(U)$ is equal to 0 or 1 for every choice of a set of positions $P$ and a set of vertices $U$.
In order to prove the existence of nearly $p$-equitable orientations, Caro, West, and Yuster [1] used the Lovász Local Lemma. In [3], Moser and Tardos presented an elegant algorithmic proof of it which developed the technique of entropy compression. Our proof uses that technique together with the following lemma (proved in Section 3), which counts what can be seen as a generalization of derangements.
Lemma 4. Let $p, k \in \mathbb{N}$ and $\alpha < 1$ be fixed. Let $X$ be a set of cardinality $r$ and let $L_S$ be, for every $S \in \binom{X}{p}$, a collection of $p$-subsets of $X$ with $|L_S| \le k$. Then, if no $p$-subset occurs in more than $r^\alpha$ of the $L_S$, a random permutation $\sigma$ of $X$ satisfies $\sigma(S) \notin L_S$ for every $S$ with probability at least $(1 - 2k/\binom{r}{p})^{\binom{r}{p}} = e^{-2k} + o(1)$ when $r$ grows large.
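For $p = 1$, $k = 1$ and $L_S = \{S\}$ for every singleton $S$, the event that $\sigma(S) \notin L_S$ for all $S$ says exactly that $\sigma$ is a derangement, whose probability tends to $1/e$; this sits above the lemma's lower bound $e^{-2} + o(1)$, as it should. A quick sanity check, via the classical inclusion-exclusion count of derangements (a standalone sketch, not part of the proof):

```python
from math import exp, factorial

def derangements(n):
    # inclusion-exclusion: D(n) = n! * sum_{i=0}^{n} (-1)^i / i!
    return sum((-1) ** i * factorial(n) // factorial(i) for i in range(n + 1))

# probability that a random permutation of 12 points has no fixed point;
# already within 1/13! of 1/e
prob = derangements(12) / factorial(12)
```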

Algorithm
In what follows, we assume that every finite set $S$ has an implicit enumeration of its elements, and in particular that the edges of a hypergraph $H$ are implicitly ordered. We will say that $i$ represents an element $s \in S$ when $s$ is the $i$-th element of $S$ in this implicit ordering. We will orient the edges of $H$ one by one, maintaining a partial $p$-equitable orientation of $H$, i.e. one in which no $p$-subset of $V(H)$ appears more than once at the same set of positions among the oriented edges. To do so, we require the partial orientation to enforce an additional property.
Definition 5. Let $H$ be a partially oriented $r$-uniform hypergraph. We say that an edge $S \in H$ is pressured by a family $\{S_1, \ldots, S_l\}$ of edges (oriented by $\sigma_{S_1}, \ldots, \sigma_{S_l}$) if there exists $P \in \binom{[r]}{p}$ such that $\sigma_{S_i}^{-1}(P) \subseteq S$ for every $i$.

Note that Lemma 11 ensures that a partial orientation of $H$ can be extended to an unoriented edge $S$, provided that no family of more than $r^\alpha$ oriented edges pressures $S$. It asserts, for $c < e^{-2k}$ and $r$ sufficiently large, that at least $cr!$ orientations of $S$ are admissible for this extension: we name them good permutations of $S$. Algorithm 1 selects an ordering randomly among them, while ensuring that no other edge is pressured by a family of edges larger than $r_1 = r^\alpha$.

Algorithm 1 starts with every edge unoriented. At each step it orients the unoriented edge of smallest index by choosing a random permutation amongst the $cr!$ first good permutations. We call bad event the event that an edge $S \in H$ is pressured by a family $\{S_1, \ldots, S_{r_1}\}$ of cardinality $r_1$. If a bad event occurs after orienting $S_1$, then the algorithm erases the orientations of $S_1, \ldots, S_{r_1}$.
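The pressure condition of Definition 5 can be checked directly from the oriented edges. The sketch below is illustrative (the encoding is hypothetical: an oriented edge is a tuple whose entry at index $i$ is the vertex sent to position $i+1$); it returns, over all position sets $P$ of size $p$, the largest family of oriented edges $S_i$ satisfying $\sigma_{S_i}^{-1}(P) \subseteq S$:

```python
from itertools import combinations

def largest_pressuring_family(S, oriented, p):
    """Largest family of oriented edges pressuring S through a common
    position set P of size p (Definition 5, in a toy encoding where an
    oriented edge is a tuple whose index-i entry is the vertex at
    position i+1)."""
    if not oriented:
        return []
    S, r = set(S), len(oriented[0])
    best = []
    for P in combinations(range(r), p):
        # oriented edges whose vertices at the positions of P lie inside S
        fam = [T for T in oriented if {T[i] for i in P} <= S]
        if len(fam) > len(best):
            best = fam
    return best
```

In Algorithm 1 this quantity is what must stay below $r_1$ for every edge; a bad event corresponds to it reaching $r_1$.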

It is easy to see that Algorithm 1 only returns $p$-equitable orientations of $H$. Moreover, every time the algorithm chooses a random permutation, it does so among at least $cr!$ good ones by Lemma 11. Note that we need to consider large families pressuring already oriented edges: indeed, we might have to cancel the orientation of such an edge and redefine it later.
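A toy version of Algorithm 1 can make the orient-and-check loop concrete. Everything below is an illustrative assumption: rejection sampling stands in for the paper's enumeration of the first $cr!$ good permutations, and the erase step is omitted because on an input this small and sparse no edge can become pressured. Edges are oriented in index order, each time drawing uniformly among the permutations that keep the partial orientation $p$-equitable:

```python
import random
from itertools import combinations, permutations

def edge_keys(T, p):
    """(position set, vertex set) pairs realized by the oriented edge T
    (T is a tuple: index i holds the vertex sent to position i+1)."""
    for P in combinations(range(len(T)), p):
        yield (frozenset(P), frozenset(T[i] for i in P))

def orient(edges, p, seed=0):
    """Toy Algorithm 1: orient each edge with a random 'good' permutation,
    i.e. one introducing no (U, P) pair already realized, so that every
    degree d_P(U) stays at most 1."""
    rng = random.Random(seed)
    oriented, used = [], set()
    for S in edges:
        good = [T for T in permutations(S)
                if used.isdisjoint(edge_keys(T, p))]
        T = rng.choice(good)   # raises if no good permutation exists
        oriented.append(T)
        used.update(edge_keys(T, p))
    return oriented
```

Running it on two overlapping 3-edges with $p = 1$ produces an orientation in which no vertex occupies the same position twice, i.e. a $p$-equitable orientation in the sparse regime.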
Theorem 6. Let $p, k \in \mathbb{N}$ and $\alpha, c \in \mathbb{R}_{>0}$ with $\alpha < 1$ and $c < e^{-2k}$. For every sufficiently large $r$, there is a set of random choices for which Algorithm 1 terminates.
In order to prove this result we will analyse the possible executions of the first $M$ steps of Algorithm 1. To this end we make it deterministic by defining a log (following the idea of [3]) and obtain Algorithm 2, in the following way:
• Take as input a vector $v \in [cr!]^M$ which simulates the random choices.
• Output a log when it is not able to orient all edges.
We define a log of order $M$ to be a triple $(R, X, F)$ where:
• $R$ is a binary word whose length lies between $M$ and $2M$;
• $X$ is a list of 7-tuples of integers;
• $F$ is an integer encoding a partial orientation of $H$.
The log of order $M$ (or just log) is actually a trace of the deterministic algorithm's execution after $M$ steps. Its objective is to encode which orientations get canceled during the algorithm's execution. We will show later that Algorithm 2 cannot produce the same log from two different input vectors $v, v' \in [cr!]^M$, and that, for $M$ big enough, the set of possible logs is smaller than $(cr!)^M$. We now describe the log and how Algorithm 2 produces it.
• R is initialized to the empty word. We append 1 to R whenever Algorithm 2 adds a new orientation; we append 0 whenever it cancels one.
• Consider the following bad event: after orienting $S_1$, an edge $S \in H$ is pressured by a family $\{S_1, \ldots, S_{r_1}\}$ of cardinality $r_1$. We denote by $s_i$ the set of vertices that $\sigma_{S_i}$ maps to $P$, i.e. $s_i = \sigma_{S_i}^{-1}(P)$. We associate to the bad event the following 7-tuple, which identifies the sets $S_i$ as well as their orientations:
– $x_1 < \binom{r}{p}$ represents the set $s_1$ among the $\binom{r}{p}$ possible subsets of size $p$ of $S_1$;
– $x_2 < k$ identifies $S$ as one of the (at most $k$) edges containing $s_1$;
– $x_3 < \binom{r}{p}^{r_1-1}$ is an integer representing the sets $s_2, \ldots, s_{r_1}$ amongst the $\binom{r}{p}$ subsets of size $p$ of $S$;
– $x_4 < k^{r_1-1}$ is an integer representing the sequence $(y_2, \ldots, y_{r_1}) \in [k]^{r_1-1}$ such that the $y_l$-th edge containing $s_l$ is $S_l$;
– $x_5 < p!^{r_1-1}$ is an integer representing the sequence $(p_2, \ldots, p_{r_1})$, where $p_i \in [p!]$ represents the restriction of $\sigma_{S_i}$ to $s_i$ (which we know is a bijection onto $P$);
– $x_6 < (r-p)!^{r_1-1}$ is an integer representing the restrictions of the $\sigma_{S_i}$ to $S_i \setminus s_i$;
– $x_7 < r!$ is the integer representing the permutation chosen for $S_1$.
$X$ is the list of the 7-tuples describing the bad events, in the order in which they happen.
• $F$ is the integer representing the partial orientation of $H$ (i.e. a choice among $r! + 1$ per edge of $H$) after $M$ steps.
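Each of $x_3, x_4, x_5, x_6$ packs a sequence of $r_1 - 1$ bounded integers into a single integer below the product of the bounds. A hypothetical fixed-radix encoder illustrating this packing, together with the decoding used implicitly when the log is read back in the proof of Claim 7:

```python
def encode(seq, base):
    """Pack a sequence of digits in range(base) into one integer < base**len(seq)."""
    n = 0
    for d in seq:
        n = n * base + d
    return n

def decode(n, base, length):
    """Inverse of encode: recover the digit sequence from the packed integer."""
    digits = []
    for _ in range(length):
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1]
```

For instance $x_4$ corresponds to `encode` with `base = k` applied to the sequence $(y_2, \ldots, y_{r_1})$ (shifted to start at 0), and reconstruction applies `decode`.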
the electronic journal of combinatorics 22 (2015), #P00

This yields Algorithm 2. We will show the following claim.
Claim 7. Let $e$ be a vector in $[cr!]^M$ from which Algorithm 2 cannot produce a $p$-equitable orientation of $H$ and outputs a log $(R, X, F)$. We can reconstruct $e$ from $(R, X, F)$.
Proof of the claim. First we show that we can find, for every $z \le M$, the set $C(z)$ of edges which are oriented after $z$ steps. We proceed by induction on $z$, starting from $C(0) = \emptyset$. At step $z + 1$, Algorithm 2 chooses an orientation for the edge of smallest index $i$ not in $C(z)$. If, in $R$, the $(z+1)$-th 1 is not followed by a 0, then no bad event is triggered by this step; in this case $C(z + 1) = C(z) \cup \{i\}$. Suppose now that the $(z+1)$-th 1 is followed by a sequence of 0s: this means that the algorithm encountered a bad event.
By looking at the number of maximal sequences of 0s in $R$ before the $(z+1)$-th 1, we can deduce the number of bad events before this one. This means we can find, in $X$, the 7-tuple $(x_1, x_2, x_3, x_4, x_5, x_6, x_7)$ associated to this bad event. We keep the notation of the bad event: after orienting $S_1$, an edge $S$ of $H$ is pressured by a family $\{S_1, \ldots, S_{r_1}\}$ of cardinality $r_1$, and $s_i$ is the subset of $S_i$ sent to $P$. Here $S_1$ is the last edge we oriented (known by induction), $x_1$ indicates $s_1$ amongst the $p$-subsets of $S_1$, $x_2$ indicates $S$ amongst the set of edges containing $s_1$, and $x_3$ and $x_4$ then identify the sets $s_2, \ldots, s_{r_1}$ and the edges $S_2, \ldots, S_{r_1}$; hence $C(z+1)$ is obtained from $C(z) \cup \{i\}$ by removing $S_1, \ldots, S_{r_1}$.

We can now deduce the set $S(z)$ of all chosen orientations after $z$ steps. We also proceed by induction, this time starting from step $M$. By construction, $F$ is exactly the integer representing the partial orientation of $H$ at step $M$. If the last letter of $R$ is a 1, the last step of the algorithm consisted only of the choice of an orientation; we just showed that we know which edge was oriented at step $M$, so we can deduce the state of all orientations after $M - 1$ steps. If the last letter is a 0, Algorithm 2 encountered a bad event. Keeping the notation of the bad event, let $(x_1, x_2, x_3, x_4, x_5, x_6, x_7)$ be the 7-tuple associated to this bad event. As before, $x_1, x_2, x_3, x_4$ and the knowledge of $C(M - 1)$ allow us to know which permutations Algorithm 2 erased at this step. Moreover $x_7$ tells us the random choice made by Algorithm 2, and from $x_7$ and $x_1$ we can deduce $P$. For each $s_i$ we know that the orientation chosen for $S_i$ at step $M - 1$ sends $s_i$ onto $P$; from $x_5$ we deduce exactly in which order, and from $x_6$ we get the rest of the orientation. Therefore we can deduce the set of chosen orientations before the bad event occurred. With the sets $S(z)$ and $C(z)$ known for all $z \le M$ we can easily deduce $e$.
The previous claim has the following corollary.

Corollary 8. For $M$ big enough, the number $|L_M|$ of logs of order $M$ is smaller than $(cr!)^M$.

Proof. We compute a bound for $|L_M|$. $R$ is a binary word of length at most $2M$, and there are at most $4^M$ such words. $X$ is a list of 7-tuples; as Algorithm 2 made $M$ choices and each bad event removes $r_1$ of those, there are at most $M/r_1$ bad events. Moreover, for each 7-tuple $(x_1, x_2, x_3, x_4, x_5, x_6, x_7)$ we have $x_1 \le \binom{r}{p}$, $x_2 \le k$, $x_3 \le \binom{r}{p}^{r_1-1}$, $x_4 \le k^{r_1-1}$, $x_5 \le p!^{r_1-1}$, $x_6 \le (r-p)!^{r_1-1}$, $x_7 \le r!$. Using the bounds $\binom{n}{k} \le \left(\frac{ne}{k}\right)^k$ and $\binom{n}{k} \le n^k$ we obtain the required bound on $|L_M|$.

Derangements
The results of this section are based on a lemma from Erdős and Spencer [2].

Lemma 10 (Lopsided Lovász Local Lemma). Let $A_1, \ldots, A_m$ be events in a probability space, each with probability at most $p$, and let $G$ be a graph of maximum degree $d$ defined on those events, such that for every $A_i$, and for every set $S$ of events avoiding both $A_i$ and its neighbours, the following relation holds:
$$P\Big(A_i \,\Big|\, \bigcap_{A_j \in S} \overline{A_j}\Big) \le p.$$
Then if $4dp \le 1$, all the events can be avoided simultaneously: $P\big(\bigcap_i \overline{A_i}\big) > 0$.

Thanks to this result we can prove the following, which can be seen as a generalization of the fact that a random permutation of $n$ points is a derangement with asymptotic probability $1/e$.

Lemma 11. Let $p, k \in \mathbb{N}$ and $\alpha < 1$ be fixed. Let $X$ be a set of cardinality $r$ and let $L_S$ be, for every $S \in \binom{X}{p}$, a collection of $p$-subsets of $X$ with $|L_S| \le k$. Then, if no $p$-subset occurs in more than $r^\alpha$ of the $L_S$, a random permutation $\sigma$ of $X$ satisfies $\sigma(S) \notin L_S$ for every $S$ with probability at least $(1 - 2k/\binom{r}{p})^{\binom{r}{p}} = e^{-2k} + o(1)$ when $r$ grows large.
Proof. For every $S \in \binom{X}{p}$, we define the bad event $B_S$ that $\sigma(S) \in L_S$. Each $B_S$ has probability $P[B_S] \le k/\binom{r}{p}$. On these bad events we define a lopsidependency graph (see [2]) $G_B$, in which $B_S$ and $B_{S'}$ are adjacent whenever $S \cap S' \neq \emptyset$ or some set of $L_S$ intersects some set of $L_{S'}$. As a $p$-subset of $X$ intersects at most $O(r^{p-1})$ others, and noting that every $p$-subset can occur in at most $r^\alpha$ of the $L_{S'}$, the maximum degree $d$ of $G_B$ satisfies $d = o(r^p)$.

In order to apply the Lopsided Lovász Local Lemma to the events $B_S$ and the graph $G_B$, we must ensure for every $S \in \binom{X}{p}$ and every $S_B \subseteq V(G_B) \setminus N_{G_B}[B_S]$ that:
$$P\Big(B_S \,\Big|\, \bigcap_{B_{S'} \in S_B} \overline{B_{S'}}\Big) \le \frac{2k}{\binom{r}{p}}. \qquad (1)$$
Indeed, denote by $T$ (for trace) the number of elements of $\bigcup_{B_{S'} \in S_B} S'$ sent by the random permutation $\sigma$ into the sets of $L_S$. As $L_S$ is disjoint from the $L_{S'}$ for all $B_{S'} \in S_B$, conditioning on $T$ yields:
$$P\Big(B_S \,\Big|\, \bigcap_{B_{S'} \in S_B} \overline{B_{S'}}\Big) = \sum_t P(B_S \mid T = t)\, P\Big(T = t \,\Big|\, \bigcap_{B_{S'} \in S_B} \overline{B_{S'}}\Big).$$
In order to prove (1), we will first need the following observation.

Claim 12. $P(B_S \mid T = t)$ is a decreasing function of $t$.
Proof of the claim. We compute the value of $P(B_S \mid T = t)$ exactly, denoting by $r' \le r$ the cardinality of $\bigcup_{B_{S'} \in S_B} S'$. It is equal to 0 when $t > r - p$, and is otherwise given by an explicit ratio, which is decreasing in $t$.

Additionally, we will prove a relationship between the terms of $\sum_t P(T = t)$ and those of $\sum_t P(T = t \mid \bigcap_{B_{S'} \in S_B} \overline{B_{S'}})$, which both sum to 1.
Proof of the claim. According to Bayes' Theorem applied to the right-hand side of the equation, we only need to ensure the following, which is a consequence of Lemma 14:

We are now ready to prove (1). For every $t$ where $P(T = t)$ is nonzero, we define
$$d_t = P(T = t) - P\Big(T = t \,\Big|\, \bigcap_{B_{S'} \in S_B} \overline{B_{S'}}\Big).$$
Because the $d_t$ form a difference of probability distributions, the sum $\sum_t d_t$ is null, and we can rewrite (1) using the $d_t$. We will thus prove that the sum $\sum_t d_t P(B_S \mid T = t)$ is nonnegative. It is a consequence of Claim 13 that all nonnegative values of $d_t$ appear before all nonpositive ones, and so that there is a $t_0$ such that $d_t \ge 0$ iff $t \le t_0$. As a result, $\big|\sum_{t \le t_0} d_t\big| = \big|\sum_{t > t_0} d_t\big| = \frac{1}{2}\sum_t |d_t|$ and we can write:

The second hypothesis of Lemma 10 is that $4dp \le 1$, which translates in our case to $4 \cdot \frac{2k}{\binom{r}{p}} \cdot o(r^p) = o(1)$ and is thus satisfied when $r$ grows large. Hence, we have that:
$$P\Big(\bigcap_{S \in \binom{X}{p}} \overline{B_S}\Big) \ge \Big(1 - \frac{2k}{\binom{r}{p}}\Big)^{\binom{r}{p}} = e^{-2k} + o(1).$$

Proof. We implicitly assume in this proof that the conditioning event has a nonzero probability for $t$ and $t + 1$. Let $S_1, S_2$ be two sets of cardinality $|A'|$ with symmetric difference $S_1 \,\triangle\, S_2 = \{x, y\}$, where $x \in S_2$ is an element of $B \setminus B'$. Let $\sigma_{xy}$ be the permutation transposing $x$ and $y$. Then,

We are now ready to derive the result: using our previous remark, we find an upper bound on the last term of the equation by averaging it over the sets $S'$ obtained from $S$ by the exchange of two elements.