1 Introduction

Many security definitions come in two flavors: a stronger “adaptive” flavor, where the adversary can make various choices arbitrarily during the course of the attack, and a weaker “selective” flavor, where the adversary must commit to some or all of his choices a priori. For example, in the context of identity-based encryption, selective security requires the adversary to decide on the identity of the attacked party at the very beginning of the game, whereas adaptive security allows the attacker to first see the master public key and some secret keys before making this choice. Often, it appears to be much easier to achieve selective security than adaptive security.

A series of recent works achieves adaptive security in several such scenarios where we previously only knew how to achieve selective security: generalized selective decryption (GSD) [8, 23], constrained PRFs [9], and garbled circuits [16]. Although some of these works suggest a vague intuition that there is a general technique at play, there was no attempt to make this precise and to crystallize what the technique is or how these results are connected. In this work we present a new framework that connects all of these works and allows us to present them in a unified and simplified fashion. Moreover, we use the framework to derive a new result for adaptively secure secret sharing over access structures defined via monotone circuits.

At a high level, our framework carefully combines two basic tools commonly used throughout cryptography: random guessing (of the adaptive choices to be made by the adversary) and the hybrid argument. Firstly, “random guessing” gives us a generic way to qualitatively upgrade selective security to adaptive security at a quantitative cost in the amount of security. In particular, assume we can prove the security of a selective game where the adversary commits to n bits of information about his future choices. Then, we can also prove adaptive security by guessing this commitment and taking a factor of \(2^n\) loss in the security advantage. However, this quantitative loss is often too high and hence we usually wish to avoid it or at least lower it. Secondly, the hybrid argument allows us to prove the indistinguishability of two games \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) by defining a sequence of hybrid games \(\mathsf{G}_\mathsf{L}\equiv \mathsf{H}_0, \mathsf{H}_1,\ldots ,\mathsf{H}_\ell \equiv \mathsf{G}_\mathsf{R}\) and showing that each pair of neighboring hybrids \(\mathsf{H}_i\) and \(\mathsf{H}_{i+1}\) is indistinguishable.

Our Framework. Our framework starts with two adaptive games \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) that we wish to show indistinguishable, but we don’t initially have any direct way of doing so. Let \(\mathsf{H}_\mathsf{L}\) and \(\mathsf{H}_\mathsf{R}\) be selective versions of the two games respectively, where the adversary initially has to commit to some information \(w \in \{0,1\}^n\) about his future choices. Furthermore, assume there is some sequence of selective hybrids \(\mathsf{H}_\mathsf{L}= \mathsf{H}_0, \mathsf{H}_1, \ldots , \mathsf{H}_\ell = \mathsf{H}_\mathsf{R}\) such that we can show that \(\mathsf{H}_i\) and \(\mathsf{H}_{i+1}\) are indistinguishable. A naïve combination of the hybrid argument and random guessing shows that \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) are indistinguishable with a factor of \(2^n\cdot \ell \) loss in security, but we want to do better.

Recall that the hybrids \(\mathsf{H}_i\) are selective and require the adversary to commit to w. However, it might be the case that for each i we can prove that \(\mathsf{H}_i\) and \(\mathsf{H}_{i+1}\) would be indistinguishable even if the adversary didn’t have to commit to all of w but only to some partial information \(h_i(w) \in \{0,1\}^m\) for \(m \ll n\) (formalizing this condition precisely requires great care and is the major source of subtlety in our framework). Notice that the partial information that we need to know about w may be completely different for different pairs of hybrids, and if we look across all hybrids then we may need to know all of w. Nevertheless, we prove that this suffices to show that the adaptive games \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) are indistinguishable with only a \(2^m\cdot \ell \ll 2^n\cdot \ell \) loss of security.

Applications of Our Framework. We show how to understand all of the prior works mentioned above as applications of our framework. In many cases, this vastly simplifies prior works. We also use the framework to derive a new result, proving the adaptive security of Yao’s secret sharing scheme for access structures defined via monotone circuits.

In all of the examples, we get a series of selective hybrids \(\mathsf{H}_1,\ldots ,\mathsf{H}_\ell \) that correspond to pebbling configurations in some graph pebbling game. The amount of information needed to show that neighboring hybrids \(\mathsf{H}_i\) and \(\mathsf{H}_{i+1}\) are indistinguishable only depends on the configuration of the pebbles in the i’th step of the game. Therefore, using our framework, we translate the problem of coming up with adaptive security proofs to the problem of coming up with pebbling strategies that only require a succinct representation of each pebbling configuration.

We now proceed to give a high level overview of each of our results applying our general framework to specific problems, and refer to the main body for technical details.

1.1 Adaptive Secret Sharing for Monotone Circuits

Secret sharing schemes, introduced by Blakley [4] and Shamir [27], are methods that enable a dealer, who holds a secret piece of information, to distribute this secret among n parties such that any “qualified” subset of parties has enough information to reconstruct the secret, while any “unqualified” subset of parties learns nothing about it. The monotone collection of “qualified” subsets is known as an access structure. Any access structure admits a secret sharing scheme, but the share size could be exponential in n [14]. We are interested in efficient schemes in which the share size is polynomial (in n and possibly in a security parameter).

Many of the classical schemes for secret sharing are perfectly (information theoretically) secure. The largest class of access structures that admit such a (perfect and efficient) scheme was obtained by Karchmer and Wigderson [18] for the class of all functions that can be computed by monotone span programs. This result generalized a previous work of Benaloh and Leichter [3] (which, in turn, improved a result of Ito et al. [14]) that showed the same result for a smaller class of access structures: those functions that can be computed by monotone Boolean formulas. Under cryptographic hardness assumptions, efficient schemes for more general access structures are known (but security holds only against computationally bounded adversaries). In particular, in an unpublished work (mentioned in [1], see also Vinod et al. [28]), Yao showed how to realize schemes for access structures that are described by monotone circuits. This construction can be used for access structures which are known to be computed by monotone circuits but are not known to be computed by monotone span programs, e.g., directed connectivity [17, 24]. Komargodski et al. [21] showed how to realize the class of access structures described by monotone functions in \(\mathsf {NP}\) under the assumption that witness encryption for \(\mathsf {NP}\) [10] and one-way functions exist.

Selective vs. Adaptive Security. All of the schemes described above guarantee security against static adversaries, where the adversary chooses a subset of parties it controls before it sees any of the shares. A more natural security guarantee would be to require that even an adversary that chooses its set of parties in an adaptive manner (i.e., based on the shares it has seen so far) is unable to learn the secret (or any partial information about it).

It is known that the schemes that satisfy perfect security (including the works [3, 14, 18] mentioned above) actually satisfy this stronger notion of adaptive security. However, the situation for the schemes that are based on cryptographic assumptions (including Yao’s scheme and the scheme of [21]) is much less clear. Using random guessing (see Lemma 1) it can be shown that these schemes are adaptively secure, but this reduction loses an exponential (in the number of parties) factor in the security of the scheme. Additionally, as noted in [21], their scheme can be shown to be adaptively secure if the witness encryption scheme is extractable. The latter is a somewhat controversial assumption that we prefer to avoid.

Our Results. We analyze the adaptive security of Yao’s scheme under our framework and show that in some cases the security loss is much smaller than \(2^n\). Roughly, we show that if the access structure can be described by a monotone circuit of depth d with s gates (with unbounded fan-in and fan-out), the security loss is proportional to \(s^{O(d)}\). Thus, for shallow circuits our analysis shows that an exponential loss is avoidable.

To exemplify the usefulness of the result, consider, for instance, the directed st-connectivity access structure mentioned above. It is known that it can be computed by a monotone circuit of size \(O(n^3 \log n)\) and depth \(O(\log ^2 n)\), but its monotone formula and span-program complexity is \(2^{\varOmega (\log ^2 n)}\) [17, 24]. Thus, no efficient perfectly secure scheme is known, and our proof shows that Yao’s scheme for this access structure is secure based on the assumption that quasi-polynomially-secure one-way functions exist.

Yao’s Scheme. In this scheme, an access structure is described by a monotone circuit. The sharing procedure first labels the output wire of the circuit with the shared secret and then proceeds to assign labels to all wires of the circuit; in the end, the label on each input wire is included in the share of the corresponding party. The procedure for assigning labels is recursive: in each step it labels the input wires of a gate g assuming its output wires are already labeled (recall that we assume unbounded fan-in and fan-out, so there may be many input and output wires). To do so, we first sample a fresh encryption key s for a symmetric-key encryption scheme. If the gate is an AND gate, then we label the input wires with random strings conditioned on their XOR being s, and if the gate is an OR gate, then we label each input wire with s. In either case, we encrypt the labels of the output wires under s and include these ciphertexts, associated with the gate g, as part of every party’s share. Reconstruction works by reversing the above procedure from the leaves to the root. This scheme is indeed efficient for access structures that have polynomial-size monotone circuits.

Security Proof. Our goal is to show that as long as an adversary controls an unqualified set, he cannot learn anything about the secret. We start by outlining the selective security proof (following the argument of [28]), where the adversary first commits to the “corrupted” set. The proof is via a series of hybrids in which we slowly replace the ciphertexts associated with various gates g with bogus ciphertexts. Once we do this for the output gate, the shares become independent of the secret, which proves security. The gates for which we can replace the ciphertexts with bogus ones are the gates for which the adversary cannot compute the corresponding encryption key. Since the adversary controls an unqualified set, a sequence of such replacements that eventually results in replacing the encryption of the root gate must exist. Since in every hybrid we “handle” one gate and never consider it again, the number of hybrids is at most the number of gates in the circuit.

The problem with lifting this proof to the adaptive case is that it seems inherent to know the corrupted set of parties in order to know for which gates g to switch the ciphertexts from real to bogus (and in what order). However, in the adaptive game this set is not known during the sharing procedure. A naïve use of random guessing would result in an exponential security loss \(2^n\), where n is the number of parties.

To overcome this we associate each intermediate hybrid \( \mathsf{H}_i\) with a pebbling configuration in which each gate in the circuit is either pebbled (ciphertexts are bogus) or unpebbled (ciphertexts are real). The pebbling rules are:

  1. Can place or remove a pebble on any AND gate for which (at least) one input wire either is not corrupted or comes out of a gate with a pebble on it.

  2. Can place or remove a pebble on any OR gate for which all of the incoming wires either are non-corrupted input wires or come out of gates that all have pebbles on them.

The initial hybrid corresponds to the case in which all gates are unpebbled and the final hybrid corresponds to the case in which all gates are unpebbled except the root gate, which has a pebble. Now, any pebbling strategy that takes us from the initial configuration to the final one corresponds to a sequence of selective hybrids \( \mathsf{H}_i\). Furthermore, to prove indistinguishability of neighboring hybrids \( \mathsf{H}_i, \mathsf{H}_{i+1}\) we don’t need the adversary to commit to the entire set of corrupted parties ahead of time; it suffices if the adversary only commits to the pebble configuration in steps i and \(i+1\). Therefore, if the pebbling strategy has the property that each configuration requires few bits to describe, then we are able to use our framework. We show that for every corrupted set and any monotone circuit of depth d with s gates, there exists such a pebbling strategy, where the number of moves is roughly \(2^{O(d)}\) and each configuration has a very succinct representation: roughly \(d\cdot \log s\) bits. Plugging this into our framework, we get a proof of adaptive security with security loss proportional to \(s^{O(d)}\). We refer to Sect. 4 for the precise details.

1.2 Generalized Selective Decryption

Generalized Selective Decryption (GSD), introduced by Panjwani [23], is a game that captures the difficulty of proving adaptive security of certain protocols, most notably the Logical Key Hierarchy (LKH) multicast encryption protocol. On a high level, it deals with a scenario where we have many secret keys \(k_i\) and various ciphertexts encrypting one key under another (but no cycles). We discuss this problem in depth in the full version [15]; here we give a high-level overview of how our framework applies to it.

Let \((\mathsf{{Enc}},\mathsf{{Dec}})\) be a CPA-secure symmetric encryption scheme with (probabilistic) \(\mathsf{{Enc}}:\mathcal{K}\times \mathcal{M}\rightarrow \mathcal{C}\) and \(\mathsf{{Dec}}:\mathcal{K}\times \mathcal{C}\rightarrow \mathcal{M}\). We assume \(\mathcal{K}\subseteq \mathcal{M}\), i.e., we can encrypt keys. In the game, the challenger—either \(\mathsf{G}_\mathsf{L}\) or \(\mathsf{G}_\mathsf{R}\)—picks \(n+1\) random keys \(k_0,\ldots ,k_n\in \mathcal{K}\), and the adversary \({\mathsf{A}}\) is then allowed to make three types of queries:

  • Encryption query: on input \(({\mathtt {encrypt}},i,j)\) receives \(\mathsf{{Enc}}(k_i,k_j)\).

  • Corruption queries: on input \(({\mathtt {corrupt}},i)\) receives \(k_i\).

  • Challenge query (only one is allowed): on input \(({\mathtt {challenge}},i)\) receives \(k_i\) in the real game \(\mathsf{G}_\mathsf{L}\), and a random value in the random game \(\mathsf{G}_\mathsf{R}\).

We think of this game as generating a directed graph, with vertex set \(\mathcal{V}=\{0,\ldots ,n\}\), where every \(({\mathtt {encrypt}},i,j)\) query adds a directed edge \((i,j)\), and we say a vertex \(v_i\) is corrupted if a query \(({\mathtt {corrupt}},i)\) was made, or \(v_i\) can be reached from a corrupted vertex. The goal of the adversary is to distinguish the games \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\), with the restriction that the constructed graph has no cycles and the challenge vertex is a sink. To prove security, i.e., reduce the indistinguishability of \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) to the security of \(\mathsf{{Enc}}\), we can consider a selectivized version of this game where \({\mathsf{A}}\) must commit to the graph as described above (which uses \({<}n^2\) bits). The security of this selectivized game can then be reduced to the security of \(\mathsf{{Enc}}\) by a series of \({<}n^2\) hybrids, where a distinguisher for any two consecutive hybrids can be used to break the security of \(\mathsf{{Enc}}\) with the same advantage. Using random guessing followed by a hybrid argument we conclude that if \(\mathsf{{Enc}}\) is \(\delta \)-secure, the GSD game is \(\delta \cdot n^2 \cdot 2^{n^2}\)-secure. Thus, we lose a factor exponential in \(n^2\) in the reduction.

Fortunately, if we look at the actual protocols that GSD is supposed to capture, it turns out that the graphs that \({\mathsf{A}}\) can generate are not totally arbitrary. Two interesting cases are given by GSD restricted to graphs of bounded depth, and to trees. For these cases better reductions exist. Panjwani [23] shows that if the adversary is restricted to play the game such that the resulting graph is of depth at most d, a reduction losing a factor \((2n)^d\) exists. Moreover, Fuchsbauer et al. [8] give a reduction losing a factor \(n^{3\log n}\) when the underlying graph is a tree. In the full version we prove these results in our framework. Our proofs are much simpler than the original ones, especially the proof of [23], which is very long and technical. This is thanks to our modular approach, where our general framework takes care of the delicate probabilistic arguments and essentially leaves us with the clean combinatorial task of designing pebbling strategies, for various graphs, in which each pebbling configuration has a succinct description. The generic connection between adaptive security proofs of the GSD problem and graph pebbling is entirely new to this work.

GSD on a Path. Let us sketch the proof idea for the [8] result, but for an even more restricted case where the graph is a path visiting every node exactly once. In other words, there is a permutation \(\sigma \) over \(\{0,\ldots ,n\}\) and the adversary’s queries are of the form \(({\mathtt {encrypt}},\sigma (i-1),\sigma (i))\) and \(({\mathtt {challenge}},\sigma (n))\). We first consider the selective game where \({\mathsf{A}}\) must commit to this permutation \(\sigma \) ahead of time. Let \( \mathsf{H}_\mathsf{L}, \mathsf{H}_\mathsf{R}\) be the selectivized versions of \(\mathsf{G}_\mathsf{L}\), \(\mathsf{G}_\mathsf{R}\) respectively.

To prove selective security, we can define a sequence of hybrid games \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0,\ldots , \mathsf{H}_{\ell } = \mathsf{H}_\mathsf{R}\). Each hybrid is defined by a path, \(0\rightarrow 1\rightarrow \ldots \rightarrow n\), with a subset of the edges holding a black pebble. In the hybrid games, a pebble on \((i,i+1)\) means that instead of answering the query \(({\mathtt {encrypt}},\sigma (i),\sigma (i+1))\) with the “real” answer \(\mathsf{{Enc}}(k_{\sigma (i)},k_{\sigma (i+1)})\), we answer it with a “fake” answer \(\mathsf{{Enc}}(k_{\sigma (i)},r)\) for a random r. The goal is to move from a hybrid with no pebbles (this corresponds to \( \mathsf{H}_\mathsf{L}\)) to one with a single black pebble on the “sink” edge \((n-1, n)\) (this corresponds to \( \mathsf{H}_\mathsf{R}\)). We can prove that neighboring hybrids are indistinguishable via a reduction from CPA-security as long as the pebbling configurations are only modified via the following legal moves:

  1. We can put/remove a pebble on the source edge (0, 1) at any time.

  2. We can put/remove a pebble on an edge \((i,i+1)\) if the preceding edge \((i-1,i)\) has a pebble.

This is because adding/removing a pebble on \((i,i+1)\) means changing what we encrypt under the key \(k_{\sigma (i)}\); therefore we need to make sure that either the edge is a source edge or there is already a pebble on the preceding edge, which ensures that the key \(k_{\sigma (i)}\) is never encrypted under some other key.

The simplest “basic pebbling strategy” consists of 2n moves where we add pebbles on the path \(0\rightarrow 1\rightarrow \ldots \rightarrow n\), one by one starting on the left and then remove one by one starting on the right, keeping only the pebble on the sink edge \((n-1, n)\). This is illustrated in Fig. 1(a) for \(n=8\). The strategy uses n pebbles. However, there are other pebbling strategies that allow us to trade off more moves for fewer pebbles. For example there is a “recursive strategy” (recursively pebble the middle vertex, then recursively pebble the right-most vertex, then recursively remove the pebble from the middle vertex) that uses at most \(\log n+1\) pebbles (instead of n), but requires \(3^{\log n}+1\) moves (instead of just 2n). This is illustrated in Fig. 1(b).

Fig. 1. “Classical” hybrid argument vs. improved hybrid argument. In both diagrams, the edges that carry a pebble are faked. (a) Illustration of the classical hybrids \( \mathsf{H}_0,\ldots , \mathsf{H}_{15}\) for GSD on a path graph with \(n=8\) edges: the number of hybrids is \(2n=16\), and the number of fake edges is at most n. (b) A sequence of hybrids \(\tilde{\mathsf{H}}_0,\ldots ,\tilde{\mathsf{H}}_{27}\) that use fewer fake edges: even though the number of hybrids is \(3^{\log {n}}+1=28\), the number of fake edges is at most \(\log {n}+1=4\). The argument on the right is identical to the one using nested hybrids in [8].

As we described, each pebbling strategy with \(\ell \) moves gives us a sequence of hybrids \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0,\ldots , \mathsf{H}_{\ell } = \mathsf{H}_\mathsf{R}\) that allows us to prove selective security. Furthermore, we can prove relatively easily that neighboring hybrids \( \mathsf{H}_j, \mathsf{H}_{j+1}\) are indistinguishable even if the adversary doesn’t commit to the entire permutation \(\sigma \) but only to the value \(\sigma (i)\) of vertices i where either \( \mathsf{H}_{j}\) or \( \mathsf{H}_{j+1}\) has a pebble on the edge \((i-1,i)\). Using our framework, we therefore get a proof of adaptive security where the security loss is \(\ell \cdot n^p\) where p is the maximum number of pebbles used and \(\ell \) is the number of pebbling moves. In particular, if we use the recursive pebbling strategy described above we only suffer a quasipolynomial security loss \(3^{\log n}\cdot n^{\log n+1}\), as compared with \(2n\cdot (n+1)!\) for naïve random guessing where the adversary commits to the entire permutation \(\sigma \).

GSD on Low Depth and Other Families of Graphs. The proof outline for GSD on paths is just a very special case of our general result for GSD for various classes of graphs, which we discuss in the full version. If we consider a class of graphs which can be pebbled using \(\ell \) pebbling configurations, each containing at most q pebbles, we get a reduction showing that GSD for this class is \(\delta \cdot \ell \cdot 2^q\) secure, assuming the underlying \(\mathsf{{Enc}}\) scheme is \(\delta \)-secure.

Unfortunately, this approach will not gain us much for graphs with high in-degree: we can only put a pebble on an edge \((i,j)\) if all the edges \((*,i)\) going into node i are pebbled. So if we consider graphs which can have large in-degree d, any pebbling strategy must at some point have pebbled all the parents of i, and thus we lose at least a factor \(2^d\) in the reduction. But remember that to apply our Theorem 2, we just need to be able to “compress” the information required to simulate the hybrids. So even if the hybrids correspond to configurations with many pebbles, that is fine as long as we can generate a short hint which allows us to emulate them (we use the same idea in the proof of adaptive security of the secret sharing scheme for monotone circuits with large fan-in).

Consider the selective GSD game, where the adversary commits to all of its queries; we can think of these as defining a DAG, where each edge comes with an index indicating in which query this edge was added. Assume the adversary is restricted to choose DAGs of depth l (but with no bound on the in-degree). One can show that there exists a pebbling sequence (of length \((2n)^l\)) such that in any pebbling configuration, all pebbles lie on a path from a sink to a root (which is of length at most l), or on edges going into this path. Moreover, we can ensure that in any configuration the following holds: if for a node j on this path there is a pebble on edge \((i,j)\) with index t, then all edges of the form \((*,j)\) with index \({<}t\) must also have a pebble.

To describe such a configuration, we output the \({\le }l\) nodes on the path, specify for every edge on this path whether it is pebbled, and, for every node j on the path, the number of edges going into j that have a pebble (note that there are at most \(2^l n^{2l}\) choices for this hint). The hint is sufficient to emulate a hybrid: for any query \(({\mathtt {encrypt}},i,j)\) the adversary makes, we will know whether the corresponding edge has a pebble or not. This is clear if the edge \((i,j)\) is on the path, as we know this path in full. But it also holds for the other edges that can hold a pebble, where j is on the path but i is not: we just have to count which query of the form \((*,j)\) this is, since we are given a number c telling us that the first c such edges carry a pebble.

Applying Theorem 2, we recover Panjwani’s result [23], showing that the GSD game restricted to graphs of depth l loses only a factor \(n^{O(l)}\) in the reduction.

1.3 Yao’s Garbled Circuits

Garbled circuits, introduced by Yao in (oral presentations of) [29, 30], can be used to garble a circuit C and an input x in a way that reveals C(x) but hides everything else. More precisely, a garbling scheme has three procedures: one to garble the circuit C and produce a garbled circuit \(\widetilde{C}\), one to garble the input x and produce a garbled input \(\widetilde{x}\), and one that evaluates the garbled circuit \(\widetilde{C}\) on the garbled input \(\widetilde{x}\) to get C(x). Furthermore, to prove security, there must be a simulator that only gets the output of the computation C(x) and can simulate the garbled circuit \(\widetilde{C}\) and input \(\widetilde{x}\), such that no PPT adversary can distinguish them from the real garbling.

Adaptive vs. Selective Security. In the adaptive setting, the adversary \({\mathsf{A}}\) first chooses the circuit C and gets back the garbled circuit \(\widetilde{C}\), then chooses the input x, and gets back garbled input \(\widetilde{x}\). The adversary’s goal is to decide whether he was interacting with the real garbling scheme or the simulator. In the selective setting, the adversary has to choose the circuit C as well as the input x at the very beginning and only then gets back \(\widetilde{C}, \widetilde{x}\).

Prior Work. The work of Bellare et al. [2] raised the question of whether Yao’s construction or indeed any construction of garbled circuits achieves adaptive security. The work of Hemenway et al. [12] gave the first construction of non-trivial adaptively secure garbled circuits based on one-way functions, by modifying Yao’s construction with an added layer of encryption having some special properties. Most recently, the work of Jafargholi and Wichs [16] gives the first analysis of adaptive security for Yao’s unmodified garbled circuit construction which significantly improves on the parameters of trivial random guessing. See [16] for a more comprehensive introduction and broader background on garbled circuits and adaptive security.

Here, we present the work of [16] as a special case of our general framework. Indeed, the work of [16] already implicitly follows our general framework fairly closely and therefore we only give a high level overview of how it fits into it.

Selective Hybrids. We start by outlining the selective security proof for Yao’s garbled circuits, following the presentation of [12, 16], which is in turn based on the proof of Lindell and Pinkas [22]. Essentially, the proof proceeds via a series of hybrids which modify one garbled gate at a time from the Real distribution to a Simulated one. However, this cannot be done directly in one step and instead requires going through an intermediate distribution called InputDep (we explain the name later). There are important restrictions on the order in which these steps can be taken:

  1. We can switch a gate from Real to InputDep (and vice versa) if it is at the input level or if its predecessor gates are already InputDep.

  2. We can switch a gate from InputDep to Simulated (and vice versa) if it is at the output level or if its successor gates are already Simulated.

The simplest strategy to switch all gates from Real to Simulated is to start with the input level and go up one level at a time switching all gates to InputDep. Then start with the output level and go down one level at a time switching all gates to Simulated. This corresponds to the basic proof of selective security of Yao garbled circuits.

However, the above is not the only possibility. In particular, any strategy for switching all gates from Real to Simulated following rules (1) and (2) corresponds to a sequence of hybrid games for proving selective security. We can identify the above with a pebbling game where one can place pebbles on the gates of the circuit. The Real distribution corresponds to not having a pebble and there are two types of pebbles corresponding to the InputDep and Simulated distributions. The goal is to start with no pebbles and finish by placing a Simulated pebble on every gate in the circuit while only performing legal moves according to rules (1) and (2) above. Every pebbling strategy gives rise to a sequence of hybrid games \(\mathsf{H}_0, \mathsf{H}_1, \ldots , \mathsf{H}_\ell \) for proving selective security, where the number of hybrids \(\ell \) corresponds to the number of moves and each hybrid \(\mathsf{H}_i\) is defined by the configuration of pebbles after i moves.

From Selective to Adaptive. The problem with translating selective security proofs into the adaptive setting lies with the InputDep distribution of a gate. This distribution depends on the input x (hence the name) and, in the adaptive setting, the input x that the adversary will choose is not yet known at the time when the garbled circuit is created. To be more precise, the InputDep distribution of a gate i only depends on the 1-bit value going over the output wire of that gate during the computation C(x). Moreover, if we take any two fixed hybrid games \(\mathsf{H}_i, \mathsf{H}_{i+1}\) corresponding to two neighboring pebble configurations (ones which differ by a single move) we can prove indistinguishability even if the adversary does not commit to the entire n-bit input x ahead of time but only commits to the bits going over the output wires of all gates that are in InputDep mode in either configuration. This means that as long as the pebbling strategy only uses m pebbles of the InputDep type at any point in time, each pair of hybrids \(\mathsf{H}_i, \mathsf{H}_{i+1}\) can be proved indistinguishable in a partially selective setting where the adversary only commits to m bits of information about his input ahead of time, rather than committing to the entire n-bit input x. Using our framework, this shows that whenever there is a pebbling strategy for the circuit C that requires \(\ell \) moves and uses at most m pebbles of the InputDep type, we can translate the selective hybrids into a proof of adaptive security where the security loss is \(\ell \cdot 2^m\).

It turns out that for any graph of depth d there is a pebbling strategy that uses O(d) pebbles and \(\ell = 2^{O(d)}\) moves, meaning that we can prove adaptive security with a \(2^{O(d)}\) security loss. This leads to a proof of adaptive security for \(\mathsf {NC}^1\) circuits where the reduction has only polynomial security loss, but more generally we can often get a much smaller security loss than the trivial \(2^n\) bound achieved by naïve random guessing.

1.4 Constrained Pseudorandom Functions

Goldreich et al. [11] introduced the notion of a pseudorandom function (PRF). A PRF is an efficiently computable keyed function \(\mathsf{F}:\mathcal{K}\times \mathcal{X}\rightarrow \mathcal{Y}\), where \(\mathsf{F}(k,\cdot )\), instantiated with a random key \(k\leftarrow \mathcal{K}\), cannot be distinguished with non-negligible advantage from a function chosen uniformly at random from the set of all functions \(\mathcal{X}\rightarrow \mathcal{Y}\). More recently, the notion of constrained pseudorandom functions (CPRF) was introduced as an extension of PRFs, by Boneh and Waters [5], Boyle et al. [6] and Kiayias et al. [19], independently. Informally, a constrained PRF allows the holder of a master key to derive keys which are constrained to a set, in the sense that such a key can be used to evaluate the PRF on that set, while the outputs on inputs outside of this set remain indistinguishable from random.

Goldreich et al., in addition to formally defining PRFs, gave a construction of a PRF from any length-doubling pseudorandom generator (PRG). Their construction is depicted in Fig. 2. All three of the aforementioned results [5, 6, 19] show that this GGM construction already gives a so-called “prefix-constrained” PRF, which is a CPRF where for any \(x\in \{0,1\}^*\), one can give out keys which allow evaluating the PRF on all inputs whose prefix is x. This is a simple but already very interesting class of CPRFs, as it can be used to construct a punctured PRF, which in turn is a major tool in constructing various sophisticated primitives based on indistinguishability obfuscation (see, for example, [5, 13, 26]).

Fig. 2. Illustration of the GGM PRF. Every left child \(k_{x\Vert 0}\) of a node \(k_{x}\) is defined as the first half of \(\mathsf{PRG}(k_{x})\), and the right child \(k_{x\Vert 1}\) as the second half. The circled node corresponds to \(\mathsf{GGM}(k_\emptyset ,010)\).

Prior Work. To show that the GGM construction is a prefix-constrained PRF one must show how to transform an adversary that breaks GGM as a prefix-constrained PRF into a distinguisher for the underlying PRG. The proofs in [5, 6, 19] only show selective security, where the adversary must initially commit to the input on which he wants to be challenged in the security game; this step loses a factor of 2n in tightness. The selective proof can then be turned into a proof against adaptive adversaries via random guessing, losing an additional factor of \(2^n\), exponential in the input length n.

Fuchsbauer et al. [9] showed that it is possible to achieve adaptive security while losing only a factor of \((3q)^{\log n}\), where q denotes the number of queries made by the adversary—if q is polynomial, the loss is not exponential as before, but just quasi-polynomial. The bound relies on the so-called “nested hybrids” technique. Informally, the idea is to iterate random guessing and hybrid arguments several times. The random guessing is done in a way where one only has to guess some tiny amount of information, which, although insufficient to get a full reduction using the hybrid argument, nevertheless reduces the complexity of the task significantly. Every such iteration “cuts” the domain in half, so after logarithmically many iterations the reduction is done. If the number of iterations is small, and the amount of information guessed in each iteration tiny, this can still lead to a reduction with much smaller loss than “single shot” random guessing.

Our Results. We cast the result of [9] in our framework, giving an arguably simpler and more intuitive proof. To this end, we first describe the GGM construction and sketch its security proof.

Given a \(\mathsf{PRG}:\{0,1\}^m\rightarrow \{0,1\}^{2m}\), the PRF \(\mathsf{GGM}:\{0,1\}^m\times \{0,1\}^n\rightarrow \{0,1\}^m\) is defined recursively as

$$\begin{aligned} \mathsf{GGM}(k,x)=k_x\text { where }k_\emptyset =k\text { and }k_{x\Vert 0}\Vert k_{x\Vert 1}=\mathsf{PRG}(k_x). \end{aligned}$$

The construction is also a prefix-constrained PRF: given a key \(k_x\) for any \(x\in \{0,1\}^*\), one can evaluate \(\mathsf{GGM}(k,x')\) for all \(x'\) whose prefix is x.

The security of \(\mathsf{GGM}\) as a PRF is proven in [11]. In particular, they show that if an adversary exists who distinguishes \(\mathsf{GGM}(k,\cdot )\) (real experiment) from a uniformly random function (random experiment) with advantage \(\epsilon \) making q (adaptive) queries, then an adversary of roughly the same complexity exists who distinguishes \(\mathsf{PRG}(U_m)\) from \(U_{2m}\) with advantage \(\epsilon / nq\). Thus if we assume that \(\mathsf{PRG}\) is \(\delta \)-secure, then \(\mathsf{GGM}\) is \(\delta n q \)-secure against any q-query adversary of the same complexity. This is one of the earliest applications of the hybrid argument.

The security definition for CPRFs is quite different from that of standard PRFs: the adversary gets to query the CPRF \(\mathsf{F}(k,\cdot )\) in both the real and the random experiment (and can ask for constrained keys, not just regular outputs), and only at the very end does the adversary choose a challenge query \(x^*\), which is then answered with either the correct CPRF output \(\mathsf{F}(k,x^*)\) (in the real experiment) or a random value (in the random experiment). In the selective version of these security experiments, the adversary has to choose the challenge \(x^*\) before making any queries. In particular, for the case of prefix-constrained PRFs, the experiment is as follows. The challenger samples \(k\in \{0,1\}^m\) uniformly at random. The adversary \(\mathcal {A}\) first commits to some \(x^*\in \{0,1\}^n\). Then it can make constrain queries \(x\in \{0,1\}^*\) for any x which is not a prefix of \(x^*\), and receives the constrained key \(k_x\) in return. Finally, \(\mathcal {A}\) gets either \(\mathsf{GGM}(k,x^*)\) (in the real game) or a random value, and must guess which is the case.

Selective Hybrids. A naïve sequence of selective hybrids, which is of length 2n, relies just on the knowledge of \(x^*\). For \(n=8\) the corresponding 16 hybrid games are illustrated in Fig. 1a. Each pebbling configuration of the path corresponds to a hybrid, and it “encodes” how the value of the function \(\mathsf{F}\) is computed on the challenge input \(x^*\) (and this determines how the function is computed on the rest of the inputs too). An edge that does not carry a pebble is computed normally, as defined in \(\mathsf{GGM}\)—i.e., if the ith edge is not pebbled then \(k_{x^*[1,i-1]\Vert 0}\Vert k_{x^*[1,i-1]\Vert 1}\) is set to \(\mathsf{PRG}(k_{x^*[1,i-1]})\), where for \(x\in \{0,1\}^n\), x[1, i] denotes its i-bit prefix. On the other hand, for an edge with a pebble, we replace the \(\mathsf{PRG}\) output with a random value—i.e., \(k_{x^*[1,i-1]\Vert 0}\Vert k_{x^*[1,i-1]\Vert 1}\) is set to a uniformly random string in \(\{0,1\}^{2m}\). It is not hard to see that any distinguisher for two consecutive hybrids can be directly used to break the \(\mathsf{PRG}\) with the same advantage by embedding the \(\mathsf{PRG}\)-challenge—which is either \(U_{2m}\) or \(\mathsf{PRG}(U_m)\)—at the right place. Using random guessing we can get adaptive security losing an additional factor \(2^n\) in the distinguishing advantage by initially guessing \(x^*\in \{0,1\}^n\).

From Selective to Adaptive. Before we explain the improved reduction, we take a step back and consider an even more selective game where \({\mathsf{A}}\) must commit, in addition to the challenge query \(x_q=x^*\), also to the constrain queries \(\{x_1,\ldots ,x_{q-1}\}\). We can use the knowledge of \(x_1,\ldots ,x_{q-1}\) to get a better sequence of hybrids: this requires two tricks. First, as in GSD on a path, instead of using the pebbling strategy in Fig. 1a, we switch to the recursive pebbling sequence in Fig. 1b. Second, we need a more concise “indexing” for the pebbles: unlike in the proof for GSD, here we can’t simply give the positions of the (up to \(\log n+1\)) pebbles as a hint to simulate the hybrids, as the graph has exponential size, thus even the position of a single pebble would require as many bits to encode as the challenge \(x^*\). Instead, we assume there’s an upper bound q on the number of queries made by the adversary. For a pebble on the ith edge, we just give the index of the first constrain query whose i-bit prefix coincides with that of \(x^*\), i.e., the minimum j such that \(x_j[1,i]=x^*[1,i]\). This information is sufficient to tell when exactly during the experiment we have to compute a value that corresponds to a pebbled edge.

As there are \(3^{\log n}\) hybrids, and each hint comes from a set of size \(q^{\log n}\) (i.e., a value \(\le q\) for every pebble), our Theorem 2 implies that \(\mathsf{GGM}\) is a \(\delta (3q)^{\log n}\)-secure prefix-constrained PRF if \(\mathsf{PRG}\) is \(\delta \)-secure. Details are given in the full version [15].

2 Notation

Throughout, we use \(\lambda \) to denote the security parameter. We use capital letters like X to denote variables, small letters like x to denote concrete values, calligraphic letters like \(\mathcal X\) to denote sets and sans-serif letters like \(\mathsf X\) to denote algorithms. Our algorithms can all be modelled as (potentially interactive, probabilistic, polynomial time) Turing machines. With \(\mathsf{X}\equiv \mathsf{Y}\) we denote that \(\mathsf{X}\) has exactly the same input/output distribution as \(\mathsf{Y}\), and \(X\sim Y\) denotes that X and Y have the same distribution. \(U_\mathcal{X}\) denotes the uniform distribution over \(\mathcal{X}\). In particular, \(U_n\) denotes the uniform distribution over \(\{0,1\}^n\). For a set \(\mathcal{X}\), \(s_\mathcal{X}\) denotes the complexity of sampling uniformly at random from \(\mathcal{X}\). For \(a,b\in \mathbb {N}\), \(a\le b\), by \([a,b]\) we denote the set \(\{a,a+1,\ldots ,b\}\). For \(x\in \{0,1\}^n\) we denote by x[1, i] its i-bit prefix.

3 The Framework

We consider a game described via a challenger \(\mathsf{G}\) which interacts with an adversary \({\mathsf{A}}\). At the end of the interaction, \(\mathsf{G}\) outputs a decision bit b and we let \(\langle {\mathsf{A}},\mathsf{G}\rangle \) denote the random variable corresponding to that bit.

Definition 1

We say that two games defined via challengers \(\mathsf{G}_0\) and \(\mathsf{G}_1\) are \((s,\varepsilon )\)-indistinguishable if for any adversary \({\mathsf{A}}\) of size at most s:

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1] | \le \varepsilon . \end{aligned}$$

We say that two games are perfectly indistinguishable and write \(\mathsf{G}_0 \equiv \mathsf{G}_1\) if they are \((\infty , 0)\)-indistinguishable.

Selectivized Games. We define two operations that convert adaptive or partially selective games into further selective games.

Definition 2

(Selectivized Game). Given an (adaptive) game \(\mathsf{G}\) and some function \(g:\{0,1\}^* \rightarrow \mathcal{W}\) we define the selectivized game \( \mathsf{H}= \mathsf{SEL}_\mathcal{W}[\mathsf{G}, g]\) which works as follows. The adversary \({\mathsf{A}}\) first sends a commitment \(w \in \mathcal{W}\) to \( \mathsf{H}\). Then \( \mathsf{H}\) runs the challenger \(\mathsf{G}\) against \({\mathsf{A}}\), at the end of which \(\mathsf{G}\) outputs a bit \(b'\). Let \(\mathsf{transcript}\) denote all communication exchanged between \(\mathsf{G}\) and \({\mathsf{A}}\). If \(g(\mathsf{transcript}) = w\) then \( \mathsf{H}\) outputs the bit \(b'\) and else it outputs 0. See Fig. 3(a).

Note that the selectivized game gets a commitment w from the adversary but essentially ignores it during the rest of the game. Only at the very end of the game does it check that the commitment matches what actually happened during the game.

Definition 3

(Further Selectivized Game). Assume \(\mathsf{\hat{H}}\) is a (partially selective) game which expects to receive some commitment \(u\in \mathcal{U}\) from the adversary in the first round. Given functions \(g:\{0,1\}^* \rightarrow \mathcal{W}\) and \(h:\mathcal{W}\rightarrow \mathcal{U}\) we define the further selectivized game \( \mathsf{H}= \mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}, g, h]\) as follows. The adversary \({\mathsf{A}}\) first sends a commitment \(w \in \mathcal{W}\) to \( \mathsf{H}\) and \( \mathsf{H}\) begins running \(\mathsf{\hat{H}}\) and passes it \(u= h(w)\). It then continues running the game between \(\mathsf{\hat{H}}\) and \({\mathsf{A}}\) at the end of which \(\mathsf{\hat{H}}\) outputs a bit \(b'\). Let \(\mathsf{transcript}\) denote all communication exchanged between \(\mathsf{\hat{H}}\) and \({\mathsf{A}}\). If \(g(\mathsf{transcript}) = w\) then \( \mathsf{H}\) outputs the bit \(b'\) and else it outputs 0. See Fig. 3(b).

Note that if \(\mathsf{\hat{H}}\) is a (partially selective) game where the adversary sends some commitment \(u\), then in the further selectivized game the adversary might have to commit to more information w. The further selectivized game essentially ignores w and only relies on the partial information \(u= h(w)\) during the course of the game, but at the very end it still checks that the full commitment w matches what actually happened during the game.

Fig. 3. Selectivizing. (a): \(\mathsf{SEL}_\mathcal{W}[\mathsf{G}, g]\), and (b): \(\mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}, g, h]\). The symbol \(\tau \) is short for transcript, the nodes with \(g\) and \(h\) compute the respective functions, whereas the node with \(=\) outputs a bit b as prescribed in the consistency check.

Random Guessing. We first present the basic reduction using random guessing.

Lemma 1

Assume we have two games defined via challengers \(\mathsf{G}_0\) and \(\mathsf{G}_1\) respectively. Let \(g:\{0,1\}^* \rightarrow \mathcal{W}\) be an arbitrary function and define the selectivized games \( \mathsf{H}_b = \mathsf{SEL}_\mathcal{W}[\mathsf{G}_b,g]\) for \(b\in \{0,1\}\). If \( \mathsf{H}_0\), \( \mathsf{H}_1\) are \((s, \varepsilon )\)-indistinguishable then \(\mathsf{G}_0\), \(\mathsf{G}_1\) are \((s - s_\mathcal{W}, \varepsilon \cdot |\mathcal{W}|)\)-indistinguishable, where \(s_\mathcal{W}\) denotes the complexity of sampling uniformly at random from \(\mathcal{W}\).

Proof

We prove the contrapositive. Assume that there is an adversary \({\mathsf{A}}\) of size \(s'=s-s_\mathcal{W}\) such that

$$\begin{aligned} \left| \Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1]-\Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1]\right| > \varepsilon \cdot |\mathcal{W}|. \end{aligned}$$

Let \({\mathsf{A}}^*\) be the adversary that first chooses a uniformly random \(w \leftarrow \mathcal{W}\) and then runs \({\mathsf{A}}\). Then for \(b \in \{0,1\}\):

$$\begin{aligned} \Pr [\langle {\mathsf{A}}^*, \mathsf{H}_b\rangle =1] = \Pr [\langle {\mathsf{A}},\mathsf{G}_b\rangle =1]/|\mathcal{W}| \end{aligned}$$

and therefore

$$\begin{aligned} \left| \Pr [\langle {\mathsf{A}}^*, \mathsf{H}_0\rangle =1] - \Pr [\langle {\mathsf{A}}^*, \mathsf{H}_1\rangle =1]\right| > \varepsilon . \end{aligned}$$

Moreover, since \({\mathsf{A}}^*\) is of size \(s' + s_\mathcal{W}= s\) this shows that \( \mathsf{H}_0\) and \( \mathsf{H}_1\) are not \((s, \varepsilon )\)-indistinguishable.

Partially Selective Hybrids. Consider the following setup. We have two adaptive games \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\). For some function \(g:\{0,1\}^* \rightarrow \mathcal{W}\) we define the selectivized games \( \mathsf{H}_\mathsf{L}= \mathsf{SEL}_\mathcal{W}[\mathsf{G}_\mathsf{L}, g]\), \( \mathsf{H}_\mathsf{R}= \mathsf{SEL}_\mathcal{W}[\mathsf{G}_\mathsf{R}, g]\) where the adversary commits to some information \(w \in \mathcal{W}\). Moreover, to show the indistinguishability of \( \mathsf{H}_\mathsf{L}, \mathsf{H}_\mathsf{R}\) we have a sequence of \(\ell \) (selective) hybrid games \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0, \mathsf{H}_1,\ldots , \mathsf{H}_\ell = \mathsf{H}_\mathsf{R}\).

If we only assume that neighboring hybrids \( \mathsf{H}_{i}, \mathsf{H}_{i+1}\) are indistinguishable then by combining the hybrid argument and random guessing we know that \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) are indistinguishable at a security loss of \(\ell \cdot |\mathcal{W}|\).

Theorem 1

Assume that for each \(i \in \{0,\ldots ,\ell -1\}\), the games \( \mathsf{H}_{i}, \mathsf{H}_{i+1}\) are \((s, \varepsilon )\)-indistinguishable. Then \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) are \((s - s_\mathcal{W}, \varepsilon \cdot \ell \cdot |\mathcal{W}|)\)-indistinguishable, where \(s_\mathcal{W}\) denotes the complexity of sampling uniformly at random from \(\mathcal{W}\).

Proof

Follows from Lemma 1 and the hybrid argument.

Our goal is to avoid the loss of \(|\mathcal{W}|\) in the above theorem. To achieve this, we will assume a stronger condition: not only are neighboring hybrids \( \mathsf{H}_{i}, \mathsf{H}_{i+1}\) indistinguishable, but they are selectivized versions of less selective games \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) which are already indistinguishable. In particular, we assume that for each pair of neighboring hybrids \( \mathsf{H}_{i}, \mathsf{H}_{i+1}\) there exist some less selective hybrids \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) where the adversary only commits to much less information \(h_i(w) \in \mathcal{U}\) instead of \(w \in \mathcal{W}\). In more detail, for each i there is some function \(h_i:\mathcal{W}\rightarrow \mathcal{U}\) that lets us interpret \( \mathsf{H}_{i+b}\) as a selectivized version of \(\mathsf{\hat{H}}_{i,b}\) via \( \mathsf{H}_{i+b} \equiv \mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}_{i,b}, g, h_i]\). In that case, the next theorem shows that we only get a security loss proportional to \(|\mathcal{U}|\) rather than \(|\mathcal{W}|\). Note that different pairs of “less selective hybrids” \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) rely on completely different partial information \(h_i(w)\) about the adversary’s choices. Moreover, the “less selective” hybrid that we associate with each \( \mathsf{H}_i\) can be different when we compare \( \mathsf{H}_{i-1}, \mathsf{H}_i\) (in which case it is \(\mathsf{\hat{H}}_{i-1,1}\)) and when we compare \( \mathsf{H}_i\) and \( \mathsf{H}_{i+1}\) (in which case it is \(\mathsf{\hat{H}}_{i,0}\)).

Theorem 2

(main). Let \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) be two adaptive games. For some function \(g:\{0,1\}^* \rightarrow \mathcal{W}\) we define the selectivized games \( \mathsf{H}_\mathsf{L}= \mathsf{SEL}_\mathcal{W}[\mathsf{G}_\mathsf{L}, g]\), \( \mathsf{H}_\mathsf{R}= \mathsf{SEL}_\mathcal{W}[\mathsf{G}_\mathsf{R}, g]\). Let \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0, \mathsf{H}_1,\ldots , \mathsf{H}_\ell = \mathsf{H}_\mathsf{R}\) be some sequence of hybrid games.

Assume that for each \(i \in \{0,\ldots ,\ell -1\}\), there exists a function \(h_i:\mathcal{W}\rightarrow \mathcal{U}\) and games \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) such that:

$$\begin{aligned} \mathsf{H}_{i} \equiv \mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}_{i,0}, g, h_i],~~~ \mathsf{H}_{i+1} \equiv \mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}_{i,1}, g, h_i]. \end{aligned}$$
(1)

Furthermore, assume that \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) are \((s, \varepsilon )\)-indistinguishable. Then \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) are \((s - s_\mathcal{U}, \varepsilon \cdot \ell \cdot |\mathcal{U}|)\)-indistinguishable, where \(s_\mathcal{U}\) denotes the complexity of sampling uniformly at random from \(\mathcal{U}\).

Proof

Assume that \({\mathsf{A}}\) is an adaptive distinguisher for \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) of size \(s'\) such that

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_\mathsf{L}\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_\mathsf{R}\rangle =1] | > \varepsilon '. \end{aligned}$$

Let \({\mathsf{A}}^*\) be a fully selective distinguisher that guesses \(w \leftarrow \mathcal{W}\) uniformly at random in the first round and then runs \({\mathsf{A}}\). By the same argument as in Lemma 1 and Theorem 1 we know that there exists some \(i\in [0,\ell )\) such that:

$$\begin{aligned} |\Pr [\langle {\mathsf{A}}^*, \mathsf{H}_i\rangle =1] - \Pr [\langle {\mathsf{A}}^*, \mathsf{H}_{i+1}\rangle =1] | \ge \varepsilon ' /(\ell \cdot |\mathcal{W}|) \end{aligned}$$
(2)

Let \({\mathsf{A}}'\) be a partially selective distinguisher that guesses \(u\leftarrow \mathcal{U}\) uniformly at random in the first round and then runs \({\mathsf{A}}\). We want to relate the probabilities \(\Pr [\langle {\mathsf{A}}^*, \mathsf{H}_{i+b}\rangle =1]\) and \(\Pr [\langle {\mathsf{A}}',\mathsf{\hat{H}}_{i,b}\rangle =1]\).

Recall that the game \(\langle {\mathsf{A}}^*, \mathsf{H}_{i+b}\rangle \) consists of \({\mathsf{A}}^*\) selecting a uniformly random value \(w \leftarrow \mathcal{W}\) (which we denote by the random variable W) and then running \({\mathsf{A}}\) against \(\mathsf{\hat{H}}_{i,b}(u)\) for \(u=h_i(w)\) (where \(\mathsf{\hat{H}}_{i,b}(u)\) denotes the challenger \(\mathsf{\hat{H}}_{i,b}\) that gets the commitment \(u\) in the first round), which results in some \(\mathsf{transcript}\) and an output bit \(b^*\); if \(g(\mathsf{transcript}) = w\) the final output is \(b^*\) and else 0.

Similarly, the game \(\langle {\mathsf{A}}',\mathsf{\hat{H}}_{i,b}\rangle \) consists of \({\mathsf{A}}'\) selecting a uniformly random value \(u\leftarrow \mathcal{U}\) (which we denote by the random variable U) and then we run \({\mathsf{A}}\) against \(\mathsf{\hat{H}}_{i,b}(u)\). Therefore:

$$\begin{aligned} \Pr [&\langle {\mathsf{A}}^*, \mathsf{H}_{i+b}\rangle =1] \\&=\sum _{u\in \mathcal{U}} \Pr [\underbrace{h_i(W)=u}_{\text {I}}]\cdot \Pr [\underbrace{\langle {\mathsf{A}},\mathsf{\hat{H}}_{i,b}(u)\rangle =1}_{\text{ II }}]\cdot \Pr [W= g(\mathsf{transcript})|\text {I},\text {II}] \\&=\sum _{u\in \mathcal{U}}\frac{|h_i^{-1}(u)|}{|\mathcal{W}|}\cdot \Pr [\langle {\mathsf{A}},\mathsf{\hat{H}}_{i,b}(u)\rangle =1]\cdot \frac{1}{|h_i^{-1}(u)|} \\&=\frac{1}{|\mathcal{W}|}\cdot \sum _{u\in \mathcal{U}}\Pr [\langle {\mathsf{A}},\mathsf{\hat{H}}_{i,b}(u)\rangle =1] \\&=\frac{|\mathcal{U}|}{|\mathcal{W}|}\cdot \sum _{u\in \mathcal{U}}\Pr [\langle {\mathsf{A}},\mathsf{\hat{H}}_{i,b}(u)\rangle =1]\cdot \Pr [U=u] \\&=\frac{|\mathcal{U}|}{|\mathcal{W}|}\cdot \Pr [\langle {\mathsf{A}}',\mathsf{\hat{H}}_{i,b}\rangle =1] \end{aligned}$$

Combining the above with Eq. 2 we get:

$$\begin{aligned} |\Pr [\langle {\mathsf{A}}',\mathsf{\hat{H}}_{i,0}\rangle =1] - \Pr [\langle {\mathsf{A}}',\mathsf{\hat{H}}_{i,1}\rangle =1] | \ge \varepsilon ' /(\ell \cdot |\mathcal{U}|) \end{aligned}$$

Since by assumption \(\mathsf{\hat{H}}_{i,0}, \mathsf{\hat{H}}_{i,1}\) are \((s, \varepsilon )\)-indistinguishable and \({\mathsf{A}}'\) is of size \(s' + s_\mathcal{U}\) this shows that when \(s' = s - s_\mathcal{U}\) then \(\varepsilon ' \le \varepsilon \cdot \ell \cdot |\mathcal{U}|\) which proves the theorem.

3.1 Example: GSD on a Path

As an example, we consider the problem of generalized selective decryption (GSD) on a path graph with n edges, where n is a power of two.

Let \((\mathsf{{Enc}},\mathsf{{Dec}})\) be a symmetric encryption scheme with (probabilistic) \(\mathsf{{Enc}}:\mathcal{K}\times \mathcal{M}\rightarrow \mathcal{C}\) and \(\mathsf{{Dec}}:\mathcal{K}\times \mathcal{C}\rightarrow \mathcal{M}\). We assume \(\mathcal{K}\subseteq \mathcal{M}\) so that we can encrypt keys, and that the encryption scheme is \((s,\delta )\)-indistinguishable under chosen-plaintext attack. In the game, the challenger—either \(\mathsf{G}_\mathsf{L}\) or \(\mathsf{G}_\mathsf{R}\)—picks \(n+1\) random keys \(k_0,\ldots ,k_n\in \mathcal{K}\), and the adversary \({\mathsf{A}}\) is then allowed to make two types of queries:

  • Encryption queries, \(({\mathtt {encrypt}},v_i,v_j)\): it receives back \(\mathsf{{Enc}}(k_i,k_j)\).

  • Challenge query, \(({\mathtt {challenge}},v_{i^*})\): here the answer differs between \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\), with \(\mathsf{G}_\mathsf{L}\) answering with \(k_{i^*}\) (real key) and \(\mathsf{G}_\mathsf{R}\) answering with \(r\leftarrow \mathcal{K}\) (random, “fake” key).

\({\mathsf{A}}\) cannot ask arbitrary queries: it is restricted to encryption queries that form a path graph with the challenge query at its sink. That is, a valid attacker \({\mathsf{A}}\) is allowed exactly n encryption queries \(({\mathtt {encrypt}},v_{i_t},v_{j_t})\), for \(t=1,\ldots ,n\), and a single \(({\mathtt {challenge}},v_{i^*})\) query, such that the directed graph \(\text {G}=(\mathcal{V},\mathcal{E})\) with \(\mathcal{V}=\{v_0,\ldots ,v_n\}\) and \(\mathcal{E}=\{(v_{i_1},v_{j_1}),\ldots ,(v_{i_n},v_{j_n})\}\) forms a path with sink \(v_{i^*}\).
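To make the game concrete, the following minimal Python sketch implements the two challengers and the validity check. All identifiers are ours, `enc` stands for an arbitrary \((s,\delta)\)-CPA-secure encryption algorithm, 16-byte keys are a toy choice, and (for readability) validity is only checked when the challenge query arrives:

```python
import secrets

def is_path_with_sink(edges, sink):
    """Check that the queried edges form a single directed path ending at `sink`."""
    succ = dict(edges)
    if len(succ) != len(edges):                # out-degree at most one
        return False
    targets = set(succ.values())
    sources = [u for u in succ if u not in targets]
    if len(sources) != 1:                      # exactly one start vertex
        return False
    u, steps = sources[0], 0
    while u in succ:                           # walk down the path
        u, steps = succ[u], steps + 1
    return u == sink and steps == len(edges)   # rules out detached cycles

class GSDPathChallenger:
    """G_L (real=True) or G_R (real=False) for GSD on a path with n edges."""
    def __init__(self, n, enc, real=True):
        self.keys = [secrets.token_bytes(16) for _ in range(n + 1)]
        self.enc, self.real, self.edges = enc, real, []

    def encrypt(self, i, j):                   # query (encrypt, v_i, v_j)
        self.edges.append((i, j))
        return self.enc(self.keys[i], self.keys[j])

    def challenge(self, i_star):               # query (challenge, v_{i*})
        assert is_path_with_sink(self.edges, i_star)
        return self.keys[i_star] if self.real else secrets.token_bytes(16)
```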

Fully Selective Hybrids. Let’s look at a naïve sequence of intermediate hybrids \( \mathsf{H}_0,\ldots , \mathsf{H}_{2n-1}\). The fully selective challenger \( \mathsf{H}_I\) receives as commitment the exact permutation \(\sigma \) that \({\mathsf{A}}\) will query—i.e., \(v_{\sigma (i)}\) is the ith vertex on the path. Therefore, \(\mathcal{W}=S_{n+1}\) (the symmetric group over \(\{0,\ldots ,n\}\)) and \(g\) is the function that extracts the queried permutation from the transcript. Next, \( \mathsf{H}_I\) samples \(2(n+1)\) keys \(k_0,\ldots ,k_n,r_0,\ldots ,r_n\), and when \({\mathsf{A}}\) makes a query \(({\mathtt {encrypt}},v_{\sigma (i)},v_{\sigma (i+1)})\), it returns

for \(0\le I\le n\):

$$\begin{aligned} \begin{array}{clr} \mathsf{{Enc}}(k_{\sigma (i)},r_{\sigma (i+1)})\ & \mathbf{if}\ (0\le i< I) &\ \text { (Fake edge)}\\ \mathsf{{Enc}}(k_{\sigma (i)},k_{\sigma (i+1)})\ & \mathbf{otherwise}. &\ \text { (Real edge)} \end{array} \end{aligned}$$

for \(n< I\le 2n-1\):

$$\begin{aligned} \begin{array}{clr} \mathsf{{Enc}}(k_{\sigma (i)},r_{\sigma (i+1)})\ & \mathbf{if}\ (0\le i< 2n-1-I)\vee (i=n-1) &\ \text { (Fake edge)}\\ \mathsf{{Enc}}(k_{\sigma (i)},k_{\sigma (i+1)})\ & \mathbf{otherwise}. &\ \text { (Real edge)} \end{array} \end{aligned}$$
(3)

Thus, in the sequence \( \mathsf{H}_0,\ldots , \mathsf{H}_{2n-1}\), edges are “faked” sequentially down the path and then “restored”, except for the last edge, in the reverse order up the path—see Fig. 1a. By definition, \( \mathsf{H}_0=\mathsf{G}_\mathsf{L}\) and \( \mathsf{H}_{2n-1}=\mathsf{G}_\mathsf{R}\). Moreover, \( \mathsf{H}_I\) and \( \mathsf{H}_{I+1}\) can be shown \((s,\delta )\)-indistinguishable: the two hybrids differ only on the edge \((v_{\sigma (i')},v_{\sigma (i'+1)})\), where \(i'=I\) in the faking pass \((I< n)\) and \(i'=2n-2-I\) in the restoring pass \((I\ge n)\). When \({\mathsf{A}}\) queries \(({\mathtt {encrypt}}, v_{\sigma (i')},v_{\sigma (i'+1)})\), the reduction \(\mathsf{R}_I\) returns the challenge ciphertext

$$\begin{aligned} \begin{array}{clr} \mathsf {C}(\cdot ,k_{\sigma (i'+1)},r_{\sigma (i'+1)})\ & \mathbf{if}\ (I< n) &\ \text { (Real to fake)}\\ \mathsf {C}(\cdot ,r_{\sigma (i'+1)},k_{\sigma (i'+1)})\ & \mathbf{otherwise}. &\ \text { (Fake to real)} \end{array} \end{aligned}$$
(4)

For the rest of the queries, \(\mathsf{R}_I\) works as prescribed in Eq. 3.Footnote 10 It is easy to see that \(\mathsf{R}_I\) simulates \( \mathsf{H}_I\) when the ciphertext corresponds to the first message, and \( \mathsf{H}_{I+1}\) otherwise. By Theorem 1, \((s-n\cdot s_\mathsf{{Enc}},\delta \cdot (2n-1)\cdot (n+1)!)\)-indistinguishability of \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) follows, where \(s_\mathsf{{Enc}}\) is the complexity of the \(\mathsf{{Enc}}\) algorithm, \(2n-1\) is the number of adjacent hybrid pairs, and the \((n+1)!\) factor is the size of the set \(\mathcal{W}=S_{n+1}\).
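Under this indexing, the faking/restoring pattern of Fig. 1a can be reproduced mechanically. The following sketch (our reading of Eq. (3); identifiers ours) lists the fake edges of every fully selective hybrid and checks the boundary cases:

```python
def fake_edges(I, n):
    """Indices of fake edges in hybrid H_I; edge i is (v_sigma(i), v_sigma(i+1))."""
    if I <= n:                        # faking pass: edges 0, ..., I-1 are fake
        return {i for i in range(n) if i < I}
    # restoring pass: keep the last edge fake, restore the others from the right
    return {i for i in range(n) if i < 2 * n - 1 - I or i == n - 1}

n = 8
assert fake_edges(0, n) == set()                # H_0 = G_L: all edges real
assert fake_edges(n, n) == set(range(n))        # middle hybrid: all edges fake
assert fake_edges(2 * n - 1, n) == {n - 1}      # H_{2n-1} = G_R: only last edge fake
for I in range(2 * n - 1):                      # neighbours differ in one edge
    assert len(fake_edges(I, n) ^ fake_edges(I + 1, n)) == 1
```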

Partially Selective Hybrids. In order to simulate according to the strategy just described, it suffices for the hybrid (as well as the reduction) to guess the edges that are faked—however, this number can be as large as n (e.g., in the middle hybrids), so the simulator ends up guessing essentially the whole path anyway. Intuitively, this is where the overall looseness of the bound stems from. Now, consider the alternative sequence of hybrids \(\tilde{\mathsf{H}}_0,\ldots ,\tilde{\mathsf{H}}_{27}\) given in Fig. 1b (drawn for \(n=8\)): the edges in this sequence are faked and restored, one at a time, in a recursive manner that ensures that at most four edges are fake in any hybrid. In particular, the new hybrid \(\tilde{\mathsf{H}}_I\) fakes exactly the edges that belong to a set \(\mathcal {P}_I\subseteq \mathcal{E}\). That is, when \({\mathsf{A}}\) makes a query \(({\mathtt {encrypt}},v_i,v_j)\), instead of following Eq. 3, \(\tilde{\mathsf{H}}_I\) returns

$$\begin{aligned} \begin{array}{clr} \mathsf{{Enc}}(k_i,r_j)\ & \mathbf{if}\ ((v_i,v_j)\in \mathcal {P}_I) &\ \text { (Fake edge)}\\ \mathsf{{Enc}}(k_i,k_j)\ & \mathbf{otherwise}. &\ \text { (Real edge)} \end{array} \end{aligned}$$
(5)

This strategy can be extended to arbitrary n, and there exists such a sequence of sets \(\mathcal {P}_0,\ldots ,\mathcal {P}_{3^{\log {n}}}\) in which every set has size at most \(\log {n}+1\).Footnote 11
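One concrete way to obtain such a sequence is the standard recursive strategy for reversible pebbling of a path: to flip the pebble on the last edge of a segment, recursively flip the pebble on the middle edge, recurse on the right half, and then flip the middle edge back. A minimal Python sketch, under our assumptions (it need not be identical to the construction in the full version):

```python
def toggles(lo, hi):
    """Moves flipping the pebble on edge `hi`, assuming edge lo-1 is pebbled
    (or lo == 1); each move toggles one edge, touching only edges in [lo, hi]."""
    if lo == hi:
        return [hi]
    mid = (lo + hi) // 2
    left = toggles(lo, mid)                  # pebble the middle edge
    right = toggles(mid + 1, hi)             # pebble edge hi (enabled by mid)
    return left + right + left[::-1]         # unpebble the middle edge again

def pebbling_sets(n):
    """The sets P_0, ..., P_ell of fake edges for a path with n edges."""
    sets, cur = [set()], set()
    for e in toggles(1, n):
        cur ^= {e}                           # apply one move
        sets.append(set(cur))
    return sets

sets = pebbling_sets(8)                      # n = 8: 3^3 = 27 moves
assert len(sets) == 28 and sets[-1] == {8}   # ends with only the last edge fake
assert max(map(len, sets)) == 4              # at most log n + 1 fake edges
```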

Algorithm 1. The partially selective challenger \(\mathsf{\hat{H}}_{I,b}\) for GSD on a path (pseudocode not reproduced here).

Next, we show that the above simulation strategy satisfies the requirements for applying Theorem 2. Firstly, as shown in Algorithm 1, the strategy is partially selective—i.e., \(\tilde{\mathsf{H}}_{I+b}=\mathsf{SEL}_{\mathcal{U}\rightarrow \mathcal{W}}[\mathsf{\hat{H}}_{I,b}, g, h_I]\), where, for \(I\in [0,\ell =3^{\log n}]\), the function \(h_I:S_{n+1}\rightarrow \mathcal{E}^{\log {n}+1}\) computes \(\mathcal {P}_I\). Secondly, as the simulations in \(\mathsf{\hat{H}}_{I,0}\) and \(\mathsf{\hat{H}}_{I,1}\) differ in exactly one edge—which is real in one and fake in the other—they can be shown to be \((s,\delta )\)-indistinguishable. To be precise, let \(\{(v_{i^*},v_{j^*})\}:=\mathcal {P}_I\triangle \mathcal {P}_{I+1}\), where \(\triangle \) denotes the symmetric difference; when \({\mathsf{A}}\) queries for \(({\mathtt {encrypt}}, v_{i^*},v_{j^*})\), the reduction \(\tilde{\mathsf{R}}_I\) returns

$$\begin{aligned} \begin{array}{clr} \mathsf {C}(\cdot ,k_{j^*},r_{j^*})\ & \mathbf{if}\ (\mathcal {P}_I\subset \mathcal {P}_{I+1}) &\ \text { (Real to fake)}\\ \mathsf {C}(\cdot ,r_{j^*},k_{j^*})\ & \mathbf{otherwise}. &\ \text { (Fake to real)} \end{array} \end{aligned}$$
(6)

with the rest of the queries answered as in Eq. 5.

Although the number of hybrids is greater than in the previous sequence, the number of fake edges in any hybrid is at most \(\log {n}+1\), so the reduction can work with far less information than before. By Theorem 2, \((s-n\cdot s_{\mathsf{{Enc}}}-s_{\mathcal {P}},\delta \cdot 3^{\log {n}}\cdot n^{2(\log {n}+1)})\)-indistinguishability of \(\mathsf{G}_\mathsf{L}\) and \(\mathsf{G}_\mathsf{R}\) follows, where \(s_{\mathcal {P}}\) is the size of the algorithm that generates the set \(\mathcal {P}=\{\mathcal {P}_0,\dots ,\mathcal {P}_\ell \}\), and the \(n^{2(\log {n}+1)}\) factor bounds the size of the compressed commitment space \(\mathcal{U}=\mathcal{E}^{\log {n}+1}\). Thus, the bound improves considerably, from exponential to quasi-polynomial. A more formal treatment is given in the full version [15].
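For intuition about the gap between the two bounds, here is a quick numerical comparison (the script and the choice n = 64 are ours):

```python
import math

n, logn = 64, 6
fully = (2 * n - 1) * math.factorial(n + 1)     # (2n-1) * (n+1)!  ~ 1e93
partially = 3 ** logn * n ** (2 * (logn + 1))   # 3^{log n} * n^{2(log n+1)} ~ 1e28
print(f"fully selective loss:     {fully:.1e}")
print(f"partially selective loss: {partially:.1e}")
```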

4 Adaptive Secret Sharing for Monotone Circuits

There have been many formulations of secret sharing schemes over the years, each providing a different notion of correctness or security. We focus here on the computational setting and adapt the definitions of [21] for our purposes. Rogaway and Bellare [25] survey many of the different definitions, and we refer the reader there for more information.

A computational secret sharing scheme involves a dealer who has a secret, a set of n parties, and a collection M of “qualified” subsets of parties called the access structure.

Definition 4

(Access structure). An access structure M on parties [n] is a monotone set of subsets of [n]. That is, \(M \subseteq 2^{[n]}\) and for all \(X\in M\) and \(X\subseteq X'\) it holds that \(X'\in M\).

We sometimes think of M as a characteristic function \(M:2^{[n]}\rightarrow \{0,1\}\) that outputs 1 on input X if and only if X is in the access structure. Here, we mostly consider access structures that can be described by a monotone Boolean circuit: a directed acyclic graph (DAG) in which every leaf is labeled by an input variable and every internal node is labeled by an OR or AND operation. We assume that the circuit has fan-in \({k_{\mathsf {in}}}\) and fan-out (at most) \({k_{\mathsf {out}}}\). The computation proceeds in the natural way from the leaves to the root, whose value is the output of the computation. A circuit in which every gate has fan-out \({k_{\mathsf {out}}}= 1\) is called a formula.
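For illustration (the representation below is ours, not the paper's formalism), a monotone circuit and its characteristic function can be sketched in a few lines of Python:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Gate:
    op: str                      # "AND" or "OR"
    children: List["Node"]

Node = Union[int, Gate]          # a leaf is simply the index of a party

def M(node: Node, X: set) -> bool:
    """Characteristic function: M(X) = 1 iff X is qualified."""
    if isinstance(node, int):
        return node in X
    results = [M(c, X) for c in node.children]
    return all(results) if node.op == "AND" else any(results)

# Example: parties {1,2} together, or party 3 alone, are qualified.
root = Gate("OR", [Gate("AND", [1, 2]), 3])
assert M(root, {1, 2}) and M(root, {3}) and not M(root, {1})
```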

A secret sharing scheme for M is a method by which the dealer efficiently distributes shares to the parties such that (1) any subset in M can efficiently reconstruct the secret from its shares, and (2) any subset not in M cannot efficiently recover any partial information about the secret. We denote by \(\varPi _i\) the share of party i and by \(\varPi _X\) the joint shares of the parties in \(X\subseteq [n]\).

Definition 5

(Secret sharing). Let \(M:2^{[n]}\rightarrow \{0,1\}\) be an access structure. A secret sharing scheme for M with secret space \(\mathcal S\) consists of efficient sharing and reconstruction procedures \(\mathsf {S}\) and \(\mathsf {R}\), respectively, that satisfy the following requirements:

  1. \(\mathsf {S}(1^\lambda ,n,S)\) gets as input the unary representation of a security parameter, the number of parties, and a secret \(S\in \mathcal S\), and generates a share for each party.

  2. \(\mathsf {R}(1^\lambda ,\varPi _X)\) gets as input the unary representation of a security parameter and the shares of a subset of parties X, and outputs a string \(S'\).

  3. Completeness: For a qualified set \(X \in M\) the reconstruction procedure \(\mathsf {R}\) outputs the shared secret:

    $$\begin{aligned} \Pr \left[ \mathsf {R}(1^\lambda ,\varPi _X) = S\right] = 1, \end{aligned}$$

    where the probability is over the randomness of the sharing procedure \(\varPi _1,\dots ,\varPi _n\leftarrow \mathsf {S}(1^\lambda , n, S)\).

  4. Adaptive security: For every adversary \({\mathsf{A}}\) of size s it holds that

    $$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1] | \le \epsilon , \end{aligned}$$

    where the challenger \(\mathsf{G}_b\) is defined as follows:

    (a) The adversary \({\mathsf{A}}\) specifies a secret \(S\in \mathcal S\).

      i. If \(b = 0\): the challenger generates shares \(\varPi _1,\dots ,\varPi _n\leftarrow \mathsf {S}(1^\lambda , n, S)\).

      ii. If \(b = 1\): the challenger samples a random \(S'\in \mathcal S\) and generates shares \(\varPi _1,\dots ,\varPi _n\leftarrow \mathsf {S}(1^\lambda , n, S')\).

    (b) The adversary adaptively specifies an index \(i\in [n]\), and if the set of parties he requested so far is unqualified, he gets back \(\varPi _i\), the share of the i-th party.

    (c) Finally, the adversary outputs a bit \(b'\), which is the output of the experiment.

The selective security variant is obtained by changing item (b) in the definition of the challenger \(\mathsf{G}_b\): the adversary must commit ahead of time to the set X of parties whose shares he wants to see, before seeing any share. We denote this challenger by \( \mathsf{H}_b = \mathsf{SEL}_{2^{[n]}}[\mathsf{G}_b, X]\).
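In code, the adaptive challenger \(\mathsf{G}_b\) could look as follows. This is a minimal sketch: the adversary interface and the convention that the requested set must stay unqualified after adding the new index are our assumptions.

```python
import secrets

def adaptive_game(adv, share, M, n, b, lam=128):
    """The challenger G_b: share either the adversary's secret (b = 0)
    or a fresh random one (b = 1), then answer adaptive share queries."""
    S = adv.choose_secret()
    if b == 1:
        S = secrets.token_bytes(lam // 8)          # random secret S'
    shares = share(lam, n, S)                      # Pi_1, ..., Pi_n
    X = set()
    while (i := adv.next_index()) is not None:     # adaptive choices
        if not M(X | {i}):                         # set must stay unqualified
            X.add(i)
            adv.receive_share(i, shares[i])
    return adv.output_bit()                        # the experiment's output
```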

4.1 The Scheme of Yao

Here we describe the scheme of Yao (mentioned in [1]; see also Vinod et al. [28]). The access structure M is given by a monotone Boolean circuit composed of AND and OR gates with fan-in \({k_{\mathsf {in}}}\) and fan-out (at most) \({k_{\mathsf {out}}}\). Each leaf in the circuit is associated with one of the input variables \(x_1,\dots ,x_n\) (several leaves may correspond to the same variable). During the sharing process, each wire in the circuit is assigned a label, and the share of party \(i\in [n]\) consists of the labels of the wires corresponding to the input variable \(x_i\). The sharing is done from the output wire to the leaves. The reconstruction is done in reverse: using the shares of the parties (which correspond to labels of the input wires), we recover the label of the output wire, which corresponds to the secret.

The scheme \((\mathsf {S},\mathsf {R})\) uses a symmetric-key encryption scheme \(\mathsf {SKE}=(\mathsf {Enc},\mathsf {Dec})\) in which keys are uniformly random strings in \(\{0,1\}^\lambda \) and which is \(\epsilon \)-secure: no polynomial-time adversary can distinguish an encryption of \(m_1\in \{0,1\}^\lambda \) from an encryption of \(m_2\in \{0,1\}^\lambda \) with advantage larger than \(\epsilon \). The sharing procedure \(\mathsf {S}\) is described in Fig. 4.

Fig. 4. Yao’s secret sharing scheme \((\mathsf {S},\mathsf {R})\) for an access structure M described by a monotone Boolean circuit. (Figure not reproduced.)

The reconstruction procedure \(\mathsf {R}\) of the scheme essentially applies the reverse operations, from the leaves of the circuit to the root. Given the labels of the input wires of an AND gate g, we recover the key associated with g by XORing the labels of the input wires, and then recover the labels of the output wires by decrypting the corresponding ciphertexts. Given the labels of the input wires of an OR gate g, we recover the key associated with g by setting it to be the label of any input wire, and then recover the labels of the output wires by decrypting the corresponding ciphertexts. The label of the output wire of the root gate is the recovered secret.
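The following sketch implements the scheme for monotone formulas (fan-out \(k_{\mathsf{out}}=1\)), which keeps the bookkeeping minimal. The hash-based toy encryption and all identifiers are ours (it merely stands in for \(\mathsf{SKE}\)), and the ciphertexts are kept in a public table rather than folded into the shares:

```python
import hashlib, secrets
from collections import deque

LAM = 16                                        # label length in bytes

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

def xor_all(bs):
    out = bs[0]
    for b in bs[1:]:
        out = xor(out, b)
    return out

def enc(key, msg):                              # toy stand-in for SKE.Enc
    nonce = secrets.token_bytes(LAM)
    return nonce, xor(hashlib.sha256(key + nonce).digest()[:LAM], msg)

def dec(key, ct):
    nonce, body = ct
    return xor(hashlib.sha256(key + nonce).digest()[:LAM], body)

def share(node, label, shares, cts):
    """Assign `label` to the output wire of `node`, top-down. A leaf is a
    party index; a gate is ('AND'|'OR', [children])."""
    if isinstance(node, int):
        shares.setdefault(node, deque()).append(label)
        return
    op, children = node
    k = secrets.token_bytes(LAM)                # key associated with this gate
    cts[id(node)] = enc(k, label)               # ciphertext for its output wire
    if op == "AND":                             # input labels XOR to the key
        labels = [secrets.token_bytes(LAM) for _ in children[:-1]]
        labels.append(xor_all([k] + labels))
    else:                                       # OR: each input label is the key
        labels = [k] * len(children)
    for child, lab in zip(children, labels):
        share(child, lab, shares, cts)

def reconstruct(node, X, shares, cts):
    """Recover the output-wire label of `node`, or None if X cannot."""
    if isinstance(node, int):
        return shares[node].popleft() if node in X else None
    op, children = node
    labels = [reconstruct(c, X, shares, cts) for c in children]
    if op == "AND":
        if any(l is None for l in labels):
            return None
        k = xor_all(labels)
    else:
        k = next((l for l in labels if l is not None), None)
        if k is None:
            return None
    return dec(k, cts[id(node)])

# M = (1 AND 2) OR 3
root = ("OR", [("AND", [1, 2]), 3])
S = secrets.token_bytes(LAM)
shares, cts = {}, {}
share(root, S, shares, cts)
assert reconstruct(root, {1, 2}, shares, cts) == S
```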

The scheme is efficient in the sense that the share size of each party is bounded by \({k_{\mathsf {out}}}\cdot \lambda \cdot s\), where s is the number of gates in the circuit. So, if the circuit is of polynomial-size (in n), then the share size is also polynomial (in n and in the security parameter).

Correctness of the scheme follows by induction on the depth of the circuit, and we omit further details here. Vinod et al. [28] proved that this schemeFootnote 12 is selectively secure via a sequence of roughly s hybrids, where s is the number of gates in the circuit representation of M. By the basic random guessing lemma (Lemma 1), the scheme is also adaptively secure, but the security loss is exponential in the number of parties whose shares the adversary requests to see. The latter can be as large as n, so for the scheme to be adaptively secure we need the encryption scheme to be exponentially secure.

Theorem 3

[28]. Assume that \(\mathsf {SKE}\) is an \(\epsilon \)-secure symmetric-key encryption scheme. Then, for any polynomial-time adversary \({\mathsf{A}}\) and any access structure on n parties described by a monotone circuit with s gates, it holds that

$$\begin{aligned} |\Pr [\langle {\mathsf{A}}, \mathsf{H}_0\rangle =1] - \Pr [\langle {\mathsf{A}}, \mathsf{H}_1\rangle =1] | \le {k_{\mathsf {out}}}\cdot s \cdot \epsilon , \end{aligned}$$

and (using Lemma 1),

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1] | \le 2^n \cdot {k_{\mathsf {out}}}\cdot s \cdot \epsilon . \end{aligned}$$

In the following subsection we prove that the scheme is adaptively secure with a security loss of roughly \(2^{d\cdot \log s}\), where d and s are the depth and the number of gates, respectively, of the circuit representing the access structure.

Theorem 4

Assume that \(\mathsf {SKE}\) is \(\epsilon \)-secure. Then, for any polynomial-time adversary \({\mathsf{A}}\) and any access structure on n parties described by a monotone circuit of depth d and s gates with fan-in \({k_{\mathsf {in}}}\) and fan-out \({k_{\mathsf {out}}}\), it holds that

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1] | \le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 2)} \cdot (2{k_{\mathsf {in}}})^{2d}\cdot {k_{\mathsf {out}}}\cdot \epsilon . \end{aligned}$$

4.2 Hybrids and Pebbling Configurations

To prove Theorem 4 we rely on the framework introduced in Theorem 2, which we briefly recall here. Our goal is to prove that an adversary cannot distinguish the challengers \(\mathsf{G}_\mathsf{L}= \mathsf{G}_0\) and \(\mathsf{G}_\mathsf{R}=\mathsf{G}_1\) corresponding to the adaptive game. We define the selective versions of the games, \( \mathsf{H}_\mathsf{L}= \mathsf{SEL}_{2^{[n]}}[\mathsf{G}_\mathsf{L}, X]\) and \( \mathsf{H}_\mathsf{R}= \mathsf{SEL}_{2^{[n]}}[\mathsf{G}_\mathsf{R}, X]\), in which the adversary has to commit ahead of time to the whole set of parties whose shares it wishes to see. We construct a sequence of \(\ell \) selective hybrid games \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0, \mathsf{H}_1, \dots , \mathsf{H}_{\ell -1} , \mathsf{H}_\ell = \mathsf{H}_\mathsf{R}\). For each \( \mathsf{H}_i\) we define two partially selective games \(\mathsf{\hat{H}}_{i,0}\) and \(\mathsf{\hat{H}}_{i,1}\) and show that for every \(i\in \{0,\dots ,\ell -1\}\) there exists a mapping \(h_i\) such that the games \( \mathsf{H}_{i+b}\) and \(\mathsf{\hat{H}}_{i,b}\) (for \(b\in \{0,1\}\)) are equivalent up to the encoding of the inputs to the games (given by \(h_i\)). Then, we can apply Theorem 2 and obtain our result.

The Fully-Selective Hybrids. The sequence of fully selective hybrids \( \mathsf{H}_\mathsf{L}= \mathsf{H}_0, \mathsf{H}_1, \dots , \mathsf{H}_{\ell -1} , \mathsf{H}_\ell = \mathsf{H}_\mathsf{R}\) is defined such that each experiment is associated with a pebbling configuration, in which each gate is either pebbled or unpebbled. A configuration is described by a compressed string that lists the gates which have a pebble on them (the remaining gates implicitly do not). We will define the possible pebbling configurations later, but for now let us denote by Q the number of possible pebbling configurations.

We define, for every \(j\in [Q]\), a hybrid experiment \( \mathsf{H}_j\) in which the adversary first commits to the set X of parties whose shares it wishes to see, and then the challenger executes a new sharing procedure \(\mathsf {S}^j\) that depends on the j-th pebbling configuration. Roughly, this sharing procedure acts exactly as the original sharing procedure \(\mathsf {S}\), except that whenever it encounters a gate with a pebble it generates bogus ciphertexts rather than real ones. This sharing procedure is described in Fig. 5.

Fig. 5. The sharing procedure \(\mathsf {S}^j\) for an access structure M, described by a monotone Boolean circuit, and the j-th pebbling configuration, which encodes which gates carry a pebble. (Figure not reproduced.)

Observe that the hybrid corresponding to the configuration in which all gates are unpebbled is identical to the experiment \( \mathsf{H}_\mathsf{L}\), and the configuration in which only the root gate carries a pebble corresponds to the experiment \( \mathsf{H}_\mathsf{R}\).

Pebbling Rules and Strategies. The rules of the pebbling game depend on the subset of parties whose shares the adversary sees. The rules are:

  1. Can place or remove a pebble on any AND gate for which (at least) one input wire is either not in X or comes out of a gate with a pebble on it.

  2. Can place or remove a pebble on any OR gate for which all of the incoming wires are either input wires not in X or come out of gates all of which have pebbles on them.

Our goal is to find a sequence of pebbling rules so that, starting from the initial configuration (in which there are no pebbles at all), we end up with a pebbling configuration in which only the root has a pebble. Jumping ahead, we would like the sequence of pebbling rules to have the property that each intermediate configuration is as short to describe as possible (i.e., we want to minimize Q). One way to achieve this is to keep, at every point along the way, as few pebbles as possible. An even more succinct representation can be obtained if we allow many pebbles but have a way to represent their locations succinctly. This is what we achieve in the following lemma, after a small sketch of the two rules below.
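The two rules are easy to state in code; a minimal sketch (the wire representation is ours):

```python
from dataclasses import dataclass
from typing import List, Tuple

Wire = Tuple[str, object]        # ("leaf", party index) or ("gate", gate name)

@dataclass
class PebbleGate:
    op: str                      # "AND" or "OR"
    inputs: List[Wire]

def can_toggle(g: PebbleGate, X: set, pebbled: set) -> bool:
    """May we place/remove a pebble on g, given the corrupted set X and the
    names of the currently pebbled gates?"""
    def unavailable(w):          # the wire carries no information for X
        kind, v = w
        return (v not in X) if kind == "leaf" else (v in pebbled)
    flags = [unavailable(w) for w in g.inputs]
    return any(flags) if g.op == "AND" else all(flags)
```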

Lemma 2

For every subset of parties X and any monotone circuit of depth d, fan-in \({k_{\mathsf {in}}}\), and s gates, there exists a sequence of \((2{k_{\mathsf {in}}})^{2d}\) pebbling rules such that every pebbling configuration can be uniquely described by at most \(d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1)\) bits.

Proof

A pebbling configuration is described by a list of pairs (gate name, counter), where the counter is a number between 1 and \({k_{\mathsf {in}}}\), together with one additional bit b that specifies whether the root gate has a pebble or not. The counter of a gate represents the number of its predecessors, ordered from left to right, that currently have a pebble on them. Any such encoding uniquely defines a pebbling configuration (but notice that the converse is not true: not every configuration has such an encoding).

Denote by \(T_X(d)\) the number of pebbling rules needed (i.e., the length of the sequence) and by \(P_X(d)\) the maximum size of the description of a pebbling configuration during the sequence. The sequence of pebbling rules is defined via a procedure that is recursive in the depth d. We first pebble each of the \({k_{\mathsf {in}}}\) predecessors of the root from left to right and add a pair (root gate, counter) to the configuration; after we finish pebbling each predecessor we increase the counter by 1 to keep track of how many predecessors have been pebbled. Pebbling all predecessors takes \({k_{\mathsf {in}}}\cdot T_X(d-1)\) pebbling rules, and the maximal size of a configuration is at most \(P_X(d-1) + (\log s + \log {k_{\mathsf {in}}}+ 1)\). The \(\log s\) term comes from specifying the name of the root gate, the \(\log {k_{\mathsf {in}}}\) term comes from the counter of pebbled predecessors of the root gate, and the single bit signals whether the root gate is pebbled or not.

After this recursive pebbling, each predecessor has a pebble only on its root gate, and the root (of the depth-d circuit) has no pebble. Now, we need to remove the pebble from the root of every predecessor of the root gate and put a pebble on the root gate itself. For the latter we apply one pebbling rule and put a pebble on the root gate. To remove the pebbles from the predecessors of the root gate we reverse the recursive pebbling procedure (by “unpebbling” from right to left and updating the counter appropriately), which takes an additional \({k_{\mathsf {in}}}\cdot T_X(d-1)\) pebbling rules. When we finish unpebbling, since the root has no predecessors with pebbles, we remove from the description of the configuration the pair corresponding to the root gate. Thus, we get that the maximum size of a pebbling configuration at any point in time is

$$\begin{aligned} P_X(d) \le P_X(d-1) + (\log s + \log {k_{\mathsf {in}}}+ 1) \Rightarrow P_X(d) \le d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1). \end{aligned}$$

The total number of pebbling rules we apply is

$$\begin{aligned} T_X(d) \le 2{k_{\mathsf {in}}}\cdot T_X(d-1)+1 \Rightarrow T_X(d) \le (2{k_{\mathsf {in}}})^{2 d}. \end{aligned}$$

This completes the proof of the lemma.
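A sketch of the recursion for a complete \(k_{\mathsf{in}}\)-ary tree (identifiers ours; leaves stand for input wires unavailable to X, so they can always be pebbled, and we only check the counting, not the X-dependent validity of individual moves):

```python
def pebbling_moves(gate):
    """Moves (gate-name toggles) taking the subtree of `gate` from no pebbles
    to a single pebble on `gate`; `gate` is (name, [predecessors])."""
    name, preds = gate
    forward = []
    for p in preds:                      # pebble predecessors left to right
        forward += pebbling_moves(p)
    # pebble the root, then unpebble the predecessors right to left
    return forward + [name] + forward[::-1]

def tree(depth, kin, name=(0,)):
    """A complete kin-ary tree of the given depth, gates named by their path."""
    if depth == 0:
        return (name, [])
    return (name, [tree(depth - 1, kin, name + (i,)) for i in range(kin)])

kin, d = 2, 3
moves = pebbling_moves(tree(d, kin))
assert len(moves) <= (2 * kin) ** (2 * d)       # T_X(d) <= (2 k_in)^{2d}

cfg, most = set(), 0
for g in moves:
    cfg ^= {g}                                  # apply one move
    most = max(most, len(cfg))
assert cfg == {(0,)} and most <= kin * d + 1    # many pebbles, short description
```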

Recall that we denote by Q the number of possible pebbling configurations. Using the pebbling strategy from Lemma 2, we get that

$$\begin{aligned} Q\le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1)}. \end{aligned}$$

The Partially-Selective Hybrids. We define the partially selective hybrids \(\mathsf{\hat{H}}_{j,0}\) and \(\mathsf{\hat{H}}_{j,1}\) for every \( \mathsf{H}_j\), \(j\in [Q]\). In both hybrid games, the adversary first commits to the j-th pebbling configuration and to the next pebbling rule to apply. Denote by \(j'\in [Q]\) the index of the pebbling configuration that results from applying this rule to the j-th configuration. In \(\mathsf{\hat{H}}_{j,0}\) the challenger samples the shares from \(\mathsf {S}^j\), and in \(\mathsf{\hat{H}}_{j,1}\) the challenger samples the shares from \(\mathsf {S}^{j'}\) (other than this, the games are unchanged).

Denote by \(\mathcal{U}\) the space of messages that the adversary has to commit to in the partially selective hybrids \(\mathsf{\hat{H}}_{j,b}\). This space consists of all pairs of a pebbling configuration and a valid next pebbling rule. First, recall that there are Q possible pebbling configurations. Second, observe that a pebbling rule can be described by a gate name: applying a rule simply flips the pebble on that gate. For a circuit with s gates this requires an additional \(\log s\) bits. Thus, \(\mathcal{U}= \{ (i,g) \mid i \in [Q], g\in [s]\}\), and the size of \(\mathcal{U}\) is bounded by

$$\begin{aligned} |\mathcal{U}| \le Q\cdot s \le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1)}\cdot s. \end{aligned}$$

By the semantic security of the symmetric-key encryption scheme, and since we replace (at most) \({k_{\mathsf {out}}}\) ciphertexts with bogus ones, the games \(\mathsf{\hat{H}}_{j,0}\) and \(\mathsf{\hat{H}}_{j,1}\) are indistinguishable. The proof proceeds by planting the challenge ciphertext at the gate where the “next pebbling rule” is applied: in \(\mathsf{\hat{H}}_{j,0}\) it is a “real” ciphertext, while in \(\mathsf{\hat{H}}_{j,1}\) it is a bogus one.

Lemma 3

Assume that \(\mathsf {SKE}\) is \(\epsilon \)-secure. Then, for any polynomial-time adversary \({\mathsf{A}}\) and any access structure on n parties described by a monotone circuit, it holds that

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{\hat{H}}_{j,0}\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{\hat{H}}_{j,1}\rangle =1] | \le {k_{\mathsf {out}}}\cdot \epsilon . \end{aligned}$$

Applying Theorem 2 with the fact that \(\ell \le (2{k_{\mathsf {in}}})^{2d}\) and \(|\mathcal{U}| \le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1)}\cdot s\), we get that if \(\mathsf {SKE}\) is \(\epsilon \)-secure, then for any polynomial-time adversary \({\mathsf{A}}\) and any access structure on n parties described by a monotone circuit of depth d and s gates of fan-in \({k_{\mathsf {in}}}\) and fan-out \({k_{\mathsf {out}}}\), it holds that

$$\begin{aligned} |\Pr [\langle {\mathsf{A}},\mathsf{G}_0\rangle =1] - \Pr [\langle {\mathsf{A}},\mathsf{G}_1\rangle =1] |&\le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 1)}\cdot s \cdot (2{k_{\mathsf {in}}})^{2d} \cdot {k_{\mathsf {out}}}\cdot \epsilon \\&\le 2^{d\cdot (\log s + \log {k_{\mathsf {in}}}+ 2)} \cdot (2{k_{\mathsf {in}}})^{2d}\cdot {k_{\mathsf {out}}}\cdot \epsilon . \end{aligned}$$

5 Open Problems

In this work we presented a framework for proving adaptive security of various schemes, including secret sharing over access structures defined via monotone circuits, generalized selective decryption, constrained PRFs, and Yao’s garbled circuits. The most natural future direction is to find more applications where our framework yields adaptive security with a smaller loss than standard random guessing. Improving the security loss in our existing applications also remains open.

In all of our applications of the framework, the security loss of a scheme is captured by the existence of some pebbling strategy. Does there exist a connection in the opposite direction between the security loss of a scheme and possible pebbling strategies? That is, is it possible to use lower bounds for pebbling strategies to show that various security losses are necessary?