
1 Introduction

Zero-knowledge proofs have been an important primitive in the theory of cryptography since their introduction three decades ago. The classical applications of zero-knowledge proofs are numerous, including identification schemes, electronic voting, verifiable outsourced computation, and CCA-secure public-key encryption. The common denominator of all of these is that zk-proofs are used to prove simple statements, like “this ciphertext is well-formed” or “I know a valid signature key”. Although it was known that every NP statement could be proved in zero-knowledge [23], the cost of such general proofs was prohibitive, and more sophisticated applications of zk-proofs were completely impractical.

This situation has changed radically in the last few years with the introduction of pairing-based zk-SNARKs [25]. The key element of these arguments is that they are succinct; in fact, they are constant size, i.e. independent of the witness size, and thus very fast to verify. This is extremely powerful: in particular, a prover can show that it has correctly executed some large computation (expressed as a huge circuit) and a verifier will be convinced after doing only very few checks (e.g. computing 3 pairings in [26]). Besides their scientific interest, SNARKs have opened the door to new real-world privacy-preserving applications. Cryptocurrencies like Zcash [6] or Ethereum [36] are two of the most popular examples so far.

However, even the most efficient instantiations of pairing-based SNARKs [26, 28] have a few drawbacks. On the efficiency side, the main ones are a long common reference string and costly prover computation. On the security side, they are based on very strong hardness assumptions, and the setup is assumed to be trusted.

Recently, there have been significant research efforts to propose alternatives which overcome some of these drawbacks along several dimensions. For instance, numerous works study how to reduce the trust in the common reference string, exploring weaker models such as subversion resistant SNARKs [1, 4, 17], updateable common reference strings [27] or transparent setups [5]. Although SNARKs are unbeatable in some facets, different tradeoffs are compelling depending on the application scenario.

One of the most celebrated alternatives to SNARKs are the arguments of knowledge for Arithmetic Circuit Satisfiability of Bootle et al. [10] (and Bulletproofs, the improvement thereof by Bünz et al. [12]). Their dependence on weaker assumptions (the DLOG assumption, and the Random Oracle if one wants to remove interaction via Fiat-Shamir), the absence of a trusted setup and the logarithmic size of the proofs are some of their most attractive features. Unfortunately, verification time scales linearly, even when batching techniques are used. The motivation of this paper is to improve the cost of the verifier in the aforementioned works, while keeping most of their advantages.

1.1 Related Work

In [10], Bootle et al. proposed an interactive zero-knowledge argument at the heart of which lies a recursive argument for an inner product relation of committed values. The argument has very interesting properties, most notably that it is transparent. The communication complexity is \(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\) and the verification cost is \(\mathcal {O_\lambda }(\left| \mathcal {C}\right| )\), which is the main drawback of the scheme, since verifying is asymptotically as costly as evaluating the circuit. Prover complexity is asymptotically optimal (\(\mathcal {O_\lambda }(\left| \mathcal {C}\right| )\)) but heavily uses expensive public-key operations. Bünz et al. [12] improved the concrete efficiency of the aforementioned protocol by a constant factor.

The proof systems based on interactive proofs for Muggles [37,38,39,40] build on the delegation scheme of Goldwasser, Kalai, and Rothblum [24]. These are very efficient schemes for low-depth computation, whose verification and communication complexity depend on \(d\log W\), where d is the circuit depth and W its width, plus some communication overhead depending on the specific instantiation. Hyrax [37] is a DLOG-based transparent instantiation with an additional cost of \(\mathcal {O_\lambda }(\left| w\right| ^{\frac{1}{i}})\) for some i that can be fine-tuned. Recently, Libra [38] utilized and improved techniques from [14] to achieve an asymptotically optimal prover complexity and minimize public-key operations. All these schemes need either a per-circuit setup or are restricted to log-space uniform computations. Since they are inherently interactive, they rely on the Fiat-Shamir transform to yield non-interactive arguments.

Probabilistically Checkable Proof (PCP) based constructions [5, 7] originate from the works of Kilian [31] and Micali [33], and are based on Interactive Oracle Proofs [8], which generalize classical PCPs to the interactive setting. They rely on symmetric primitives, which results in transparent, plausibly post-quantum secure constructions. The main drawback is that they are still concretely inefficient, especially as far as prover complexity is concerned. In the same family, [2, 22, 30] build on the MPC-in-the-head paradigm [29] and share similar properties. The most efficient one is Ligero [2] which, while having good concrete efficiency, has communication complexity \(\mathcal {O_\lambda }(\sqrt{\left| \mathcal {C}\right| })\), which can be problematic for moderately large computations.

The line of work on Linear PCP constructions [16, 21, 26, 35], which originates from the seminal work of Gennaro et al. [21] and was abstracted in [9], is the most efficient when considering verification time and communication. Their proof size is constant and the verification cost is \(\mathcal {O_\lambda }(\left| x\right| )\), where x is the public input. Note that this is optimal, since the verifier has to, at least, read the statement to be proven. The main drawback is that they need a trusted setup.

To achieve a middle ground between efficiency and trust, Groth et al. [27] defined the Updateable model. In this model, everyone can non-interactively update the setup parameters. As long as one update is honest, soundness is guaranteed. The authors also presented an updateable scheme, but it has a universal common reference string of size quadratic in the maximal size of all supported circuits (although from the global setup a linear, circuit-specific string can be derived). Maller et al. presented Sonic [32], which improved this to a linear CRS by exploiting the reduction of [10]. Several works [15, 19, 20] have tried to improve the concrete efficiency of Sonic. However, all of these, including Sonic, are secure either in the Algebraic Group Model or under knowledge-type assumptions (apart from the Random Oracle Model). Recently, [13] used the techniques of the aforementioned results to construct a SNARK sound in groups of unknown order. When instantiated in class groups it achieves a transparent setup and asymptotically improves over STARKs [5] by a logarithmic factor.

1.2 Our Contribution

We construct a public-coin Argument of Knowledge in the Universal Updateable Model based on the work of Bootle et al. [10]. The verification complexity is \(\mathcal {O_\lambda }(\left| x\right| +\log \left| \mathcal {C}\right| )\) and the communication complexity is \(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\), where \(\left| x\right| \) is the public input size. The prover is linear in \(\left| \mathcal {C}\right| \) but, as in [10], it needs to perform many public-key operations. Our two constructions are secure under assumptions which reduce, respectively, to asymmetric DLOG and asymmetric q-DLOG. They can be made non-interactive with the Fiat-Shamir heuristic. Updating and verifying updates both take time \(\mathcal {O_\lambda }(\left| \mathcal {C}\right| )\), with communication complexity \(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\) (which can be reduced to \(\mathcal {O_\lambda }(\log \log \left| \mathcal {C}\right| )\)) and \(\mathcal {O_\lambda }(1)\), respectively.

As far as we know, all recently proposed efficient and fully-succinct updateable schemes [15, 20, 32] rely on the Algebraic Group Model [18] or other knowledge-type assumptions apart from the Random Oracle Model, while in our case the Random Oracle Model and a standard assumption are enough. However, the aforementioned schemes have better communication complexity (\(\mathcal {O_\lambda }(1)\)) and, while asymptotically their verifier has the same complexity as ours (\(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\)), it works mainly in the field while ours works in the group, which is less efficient. Also, while the prover complexity in [15, 20, 32] is quasi-linear in \(\left| \mathcal {C}\right| \) and ours is linear, their prover works mainly in the field. We report some concrete numbers in Table 1 for the overhead of each scheme (we do not include concrete numbers for the other schemes' communication and verification, since they are constant while ours are logarithmic in \(\left| \mathcal {C}\right| \)).

Finally, we observe that the major overhead in the general proof system is the delegation of (public) computation regarding the circuit structure, and so, for fixed languages that may be of interest, we can use the same techniques to achieve better efficiency. We demonstrate this by applying it to range proofs, improving on [12]. The main overhead compared to [12] is that we move to bilinear groups instead of standard ones, but we exponentially reduce the verification complexity.

Table 1. Comparison of the updateable SNARKs in terms of the most expensive operations (exponentiations and pairings). n is the number of multiplication gates, a is the number of addition gates, m is the number of wires in the circuit and M is a parameter, which determines the processed circuit’s fan-in and fan-out upper bound, and can be fine-tuned to balance the computations of the prover and verifier. \(n'\) is the size of the processed circuit, which in the worst case is upper bounded by \(n+\frac{2m}{M-1}\). Sonic empirically assumes \(n'=3n\) for \(M=3\) in its reported numbers rather than a worst case analysis. P refers to pairing operations and \(E_1\) to \(\mathbb {G}_1\) exponentiations. We omit constant factors. Our prover essentially only performs multi-exponentiations; we consider that we need \(k\ \mathbb {G}_1\) exponentiations to do a k-multi-exponentiation, but we note that they can be implemented with o(k) exponentiations, see e.g. [10]. In the assumptions columns, KT refers to Knowledge Type assumptions, AGM to the Algebraic Group Model, and A-DLOG, q-A-DLOG to variants of DLOG and q-DLOG in the asymmetric group setting. All schemes are interactive and can be turned into non-interactive ones in the Random Oracle Model.

1.3 Our Techniques

Distribution Parameterized Vector Commitments. We revisit the use of vector commitment schemes in zero-knowledge proof systems when working in groups: instead of using the classical Pedersen commitment key, which is uniformly sampled, we add some limited structure which simultaneously allows a more efficient representation of the key and efficient updateability. Combined with the properties of bilinear groups, a compressed version of the key is enough to let the verifier perform its verification tasks exponentially faster.

In particular we propose two instantiations:

  • The commitment key consisting of group encodings of all monomials of a secret x, i.e., \([1],[x],[x^2],\ldots ,[x^{n-1}]\).

  • The commitment key consisting of group encodings of all multilinear monomials of a secret \(x_1,\ldots ,x_\nu \) i.e. \([1],[x_1],[x_2],[x_1x_2], \ldots ,[x_1x_2\cdots x_\nu ]\).

The structure of both commitment keys allows one to update the parameters non-interactively, thus nullifying the trapdoors x or \(x_1,\ldots ,x_\nu \). We take advantage of this structure in bilinear groups to create compressed versions of these keys of size only \(\log n\). For various languages, this allows the verifier to verify statements with the help of the prover without reading the whole commitment key. This leads to exponentially faster verification of proofs with minimal overhead for the prover, at the price of moving to bilinear instead of plain DLOG groups.
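As a toy illustration of the first instantiation, the following Python sketch models a group encoding \([y]\) by the scalar y itself, which is of course insecure and only meant to show the shape of the key, of its non-interactive update, and of a logarithmic-size compressed digest. All function names are ours, and the exact form of the compressed key is a guessed simplification, not the construction of this paper.

```python
# Toy sketch (INSECURE): a group encoding [y] is modeled by the scalar y itself.
P = 2**61 - 1  # stand-in for the order of a pairing-friendly group

def monomial_key(x, n):
    """Full key [1],[x],...,[x^{n-1}]: size n."""
    return [pow(x, i, P) for i in range(n)]

def compressed_key(x, n):
    """A log(n)-sized digest [x],[x^2],[x^4],...: one plausible shape for a
    compressed key (our simplification; the paper gives the exact form)."""
    out, cur, k = [], x % P, 1
    while k < n:
        out.append(cur)
        cur, k = cur * cur % P, 2 * k
    return out

def update_key(key, alpha):
    """Non-interactive update: multiply the i-th entry by alpha^i, moving the
    trapdoor from x to alpha*x without revealing either."""
    return [pow(alpha, i, P) * k % P for i, k in enumerate(key)]
```

Since `update_key(monomial_key(x, n), alpha)` equals `monomial_key(alpha * x, n)`, a chain of updates composes the trapdoor multiplicatively; as long as a single updater discards its contribution, nobody knows the final trapdoor.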

Inner Product Argument with Logarithmic Verifier. Using these techniques, we modify the inner product protocol of Bootle et al. [10] for proving that, for given commitments \(c_1=\textsf {Com} (\mathbf {a})\), \(c_2=\textsf {Com} (\mathbf {b})\) and \(z\in \mathbb {F}\), it holds that \(\mathbf {a}^\top \mathbf {b}=z\). More specifically, we note that the overhead of the verifier in [10] is computing a new commitment key in each of the \(\log n\) rounds of the protocol, where n is the vector dimension. This key depends on the previous key and the verifier's challenges. Instead of doing that, we only give the verifier the compressed key (which is logarithmic in n) and have the prover convince the verifier that the reduced statement is w.r.t. a new key which is the correct one.
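The round structure of the recursion can be sketched over plain field elements, with no commitments and hence none of the actual protocol's security; we use one common variant of the folding step, and the names are ours:

```python
import random

P = 2**61 - 1  # toy prime field

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def fold(v, c):
    """Halve v componentwise: v' = v_lo + c * v_hi."""
    h = len(v) // 2
    return [(v[i] + c * v[h + i]) % P for i in range(h)]

def reduce_rounds(a, b):
    """log n rounds; each halves a and b and emits cross terms (L, R, c)."""
    transcript = []
    while len(a) > 1:
        h = len(a) // 2
        L, R = inner(a[:h], b[h:]), inner(a[h:], b[:h])  # prover's cross terms
        c = random.randrange(1, P)                       # public-coin challenge
        a, b = fold(a, c), fold(b, pow(c, P - 2, P))     # b folded with c^{-1}
        transcript.append((L, R, c))
    return a[0], b[0], transcript

def update_claim(z, transcript):
    """The verifier's constant-size work per round: z' = z + c^{-1}*L + c*R."""
    for L, R, c in transcript:
        z = (z + pow(c, P - 2, P) * L + c * R) % P
    return z
```

After all rounds, the updated claim must equal the product of the two remaining scalars. In this sketch the verifier's only per-round work is the constant-size update of z; in [10] it must additionally fold the commitment key itself, which is exactly the overhead the compressed key removes.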

Universally Updateable NIZK AoK. Having this powerful tool allows us to aggregate linear and quadratic constraints and thus prove general statements. We follow the techniques of [10] to reduce a statement about a circuit w.r.t. a public input to an inner product statement (which need not be zero knowledge), and we can then use the improved inner product argument. More concretely, the prover convinces the verifier that \([\alpha ],[\beta ]\) are commitments to \(\mathbf {a},\mathbf {b}\) such that \(\mathbf {a}^\top \mathbf {b} = z\). The former vector depends on the witness; the latter depends on the circuit structure and is public; both depend on a random challenge issued by the verifier.

However, computing \([\beta ]\) given universal parameters that work for any circuit (of bounded size) requires \(\mathcal {O_\lambda }(\left| \mathcal {C}\right| )\) time making verification linear in the computation size. To overcome this, we delegate this computation to the prover who gives a succinct proof for the correct computation of \([\beta ]\). To achieve that, we assume a specific structure for the circuit (basically that the gates have bounded fan-in and fan-out) and apply techniques similar to [32] adapted to our setting. These conditions can be imposed by pre-processing the circuit appropriately without asymptotically increasing the circuit size.

We note that when we have a fixed statement, we can make things much more efficient. The blueprint of the construction remains the same, and we can appropriately fine-tune the parameter generation to avoid delegating the computation of \([\beta ]\), thus achieving a concretely more efficient verifier. We show how this can be applied to Range Proofs, exponentially reducing the verification complexity compared to the similar construction of [12].

2 Preliminaries

2.1 Notations

We write \(x\leftarrow S\) to denote uniformly sampling from S and assigning the result to x. When A is an algorithm, we denote with \(y\leftarrow A(x)\) the assignment of the output of A on input x to y, where the randomness of A is sampled uniformly if it is probabilistic. We write A(x; r) to refer explicitly to the randomness of A when needed. We denote with \(\mathcal {O_\lambda }(\cdot )\) asymptotic complexity that hides linear factors that depend on the security parameter \(\lambda \).

We denote vectors with boldface letters. If \(\mathbf {v}\) is a vector, we denote with normal font its components, that is \(v_i\) is its i-th component. We denote \(\mathbf {e_{n}} \in \mathbb {F}^n\) the n-th element of the canonical basis. The symbol \(\circ \) is used for denoting pairwise product, that is \(\mathbf {a}\circ \mathbf {b} = (a_1\cdot b_1,\ldots , a_n\cdot b_n)\).

Groups are written in additive notation and their elements are written implicitly: if we fix a generator \(g\in \mathbb {G}\), we denote with [r] the group element rg. We extend this notation to vectors of group elements, denoting \([\mathbf {r}]=([r_1],\ldots ,[r_n])\). In the bilinear group setting, given some fixed generators \(g_1,g_2,g_T=e(g_1,g_2)\), we use subscripts to specify the group. In this notation, \(e([r]_1,[s]_2) = [rs]_T\).

Let \(\mathbb {G}\) be a group of order q and \(\mathbf {r} = (r_1, \ldots , r_n)\in \mathbb {Z}_q^n, \mathbf {a} = (a_1, \ldots , a_n)\in \mathbb {Z}_q^n\). We denote \([\mathbf {a}^\top \mathbf {r}] = \sum _{i=1}^n a_i[r_i]\), that is, \([\mathbf {a}^\top \mathbf {r}]\) is a Vector Pedersen commitment to \(\mathbf {a}\) w.r.t. the commitment key \([\mathbf {r}]\). Given a vector \(\mathbf {r}=(r_1,\ldots ,r_{n})\), for even n, we denote \(\mathbf {r}_{\frac{1}{2}} = (r_1,\ldots ,r_{n/2})\) and \(\mathbf {r}_{\frac{2}{2}} = (r_{n/2+1},\ldots ,r_{n})\). We denote \(\mathbf {x^{n}} = (1,x,\ldots ,x^{n-1})\). Finally, let \(x_1,\ldots ,x_\nu \in \mathbb {Z}_q\). We denote as \(\mathbf {\overline{x}}\) the vector that is constructed recursively by setting \(\mathbf {\overline{x}}\leftarrow (1),\ \left\{ \mathbf {\overline{x}}\leftarrow (\mathbf {\overline{x}}, x_i\mathbf {\overline{x}})\right\} _{i\in [\nu ]}\). Basically, \(\mathbf {x^n}\) contains all the monomials of x up to degree \(n-1\), and \(\mathbf {\overline{x}}\) contains all the multilinear monomials, where a “canonical” ordering has been imposed by the recursive definition.
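The notation above can be summarized in a short Python sketch, again modeling group elements by their scalars so that the Pedersen commitment becomes a plain inner product (function names are ours):

```python
P = 2**61 - 1  # toy prime standing in for the group order q

def powers(x, n):
    """The vector x^n = (1, x, ..., x^{n-1})."""
    return [pow(x, i, P) for i in range(n)]

def multilinear(xs):
    """overline{x}: start from (1), then v -> (v, x_i * v) for each x_i.
    This recursion fixes the 'canonical' ordering of multilinear monomials."""
    v = [1]
    for xi in xs:
        v = v + [xi * e % P for e in v]
    return v

def halves(r):
    """r_{1/2} and r_{2/2} of an even-length vector r."""
    h = len(r) // 2
    return r[:h], r[h:]

def pedersen(ck, a):
    """[a^T r] = sum_i a_i [r_i]; with [r_i] modeled by r_i, a plain dot product."""
    return sum(ai * ri for ai, ri in zip(a, ck)) % P
```

For instance, `multilinear([x1, x2])` returns the ordering \([1],[x_1],[x_2],[x_1x_2]\) used above.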

2.2 (Zero Knowledge) Arguments

Interactive (Zero Knowledge) Arguments of Knowledge. We present the definitions and the relevant results we need for (Zero Knowledge) Arguments of Knowledge (ZKAoK). We follow the presentation of [10].

Let \(\mathcal {L}\in \mathbf{NP} \) be a language and \(\mathcal {R}_\mathcal {L}\) the corresponding relation for \(\mathcal {L}\). A ZKAoK allows a prover to convince a verifier of knowledge of a witness w certifying membership of a public x in \(\mathcal {L}\), that is, \((x,w)\in \mathcal {R}_\mathcal {L}\). The zero knowledge property guarantees that the verifier learns nothing about the witness w apart from the fact that the prover knows such a witness.

Our final goal is a non-interactive argument, but we work in the interactive setting and then use standard techniques for transforming the interactive arguments to non-interactive.

Denote with \(\langle \mathcal {P}(x,w), \mathcal {V}(x)\rangle \) the transcript of an execution of \(\mathcal {P}\) and \(\mathcal {V}\) with respective inputs (x, w) and x. Let \(\textsf {view} _{\mathcal {V}}\langle \mathcal {P}(x,w), \mathcal {V}(x)\rangle \) (resp. \(\textsf {view} _{\mathcal {P}}\langle \mathcal {P}(x,w), \mathcal {V}(x)\rangle \)) be the view of \(\mathcal {V}\) (resp. \(\mathcal {P}\)) in a protocol execution (i.e. its input, randomness and all incoming messages), and finally let \(\textsf {out} _{\mathcal {V}}\langle \mathcal {P}(x,w), \mathcal {V}(x)\rangle \) be the final verdict of the verifier (accept or reject).

Definition 1

The pair \(\langle \mathcal {P},\mathcal {V}\rangle \) is a Zero Knowledge Argument of Knowledge if it is public coin and has perfect completeness, statistical witness extended emulation and perfect honest verifier zero knowledge, as defined next.

Definition 2

The pair \(\langle \mathcal {P},\mathcal {V}\rangle \) has Perfect Completeness if for all \((x,w)\in \mathcal {R}_\mathcal {L}\) it holds that \(\Pr \left[ \textsf {out} _{\mathcal {V}}\langle \mathcal {P}(x,w), \mathcal {V}(x)\rangle =1\right] = 1\).

Definition 3

The pair \(\langle \mathcal {P},\mathcal {V}\rangle \) has Statistical Witness Extended Emulation if for all deterministic polynomial time \(\mathcal {P}^*\), there exists an expected polynomial time emulator \(\mathcal {E}\), such that for all (unbounded) adversaries \(\mathcal {A}\) the following two probabilities are statistically close:

$$ \begin{aligned}&\Pr \left[ \begin{array}{c|r} 1\leftarrow \mathcal {A}(tr) \;&\; (x,s)\leftarrow \mathcal {A}(1^\lambda ) \wedge tr\leftarrow \langle \mathcal {P}^*(x,s),\mathcal {V}(x)\rangle \end{array}\right] , \\&\Pr \left[ \begin{array}{c|r} 1\leftarrow \mathcal {A}(tr) \wedge (tr \text { accepting} \Rightarrow (x,w)\in \mathcal {R}_\mathcal {L}) \;&\; (x,s)\leftarrow \mathcal {A}(1^\lambda ) \wedge (tr,w)\leftarrow \mathcal {E}^{\langle \mathcal {P}^*(x,s),\mathcal {V}(x)\rangle }(x) \end{array}\right] , \end{aligned} $$

where the emulator \(\mathcal {E}\) has oracle access to the protocol \(\langle \mathcal {P}^*(x,s),\mathcal {V}(x)\rangle \) and may rewind it to any round and resume with fresh verifier randomness.

Definition 4

An \((n_1,\ldots ,n_\mu )\)-tree of accepting transcripts for the pair \(\langle \mathcal {P},\mathcal {V}\rangle \) with \(2\mu +1\) rounds is a tree where:

  • Each node at level i is labeled with the transcript of the protocol up to \(\mathcal {V}\)’s i-th message.

  • Sibling nodes at level i are labeled with transcripts that use fresh (uniformly distributed and independent) randomness for the verifier’s i-th challenge.

  • Each node at level i has \(n_i\) children.

  • The leaves are labeled with transcripts that are accepted by the verifier.

Definition 5

The pair \(\langle \mathcal {P},\mathcal {V}\rangle \) has \((n_1,\ldots ,n_\mu )\)-generalized special soundness if there exists a PPT extractor \(\mathcal {E}\) such that given an \((n_1,\ldots ,n_\mu )\)-tree of accepting transcripts for the pair \(\langle \mathcal {P},\mathcal {V}\rangle ,\) the extractor \(\mathcal {E}\) outputs a valid witness for the statement.

Definition 6

An interactive proof system \(\langle \mathcal {P},\mathcal {V}\rangle \) is public coin if all messages from \(\mathcal {V}\) to \(\mathcal {P}\) are independent and uniformly distributed, and are uniquely defined by the randomness of the verifier alone.

Definition 7

A public coin interactive proof system \(\langle \mathcal {P},\mathcal {V}\rangle \) is perfect Honest Verifier Zero Knowledge (HVZK) if there exists a PPT simulator \(\mathcal {S}\), such that for all PPT \(\mathcal {A}\), it holds that

$$ \begin{aligned}&\Pr \left[ \begin{array}{c|r} 1 \leftarrow \mathcal {A}(tr) \;&\; (x,w,r)\leftarrow \mathcal {A}(1^\lambda ) \wedge tr\leftarrow \langle \mathcal {P}(x,w),\mathcal {V}(x;r)\rangle \wedge (x,w)\in \mathcal {R}_\mathcal {L}\end{array}\right] = \\&\Pr \left[ \begin{array}{c|r} 1 \leftarrow \mathcal {A}(tr) \;&\; (x,w,r)\leftarrow \mathcal {A}(1^\lambda ) \wedge tr\leftarrow \mathcal {S}(x,r) \wedge (x,w)\in \mathcal {R}_\mathcal {L}\end{array}\right] . \end{aligned} $$

Theorem 1

Let \(\langle \mathcal {P},\mathcal {V}\rangle \) be a \(2\mu +1\) round, public coin, interactive proof system with \((n_1,\ldots ,n_\mu )\)-generalized special soundness and \(\prod _{i=1}^\mu n_i = \mathcal {O}(\lambda ^c)\) for a constant c. Then \(\langle \mathcal {P},\mathcal {V}\rangle \) has witness extended emulation.

The proof of the theorem is given in [10].
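As a quick sanity check of the condition \(\prod _{i=1}^\mu n_i = \mathcal {O}(\lambda ^c)\), consider a hypothetical halving-style protocol with \(\mu = \log _2 n\) rounds and \(n_i = 4\) accepting challenges per round (these parameters are illustrative, not taken from [10]):

```python
from math import log2, prod

n = 1024                 # vector/circuit size
mu = int(log2(n))        # rounds of a halving-style protocol
branches = [4] * mu      # hypothetical n_i: 4 accepting challenges per round
leaves = prod(branches)  # total accepting transcripts the extractor needs
assert leaves == n ** 2  # 4^(log2 n) = n^2
```

The tree then has \(n^2\) leaves, which is polynomial, so the extractor of Theorem 1 runs in expected polynomial time whenever n is polynomial in \(\lambda \).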

Updateable Non-interactive (Zero Knowledge) Arguments of Knowledge. Informally, a non-interactive argument system in the common reference string model is a ZK argument with two rounds, where the first is a setup round creating parameters that can be reused across many proofs. The most efficient constructions for general NP statements (e.g. Groth [26]) need a very expensive trusted setup. To deal with this, Groth et al. [27] introduced the notion of an Updateable Setup, where users can non-interactively update the parameters with the following guarantee: if at least one honest update takes place, then no PPT adversary can break soundness. We follow the model of Groth et al. [27], who show that for updateability it suffices to prove that an argument is secure in the following model.

  • The adversary creates setup parameters.

  • An honest update on these parameters takes place.

  • The adversary updates the parameters.

  • Circuit specific parameters are derived publicly for a circuit \(\mathcal {C}\).

  • Knowledge soundness is challenged w.r.t. these parameters.

We emphasize that the circuit-specific setup is done publicly: no secret is involved in it. Anyone can take the universal parameters, and deterministically compute the circuit-specific CRS. We present the definition of Updateable Non-Interactive (Zero Knowledge) Arguments of Knowledge.

Definition 8

An Updateable Non-Interactive (Zero Knowledge) Argument of Knowledge is a tuple of algorithms \((\textsf {USetup} \), \(\textsf {Update} \), \(\textsf {VrfySetup} \), \(\textsf {VrfyUpdate} \), \(\textsf {CircuitSetup} \), \(\textsf {Prove} \), \(\textsf {Vrfy} )\) where

  • \(\sigma \leftarrow \textsf {USetup} (1^\lambda , n)\): \(\textsf {USetup} \) takes as input the security parameter \(\lambda \) and an upper bound on the derived circuit size n, and outputs a universal CRS \(\sigma \).

  • \((\sigma ', \pi _{\sigma '}) \leftarrow \textsf {Update} (\sigma )\): \(\textsf {Update} \) takes as input a universal CRS \(\sigma \), and produces a new universal CRS \(\sigma '\) along with a proof of correct update \(\pi _{\sigma '}\).

  • \(0/1 \leftarrow \textsf {VrfySetup} (\sigma , 1^\lambda , n)\): \(\textsf {VrfySetup} \) takes as input a universal CRS \(\sigma \), the security parameter \(\lambda \) and n and outputs a bit indicating the correctness of the structure of the universal CRS.

  • \(0/1 \leftarrow \textsf {VrfyUpdate} (\sigma ',\sigma , \pi _{\sigma '})\): \(\textsf {VrfyUpdate} \) takes as input the new and old CRS \(\sigma '\) and \(\sigma \), and a proof \(\pi _{\sigma '}\), and outputs a bit indicating the correctness of the update.

  • \(\sigma _\mathcal {C}\leftarrow \textsf {CircuitSetup} (\sigma , \mathcal {C})\): is a deterministic algorithm that takes as input the universal CRS and the description of a circuit of size bounded by n, and outputs circuit specific parameters \(\sigma _\mathcal {C}\).

  • \(\pi \leftarrow \textsf {Prove} (\sigma _\mathcal {C},x,w)\): takes as input the CRS \(\sigma _\mathcal {C}\), the public and private input xw, and outputs a proof \(\pi \).

  • \(0/1 \leftarrow \textsf {Vrfy} (\sigma _\mathcal {C},x,\pi )\): takes as input the CRS \(\sigma _\mathcal {C}\), the public input x and a proof \(\pi \), and outputs a bit indicating its validity.

which is Perfectly Complete, Knowledge Sound and Statistically Zero Knowledge as defined next.
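To make the interface concrete, here is a toy Python sketch of the first four algorithms for a powers-of-x universal CRS; group elements are modeled by their discrete logarithms and the pairing by multiplication, so this is insecure and purely illustrative, and the algorithm bodies are our own simplification rather than the paper's construction:

```python
P = 2**61 - 1  # toy "group order"; [y]_1 and [y]_2 are both modeled by y itself

def e(a, b):
    """Toy pairing: e([a]_1, [b]_2) = [ab]_T becomes plain multiplication."""
    return a * b % P

def usetup(x, n):
    """CRS for trapdoor x: ([x^i]_1 for i < n, together with [x]_2)."""
    return [pow(x, i, P) for i in range(n)], x % P

def update(srs, alpha):
    """Rerandomize the trapdoor from x to alpha*x; the update proof is [alpha]_2
    (in this toy model, just alpha)."""
    srs1, srs2 = srs
    new1 = [pow(alpha, i, P) * g % P for i, g in enumerate(srs1)]
    return (new1, alpha * srs2 % P), alpha % P

def vrfy_setup(srs):
    """Pairing checks that srs1 really contains consecutive powers of srs2."""
    srs1, srs2 = srs
    return srs1[0] == 1 and all(
        e(srs1[i + 1], 1) == e(srs1[i], srs2) for i in range(len(srs1) - 1))

def vrfy_update(new, old, proof):
    """Check the new degree-1 term is the old one shifted by [alpha]_2."""
    return proof != 0 and e(new[0][1], 1) == e(old[0][1], proof) and vrfy_setup(new)
```

Note that verification only uses public pairing equations, matching the requirement that no secret is involved after setup.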

Definition 9

An Updateable Non-Interactive Argument of Knowledge is Perfectly Complete if for all \(\lambda ,n\)

$$ \Pr \left[ \begin{array}{c|r} \textsf {VrfySetup} (\sigma ,1^\lambda ,n) = 1 \;&\; \sigma \leftarrow \textsf {USetup} (1^\lambda , n) \end{array}\right] = 1, $$

for all \(\lambda ,n,\sigma \)

$$ \Pr \left[ \begin{array}{c|r} \textsf {VrfySetup} (\sigma ',1^\lambda ,n) = 1 \;\wedge \;&{}\; \textsf {VrfySetup} (\sigma ,1^\lambda ,n)=1\; \wedge \\ \textsf {VrfyUpdate} (\sigma ',\sigma ,\pi _{\sigma '}) = 1 \;&{}\; (\sigma ',\pi _{\sigma '}) \leftarrow \textsf {Update} (\sigma ) \end{array}\right] = 1 $$

and for all \(\lambda ,n,\sigma ,\mathcal {C},x,w\) where \(\mathcal {C}\) encodes a circuit of size bounded by n and \(\mathcal {R}_\mathcal {C}(x,w)=1\)

$$ \Pr \left[ \begin{array}{c|r} \;&{}\; \textsf {VrfySetup} (\sigma ,1^\lambda ,n)=1 \;\wedge \\ \textsf {Vrfy} (\sigma _\mathcal {C},x,\pi ) = 1 \;&{}\; \sigma _\mathcal {C}\leftarrow \textsf {CircuitSetup} (\sigma , \mathcal {C})\; \wedge \\ \;&{}\; \pi \leftarrow \textsf {Prove} (\sigma _\mathcal {C},x,w) \end{array}\right] = 1. $$

Definition 10

An Updateable Non-Interactive Argument of Knowledge is Knowledge Sound if for all stateful PPT adversaries \(\mathcal {A}=(\mathcal {A}_1,\mathcal {A}_2,\mathcal {A}_3)\), there exists an extractor \(\mathcal {E}_\mathcal {A}\), such that for all \(\lambda \), n, \(\mathcal {C}\) where \(\mathcal {C}\) is a circuit of size bounded by n

$$ \Pr \left[ \begin{array}{c|r} \;&{}\;(\sigma _1,st_1) \leftarrow \mathcal {A}_1(1^\lambda , n)\;\wedge \\ \textsf {VrfySetup} (\sigma _1,1^\lambda ,n)=1 \;\wedge \;&{}\;(\sigma _2, \pi _{\sigma _2}) \leftarrow \textsf {Update} (\sigma _1)\;\wedge \\ \textsf {VrfyUpdate} (\sigma _3,\sigma _2, \pi _{\sigma _3})=1 \;\wedge \;&{}\;(\sigma _3,\pi _{\sigma _3},st_2) \leftarrow \mathcal {A}_2(st_1, \sigma _2, \pi _{\sigma _2})\;\wedge \\ \textsf {Vrfy} (\sigma _\mathcal {C},x,\pi ) = 1 \;\wedge \;&{}\;\sigma _\mathcal {C}\leftarrow \textsf {CircuitSetup} (\sigma _3, \mathcal {C}) \;\wedge \\ \mathcal {C}(x,w)\ne 1 \;&{}\;(x,\pi )\leftarrow \mathcal {A}_3(st_2,\sigma _\mathcal {C};r) \;\wedge \\ \;&{}\;w\leftarrow \mathcal {E}_\mathcal {A}(\sigma _\mathcal {C},x;r) \end{array}\right] \le \textsf {negl} (\lambda ). $$

Definition 11

An Updateable Non-Interactive Argument of Knowledge is Statistically Zero Knowledge in the Random Oracle Model if there exists a pair of PPT algorithms \(\mathcal {S}_1, \mathcal {S}_2\), where \(\mathcal {S}_2\) is stateful, such that for all \(\mathcal {A}\), and for all circuits \(\mathcal {C}\) of size bounded by n, where \(\mathcal {C}\) takes as input a public value x and a private value w, it holds that

$$ \Pr \left[ \begin{array}{c|r} \;&{}\;b\leftarrow \left\{ 0,1\right\} \;\wedge \\ \;&{}\;\sigma \leftarrow \mathcal {A}^{H_b}(\textsf {setup} ,1^\lambda ,n) \;\wedge \\ b' = b \;&{}\;\textsf {VrfySetup} (\sigma ,1^\lambda ,n)=1 \;\wedge \\ \;&{}\;\sigma _\mathcal {C}\leftarrow \textsf {CircuitSetup} (\sigma , \mathcal {C})\;\wedge \\ \;&{}\;b'\leftarrow \mathcal {A}^{H_b,O_b}(\sigma _\mathcal {C}) \end{array}\right] \le \frac{1}{2} + \textsf {negl} (\lambda ) $$

where H is modeled as a Random Oracle and

$$ \begin{array}{l l} O_0(x,w) \leftarrow {\left\{ \begin{array}{ll} {\bot }, &{}\text { if } \mathcal {R}_\mathcal {C}(x, w)=0 \\ \textsf {Prove} (\sigma _\mathcal {C}, x, w), &{}\text { otherwise} \end{array}\right. },\qquad &{} H_0(m) \leftarrow H(m), \\ O_1(x,w) \leftarrow {\left\{ \begin{array}{ll} {\bot }, &{}\text { if } \mathcal {R}_\mathcal {C}(x, w)=0 \\ \mathcal {S}_1(\sigma _\mathcal {C}, x), &{}\text { otherwise} \end{array}\right. },\qquad&H_1(m) \leftarrow \mathcal {S}_2(m). \end{array} $$

Note that this definition considers adversarially created parameters, i.e. Subversion Resistant ZK [4].

From HVZK Interactive AoK to Non Interactive ZK AoK. It is well-known that we can use the Fiat-Shamir heuristic to transform any public coin Perfect HVZK interactive argument to a non-interactive full-fledged Statistical Zero Knowledge argument in the Random Oracle Model.
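A minimal sketch of the Fiat-Shamir challenge derivation: each challenge is obtained by hashing the statement together with all prover messages so far, so the prover cannot pick a message after seeing the challenge it determines. The helper name and the length-prefixed encoding are ours.

```python
import hashlib

Q = 2**61 - 1  # toy challenge space (e.g. the field order of the argument)

def fs_challenge(*transcript_parts):
    """Derive a verifier challenge as H(transcript) mod Q, with H modeled
    as a random oracle (here instantiated with SHA-256)."""
    h = hashlib.sha256()
    for part in transcript_parts:
        h.update(len(part).to_bytes(8, "big"))  # length-prefix: no ambiguity
        h.update(part)
    return int.from_bytes(h.digest(), "big") % Q
```

In the compiled non-interactive argument, the i-th challenge would be `fs_challenge(statement, msg_1, ..., msg_i)`, replacing the verifier's public coins.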

2.3 Updateable Commitment Schemes

We define commitment schemes which have an updateability property as well. We do this to simplify proofs in the following sections. An updateable commitment will be enough to guarantee updateability of all the protocols in this work, since all the arguments presented hold regardless of parameters unless there is a breach in the binding property of the commitment scheme.

Definition 12

An Updateable Commitment Scheme is a tuple of algorithms \((\textsf {Setup} ,\textsf {VrfySetup} ,\textsf {Update} ,\textsf {VrfyUpdate} ,\textsf {Com} ,\textsf {Open} )\) such that

  • \(\textsf {ck} \leftarrow \textsf {Setup} (1^\lambda , n)\) takes as input the security parameter \(\lambda \) and the vector dimension n, and outputs a commitment key \(\textsf {ck} \).

  • \((\textsf {ck} ', \pi _{\textsf {ck} '}) \leftarrow \textsf {Update} (\textsf {ck} )\): \(\textsf {Update} \) takes as input a commitment key \(\textsf {ck} \) and produces a new commitment key \(\textsf {ck} '\) and a proof of correct update \(\pi _{\textsf {ck} '}\).

  • \(0/1 \leftarrow \textsf {VrfySetup} (\textsf {ck} , 1^\lambda , n)\): \(\textsf {VrfySetup} \) takes as input a commitment key \(\textsf {ck} \), the security parameter \(\lambda \) and the dimension n, and outputs a bit indicating the correctness of the structure of the key.

  • \(0/1 \leftarrow \textsf {VrfyUpdate} (\textsf {ck} ',\textsf {ck} , \pi _{\textsf {ck} '})\): \(\textsf {VrfyUpdate} \) takes as input a new key \(\textsf {ck} '\), an old key \(\textsf {ck} \) and a proof \(\pi _{\textsf {ck} '}\), and outputs a bit indicating update correctness.

  • \((c,\tau ) \leftarrow \textsf {Com} (\textsf {ck} ,\mathbf {m})\) takes as input the commitment key and a message \(\mathbf {m}\in \mathcal {M}^n\), and outputs a commitment \(c\in \mathcal {C}\) and an opening trapdoor \(\tau \in \mathcal {T}\).

  • \(0/1 \leftarrow \textsf {Open} (\textsf {ck} ,c,\mathbf {m},\tau )\) takes as input the commitment key, the message and the opening trapdoor and outputs a bit indicating the validity of the opening.

which is Correct, Updateable Computationally Binding and Perfectly Hiding as defined next.

Definition 13

An Updateable Commitment Scheme is correct if for all \(\lambda ,n\)

$$ \Pr \left[ \begin{array}{c|r} \textsf {VrfySetup} (\textsf {ck} ,1^\lambda ,n) = 1 \;&\; \textsf {ck} \leftarrow \textsf {Setup} (1^\lambda , n) \end{array}\right] = 1, $$

for all \(\lambda ,n,\textsf {ck} \)

$$ \Pr \left[ \begin{array}{c|r} \textsf {VrfySetup} (\textsf {ck} ',1^\lambda ,n) = 1 \;\wedge \;&{}\; \textsf {VrfySetup} (\textsf {ck} ,1^\lambda ,n)=1\; \wedge \\ \textsf {VrfyUpdate} (\textsf {ck} ',\textsf {ck} ,\pi _{\textsf {ck} '}) = 1 \;&{}\; (\textsf {ck} ',\pi _{\textsf {ck} '}) \leftarrow \textsf {Update} (\textsf {ck} ) \end{array}\right] = 1 $$

and for all \(\lambda ,n,\mathbf {m}\) and all \(\textsf {ck} \) s.t. \(\textsf {VrfySetup} (\textsf {ck} ,1^\lambda ,n) = 1\)

$$ \Pr \left[ \begin{array}{c|r} \textsf {Open} (\textsf {ck} ,c,\mathbf {m},\tau ) = 1 \;&\; (c,\tau ) \leftarrow \textsf {Com} (\textsf {ck} ,\mathbf {m}) \end{array}\right] = 1. $$

Definition 14

An Updateable Commitment Scheme has the Updateable Computational Binding property if for all stateful PPT \(\mathcal {A}= (\mathcal {A}_1,\mathcal {A}_2,\mathcal {A}_3)\), and for all \(\lambda \), n

Definition 15

An Updateable Commitment Scheme is perfectly hiding if, for all \(\lambda ,n,\mathbf {m}\), and all \(\textsf {ck} \) s.t. \(\textsf {VrfySetup} (\textsf {ck} ,1^\lambda ,n) = 1\), and all \(c_1\)

$$ \Pr \left[ \begin{array}{l|r} c=c_1 \;&\; (c,\tau ) \leftarrow \textsf {Com} (\textsf {ck} ,\mathbf {m}) \end{array}\right] = \Pr \left[ \begin{array}{l|r} c=c_1 \;&\; c\leftarrow \mathcal {C}\end{array}\right] . $$

3 Assumptions

We present the assumptions used in this work.

Definition 16

(DLOG Assumption) The DLOG Assumption holds w.r.t. a group generator \(\textsf {GroupGen} \) if for all PPT adversaries \(\mathcal {A}\)

$$ \Pr \left[ \begin{array}{l|r} r=r' \;&\; \textsf {pp} \leftarrow \textsf {GroupGen} (1^\lambda )\; \wedge \;r\leftarrow \mathbb {Z}_q\; \wedge \; r'\leftarrow \mathcal {A}(\textsf {pp} ,[r]) \end{array}\right] \le \textsf {negl} (\lambda ). $$

We will also consider natural extensions of the DLOG Assumption. In the n-DLOG Assumption, the adversary receives n-powers of r, \([1],[r],\ldots ,[r^n]\). In the Asymmetric DLOG Assumption in asymmetric bilinear groups, the adversary receives r in both groups \([r]_1,[r]_2\). Similarly, in the asymmetric n-DLOG Assumption, the adversary receives the powers of r in both groups. In either case, its goal is to compute \(r \in \mathbb {Z}_q\).

The inner product argument of Bootle et al. [10] and the argument presented in this paper are based on the generalization of the DLOG Assumption presented next but with different vector distributions. The binding property of the vector commitments used in these arguments trivially reduces to this assumption.

Definition 17

Let \(n \in \mathbb {N}\). We call \(\mathcal {D}_{n}\) a vector distribution if it outputs, in PPT time and with overwhelming probability, vectors in \(\mathbb {Z}_q^{n}\).

In this paper, \(\mathcal {D}_{n}\) will typically be the distribution of the key of some perfectly hiding commitment scheme. More specifically, we will consider the distributions:

$$\begin{aligned} \mathcal {U}_{n}:\mathbf {r} = \left( 1, x_1, \ldots , x_{n-1}\right) , \qquad \mathcal {P}\mathcal {W}_{n}:\mathbf {r} = \left( 1, x, \ldots , x^{n-1} \right) , \end{aligned}$$
$$\mathcal {M}\mathcal {L}_{2^\nu }:\mathbf {r} = \left( 1 , x_1 , x_2, x_1\cdot x_2, \ldots , x_1\cdots x_\nu \right) , $$

where \(x, x_i \leftarrow \mathbb {Z}_q\). The first distribution is the uniform distribution, the second is the n-Power distribution and the last one is the multilinear monomial distribution with \(n=2^\nu \). Note that in the notation we introduced before, the power and multilinear monomial distribution can also be written as \(\mathcal {P}\mathcal {W}_{n}: \mathbf {r}=\mathbf {x^{n}}, x\leftarrow \mathbb {Z}_q\) and \(\mathcal {M}\mathcal {L}_{2^\nu }: \mathbf {r}= \mathbf {\overline{x}}, \mathbf {x} \leftarrow \mathbb {Z}_q^\nu \).
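The three distributions can be sketched directly over \(\mathbb {Z}_q\); the modulus below is an illustrative toy value, and the exponent vectors are shown in the clear (in the scheme the key is their encoding \([\mathbf {r}]\)). The final assertions check the doubling structure \(r_{2^{i-1}+j}=x_i\,r_j\) of \(\mathcal {M}\mathcal {L}_{2^\nu }\) that later sections rely on.

```python
import random

# Sampling the three commitment-key distributions over Z_q (toy modulus).
q = 1019

def U(n):
    """Uniform distribution U_n: (1, x_1, ..., x_{n-1})."""
    return [1] + [random.randrange(q) for _ in range(n - 1)]

def PW(n):
    """Power distribution PW_n: (1, x, x^2, ..., x^{n-1})."""
    x = random.randrange(q)
    return [pow(x, i, q) for i in range(n)]

def ML(nu):
    """Multilinear monomial distribution ML_{2^nu}: all subset products of x_1..x_nu."""
    xs = [random.randrange(q) for _ in range(nu)]
    r = [1]
    for x in xs:                       # doubling step: r -> (r, x * r)
        r = r + [(x * ri) % q for ri in r]
    return r, xs

r, xs = ML(3)
assert len(r) == 8
# r[2^{i-1} + j] = x_i * r[j]: the multiplicative structure of ML_{2^nu}
for i in range(1, 4):
    for j in range(2 ** (i - 1)):
        assert r[2 ** (i - 1) + j] == (xs[i - 1] * r[j]) % q
```

Setting \(x_i = x^{2^{i-1}}\) in `ML` recovers `PW`, matching the observation above that \(\mathcal {P}\mathcal {W}_{2^\nu }\) is a special case of the multilinear monomial structure.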

Definition 18

The \(\mathcal {D}_n\)-Find-Rep Assumption holds with respect to \(\textsf {GroupGen} \) if for all PPT adversaries \(\mathcal {A}\)

$$ \Pr \left[ \begin{array}{l|r} \mathbf {a}^\top \mathbf {r} = 0 \;\wedge \;\mathbf {a}\ne \mathbf {0} \;&\; \textsf {pp} \leftarrow \textsf {GroupGen} (1^\lambda )\; \wedge \;\mathbf {r}\leftarrow \mathcal {D}_n\; \wedge \; \mathbf {a}\leftarrow \mathcal {A}(\textsf {pp} ,[\mathbf {r}]) \end{array}\right] \le \textsf {negl} (\lambda ). $$

It is well known that the \(\mathcal {U}_n\)-Find-Rep (resp. \(\mathcal {P}\mathcal {W}_n\)-Find-Rep) Assumption reduces to the DLOG (resp. n-DLOG) Assumption. For the multilinear monomial distribution, we prove a similar result in Theorem 2. This assumption is inspired by the Naor-Reingold PRF [34].

In asymmetric bilinear groups, we define the Asymmetric \(\mathcal {D}_n\)-Find-Rep Assumption analogously except that the adversary receives \(\mathbf {r}\) in both source groups \(\mathbb {G}_1,\mathbb {G}_2\). We can prove similar reductions to asymmetric variants of the DLOG Assumption.

Theorem 2

If there exists an adversary that runs in time \(t(\lambda )\) and breaks the \(\mathcal {ML}_{2^\nu }\)-Find-Rep Assumption with probability \(\epsilon (\lambda )\) with respect to a group generator \(\textsf {BilGroupGen} (1^\lambda )\), then there exists an adversary that breaks the Asymmetric Discrete Logarithm Assumption relative to \(\textsf {BilGroupGen} (1^\lambda )\) in time \(\mathcal {O_\lambda }(2^\nu )+t(\lambda )\) with probability \(\frac{\epsilon (\lambda )}{\nu }\).

The proof of the theorem is presented in the full version.

4 Distribution Parameterized Vector Commitment

We can construct Updateable Commitment Schemes under the \(\mathcal {D}_n\)-Find-Rep assumptions we described. \(\textsf {Setup} \) and \(\textsf {Com} \) are the same for all distributions and work essentially as in the classical Pedersen Commitment.

We describe for the asymmetric \(\mathcal {M}\mathcal {L}_n,\mathcal {P}\mathcal {W}_n\) distributions the algorithms related to the update (note that for \(\mathcal {U}_n\), i.e. the Pedersen Vector Commitment, updateability trivially holds since the \(\textsf {Setup} \) is transparent). We present the \(\mathcal {M}\mathcal {L}_n\) case in detail and discuss which modifications are needed for the \(\mathcal {P}\mathcal {W}_n\) setting. For our application it is sufficient to give in \(\mathbb {G}_2\) only the elements that define the commitment key, and not the whole key vector, i.e. \([\mathbf {x}]_2\) such that \(\mathbf {r}=\mathbf {\overline{x}}\). Looking ahead, in the inner product argument \([\mathbf {x}]_2\) will be the compressed key the verifier has.

The update mechanism is fairly simple. To check a commitment key’s structure, simply assert the various DDH relations implied by the \(\mathcal {M}\mathcal {L}_n\) distribution; to update, pick a fresh vector from \(\mathcal {M}\mathcal {L}_n\) and multiply it pairwise with the current key. NIZK PoKs are used to assert that the previous randomness is taken into account in the new key and to ensure that any updating party knows its contribution to the final commitment key.
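The update-and-verify mechanism can be sketched in the exponent, under the assumption that scalars mod a toy prime stand in for group elements (a real update multiplies the encodings \([r_i]_1\) and additionally proves knowledge of the update randomness with a NIZK PoK, which is omitted here):

```python
import random

q = 1019  # toy modulus; exponents stand in for group elements

def ml_vector(nu):
    """Sample (r, xs) from ML_{2^nu} together with its defining scalars."""
    xs = [random.randrange(1, q) for _ in range(nu)]
    r = [1]
    for x in xs:
        r = r + [(x * ri) % q for ri in r]
    return r, xs

def update(r_old, xs_old):
    """Update: multiply the old key pairwise with a fresh ML vector.
    The new defining scalars are x_i' = x_i * delta_i."""
    nu = len(xs_old)
    delta, ds = ml_vector(nu)
    r_new = [(a * b) % q for a, b in zip(r_old, delta)]
    xs_new = [(x * d) % q for x, d in zip(xs_old, ds)]
    return r_new, xs_new

def vrfy_setup(r, xs):
    """Check the DDH-style relations implied by ML_n: r[2^{i-1}+j] = x_i * r[j]."""
    nu = len(xs)
    if r[0] != 1:
        return False
    for i in range(1, nu + 1):
        for j in range(2 ** (i - 1)):
            if r[2 ** (i - 1) + j] != (xs[i - 1] * r[j]) % q:
                return False
    return True

r, xs = ml_vector(3)
r2, xs2 = update(r, xs)
assert vrfy_setup(r, xs) and vrfy_setup(r2, xs2)   # ML structure is preserved
```

The point of the check inside `update` is that the pairwise product of two \(\mathcal {M}\mathcal {L}_n\) vectors is again an \(\mathcal {M}\mathcal {L}_n\) vector, so updateability does not disturb the structure that \(\textsf {VrfySetup} \) tests.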

figure a

Theorem 3

The \(\mathcal {M}\mathcal {L}_n\)-Find-Rep Commitment scheme is Updateably Computationally Binding under the \(\mathcal {M}\mathcal {L}_n\)-Find-Rep assumption, and the existence of a NIZK AoK for the relation \(\mathcal {R}= \left\{ \left( ([x], [x']), y\right) {{\,\mathrm{\mid }\,}}[x'] = y[x]\right\} \).

The proof of the theorem is presented in the full version.

We can use a transparent scheme such as [12] to prove that an update is correctly performed, which will yield \(\mathcal {O_\lambda }(\log \log n)\) proof size.

A similar construction works for the \(\mathcal {P}\mathcal {W}_n\) distribution. In this case, we simply need the element x encoded in \(\mathbb {G}_2\) since this is enough to check that the key is drawn from the \(\mathcal {P}\mathcal {W}_n\) distribution. That is, for each i, it is enough to check that \(e([r_i]_1,[1]_2)=e([r_{i-1}]_1,[x]_2)\). The Update and VrfyUpdate work in the same way but now a NIZK AoK is only needed for the element \([x]_2\).
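In a toy model where each group element is represented by its discrete logarithm, a pairing \(e([a]_1,[b]_2)\) corresponds to the product \(a\cdot b \bmod q\), so the \(\mathcal {P}\mathcal {W}_n\) structure test above becomes a scalar identity. This is only a sanity-check sketch with assumed toy parameters, not an implementation of the pairing:

```python
import random

# Toy check of the PW_n key structure.  The verifier's pairing test
# e([r_i]_1, [1]_2) = e([r_{i-1}]_1, [x]_2) becomes, in the exponent,
# r_i * 1 == r_{i-1} * x (mod q).
q = 1019
x = random.randrange(1, q)
r = [pow(x, i, q) for i in range(8)]          # key drawn from PW_8
for i in range(1, 8):
    assert (r[i] * 1) % q == (r[i - 1] * x) % q
```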

As for concrete efficiency, the cost is dominated by the group exponentiations and, for the verifier, the pairing operations (the NIZK AoK statements are logarithmic in n). Setup and Update are dominated by n exponentiations in \(\mathbb {G}_1\), VrfySetup and VrfyUpdate by n pairing operations, and Com and Open by one multi-exponentiation of size n in \(\mathbb {G}_1\) which, if performed naively, needs n exponentiations. Proof size amounts to \(\log n\) proofs of the NIZK AoK in the \(\mathcal {M}\mathcal {L}_n\) case and 1 in the \(\mathcal {P}\mathcal {W}_n\) case.

4.1 Commitments to Monomial Vectors

We will need to efficiently compute special commitments in the proof systems we present later. Specifically, given commitment schemes under \(\mathcal {M}\mathcal {L}_{2^\nu }\) and \(\mathcal {P}\mathcal {W}_{2^\nu }\) we will need to compute (non-hiding) commitments to \(\mathbf {t^n}\) and \(\mathbf {\overline{t}}\) where we know t and \(t_1,\ldots ,t_\nu \), respectively. Of course, these computations can be performed in time linear in the vector dimension, but we want to do so in sublinear (logarithmic in n) time. Since the univariate case reduces to the multilinear one by setting \(t_i = t^{2^{i-1}}\), we only consider the most general case of computing \(\mathbf {\overline{t}}\) when the keys are drawn from the \(\mathcal {M}\mathcal {L}_{2^\nu }\) distribution. We will need this in two different settings:

  1. 1.

    In the first case, let \(\textsf {ck} = (\textsf {ck} _\mathcal {P},\textsf {ck} _\mathcal {V})\) be a commitment key. A prover, holding the whole commitment key \(\textsf {ck} _\mathcal {P}\), computes the commitment to \(\mathbf {\overline{t}}\) w.r.t. \(\textsf {ck} \), and gives it to a verifier, who holds only a compressed version of it, \(\textsf {ck} _\mathcal {V}\). It also gives a small proof that the issued commitment is a commitment to \(\mathbf {\overline{t}}\) w.r.t. \(\textsf {ck} \).

  2. 2.

    In the second case, given a commitment to \(\mathbf {1^n}\) w.r.t. some commitment key \(\textsf {ck} = (\textsf {ck} _\mathcal {P}, \textsf {ck} _\mathcal {V})\) (which can be precomputed once), the verifier derives a commitment to \(\mathbf {\overline{t}}\) w.r.t. a new commitment key \(\textsf {ck} ' = (\textsf {ck} _\mathcal {P}', \textsf {ck} _\mathcal {V}')\) in logarithmic time in n.

For the first case we use the following lemma:

Lemma 1

Let \(\textsf {ck} = (\textsf {pp} , [\mathbf {x}]_{2},[\mathbf {r}]_1)\) be a commitment key where \([\mathbf {r}]_1 = [\mathbf {\overline{x}}]_1\). Then \(\textsf {Com} _\textsf {ck} (\mathbf {\overline{t}}) = \prod _{i=1}^{\nu }(1+t_ix_i)[1]_1\).

Proof

We use induction on \(\nu \).

  • When \(\nu =1\), we have \(\overline{\mathbf {t}}=(1,t_1)\) and \(\overline{\mathbf {x}}=(1,x_1)\). We get

    $$ \textsf {Com} _\textsf {ck} (\mathbf {\overline{t}}) = [r_1]_1 + t_1[r_2]_1 = [1]_1 + t_1x_1[1]_1 = (1+ t_1x_1)[1]_1. $$
  • For \(\nu >1\), we have \([\mathbf {r_\nu }]_1=(\mathbf {\overline{x}_{\nu -1}}[1]_1,x_\nu \mathbf {\overline{x}_{\nu -1}}[1]_1)\) and \(\mathbf {\overline{t}} = (\mathbf {\overline{t}_{\nu -1}}, t_\nu \mathbf {\overline{t}_{\nu -1}})\) and

    $$ \begin{aligned} \textsf {Com} _\textsf {ck} (\mathbf {\overline{t}})&= [\mathbf {\overline{t}}^\top \mathbf {r_\nu }]_1 = [\mathbf {\overline{t}_{\nu -1}}^\top \mathbf {r_{\nu -1}}]_1 + [t_\nu \mathbf {\overline{t}_{\nu -1}}^\top x_\nu \mathbf {r_{\nu -1}}]_1 \\&=[\mathbf {\overline{t}_{\nu -1}}^\top \mathbf {r_{\nu -1}}]_1 + t_\nu x_\nu [\mathbf {\overline{t}_{\nu -1}}^\top \mathbf {r_{\nu -1}}]_1 \\&=(1+t_\nu x_\nu ) [\mathbf {\overline{t}_{\nu -1}}^\top \mathbf {r_{\nu -1}}]_1 \\&=(1+t_\nu x_\nu )\prod _{i=1}^{\nu -1}(1+t_ix_i)[1]_1, \end{aligned} $$

    where the last equality follows from the induction hypothesis. \(\blacksquare \)
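The identity behind Lemma 1 is purely arithmetic and can be checked numerically in the exponent, with a toy modulus standing in for the group order:

```python
import random

# Numeric check of Lemma 1 in the exponent:
#   <t_bar, x_bar> = prod_i (1 + t_i x_i)  (mod q).
q = 1019
nu = 4

def bar(xs):
    """Multilinear monomial vector: all subset products of xs, in ML order."""
    v = [1]
    for x in xs:
        v = v + [(x * vi) % q for vi in v]
    return v

xs = [random.randrange(q) for _ in range(nu)]
ts = [random.randrange(q) for _ in range(nu)]

inner = sum(a * b for a, b in zip(bar(ts), bar(xs))) % q
prod = 1
for t, x in zip(ts, xs):
    prod = prod * (1 + t * x) % q
assert inner == prod   # Com_ck(t_bar) needs only nu factor multiplications
```

Each entry of \(\mathbf {\overline{t}}\circ \mathbf {\overline{x}}\) is the product over a subset \(S\subseteq \{1,\ldots ,\nu \}\) of \(t_ix_i\), so summing all entries is exactly expanding \(\prod _i(1+t_ix_i)\).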

We take advantage of this structure by having the prover send, for all \(i\in \left\{ 1,\ldots ,\nu \right\} \), the elements

$$\begin{aligned}{}[\tau _i]_1 \leftarrow \prod _{j=1}^{i}(1+t_jx_j)[1]_1 = (1+t_{i}x_{i})[\tau _{i-1}]_1, \end{aligned}$$

where \([\tau _0]_1 = [1]_1\). The verifier can then use the pairing to check

$$\begin{aligned} e(t_i[\tau _{i-1}]_1,[x_i]_2) = e([\tau _{i}-\tau _{i-1}]_1,[1]_2). \end{aligned}$$

The prover needs \(\log n\) \(\mathbb {G}_1\) multi-exponentiations, each of size \(2^{i-1}\) for \(i\in \left\{ 1,\ldots ,\log n\right\} \), which can be implemented with n \(\mathbb {G}_1\) exponentiations. The verifier needs to perform \(\log n\) pairing operations and \(2\log n\) \(\mathbb {G}_1\) exponentiations to verify.
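The \(\tau _i\) chain and the pairing test can be sketched in the same discrete-log toy model as before (scalars mod a toy prime stand in for group elements, and a pairing becomes a product of exponents):

```python
import random

# The prover's tau_i chain and the verifier's check, in the exponent:
# e(t_i [tau_{i-1}]_1, [x_i]_2) = e([tau_i - tau_{i-1}]_1, [1]_2)
# becomes  t_i * tau_{i-1} * x_i == (tau_i - tau_{i-1}) * 1  (mod q).
q = 1019
nu = 4
xs = [random.randrange(q) for _ in range(nu)]
ts = [random.randrange(q) for _ in range(nu)]

taus = [1]                                    # tau_0 = [1]_1
for t, x in zip(ts, xs):
    taus.append(taus[-1] * (1 + t * x) % q)   # tau_i = (1 + t_i x_i) tau_{i-1}

for i in range(1, nu + 1):
    lhs = ts[i - 1] * taus[i - 1] * xs[i - 1] % q
    rhs = (taus[i] - taus[i - 1]) * 1 % q
    assert lhs == rhs                         # verifier accepts every round
```

The check passes because \(\tau _i - \tau _{i-1} = \tau _{i-1}\bigl ((1+t_ix_i)-1\bigr ) = t_ix_i\,\tau _{i-1}\).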

For the second case, we do the following: suppose the verifier is given \(\textsf {Com} _{\textsf {ck} _1}(\mathbf {1}) = [\mathbf {1}^\top \mathbf {r}]_1\). The verifier and the prover can compute a new verification key \(\textsf {ck} _2\) as follows:

$$\begin{aligned} (\textsf {ck} _2^\mathcal {V},\textsf {ck} _2^\mathcal {P}) = (([r]_1, t_1^{-1}[x_1]_2,\ldots ,t_\nu ^{-1}[x_\nu ]_2),(\mathbf {r}\circ \overline{\mathbf {t}}^{-1})). \end{aligned}$$

Then, we have:

$$[\mathbf {1}^\top \mathbf {r}]_1 = [(\mathbf {1}\circ \overline{\mathbf {t}})^\top (\mathbf {r}\circ \overline{\mathbf {t}}^{-1})]_1 = [\overline{\mathbf {t}}^\top (\mathbf {r}\circ \overline{\mathbf {t}}^{-1})]_1 =\textsf {Com} _{\textsf {ck} _2}(\overline{\mathbf {t}}).$$

The verifier needs \(\log n\) \(\mathbb {G}_2\) exponentiations, and the prover can hold its key implicitly without computing it: when it needs to commit to \(\mathbf {m}\), it can simply commit to \(\mathbf {m}\circ \overline{\mathbf {t}}^{-1}\), thus saving expensive group operations.
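The key-switching identity underlying the second case is again checkable in the exponent. This is a sketch with toy parameters; it relies on Python 3.8+ for the modular inverse `pow(t, -1, q)`:

```python
import random

# Key-switching trick in the exponent: a commitment to the all-ones vector
# under key r equals a commitment to t_bar under the twisted key r o t_bar^{-1}.
q = 1019
nu = 3

def bar(xs):
    """Multilinear monomial vector of xs (all subset products, ML order)."""
    v = [1]
    for x in xs:
        v = v + [(x * vi) % q for vi in v]
    return v

r = [random.randrange(1, q) for _ in range(2 ** nu)]
ts = [random.randrange(1, q) for _ in range(nu)]
tbar = bar(ts)
tbar_inv = [pow(t, -1, q) for t in tbar]      # modular inverses (Python 3.8+)

lhs = sum(r) % q                                                   # <1, r>
rhs = sum(t * ri * ti for t, ri, ti in zip(tbar, r, tbar_inv)) % q  # <t_bar, r o t_bar^{-1}>
assert lhs == rhs
```

Because \(\overline{\mathbf {t}}\circ \overline{\mathbf {t}}^{-1}=\mathbf {1}\), the twisted key costs the verifier only the \(\nu = \log n\) elements \(t_i^{-1}[x_i]_2\).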

5 Improved Inner Product Argument

In this section, we will first provide a high-level description of the inner product argument of [10], which has linear verification cost. Next, in Subsect. 5.2 we briefly discuss how to reduce the verification complexity to logarithmic in the designated verifier setting in the CRS model by changing the distribution of the commitment keys (still under the DLOG Assumption). In asymmetric bilinear groups, the construction can be “compiled” to achieve public verifiability, as discussed in Subsect. 5.3.

5.1 Inner Product Argument

We first briefly present the Inner Product Argument of [10]. The argument is a Proof of Knowledge of the openings of two (non-hiding) Vector Pedersen Commitments that satisfy an inner product relation. In [10], keys are sampled from \(\mathcal {U}_n\). Formally, it is a proof of knowledge for the following language \(\mathcal {L}_\textsf {IP}\):

$$ \begin{aligned} (\textsf {pp} ,&[\mathbf {r}],[\mathbf {s}]\in \mathbb {G}^{2^\nu },[\alpha ],[\beta ]\in \mathbb {G},z\in \mathbb {Z}_q)\in \mathcal {L}_\textsf {IP}\Longleftrightarrow \\&\exists \mathbf {a},\mathbf {b}\in \mathbb {Z}_q^{2^\nu } \text { s.t. } [\alpha ] = [{\mathbf {a}}^\top \mathbf {r}] \wedge [\beta ] = [\mathbf {b}^\top \mathbf {s}] \wedge \mathbf {a}^\top \mathbf {b} = z. \end{aligned} $$

The idea of the protocol is to reduce this statement to an equivalent one of roughly half the size.

To do that, we create new commitment keys of half the size of the original by splitting the key in half and combining the two halves into a new key based on a challenge issued by the verifier. That is, the new commitment key will be \([\mathbf {r'}] = c^{-1}[\mathbf {r}_{\frac{1}{2}}]+c^{-2}[\mathbf {r}_{\frac{2}{2}}]\), where c is the verifier’s challenge.

In order to prevent the prover from taking advantage of the split, we first ask her to give partial commitments \([\alpha _{-1}] = [\mathbf {a}_{\frac{1}{2}}^\top \mathbf {r}_{\frac{2}{2}}]\), \([\alpha _{1}] = [\mathbf {a}_{\frac{2}{2}}^\top \mathbf {r}_{\frac{1}{2}}]\).

The new witness will be \(\mathbf {a'} = c\mathbf {a}_{\frac{1}{2}}+c^2\mathbf {a}_{\frac{2}{2}}\). Note that both prover and verifier can compute the commitment to this new value, for every challenge c, from the partial commitments as follows:

$$ \begin{aligned}{}[\alpha ']&= [\mathbf {a'}^\top \mathbf {r'}]=[(\mathbf {a}_{\frac{1}{2}}c + \mathbf {a}_{\frac{2}{2}}c^2)^\top (c^{-1}\mathbf {r}_{\frac{1}{2}}+c^{-2}\mathbf {r}_{\frac{2}{2}})] \\&= [\mathbf {a}_{\frac{1}{2}}^\top \mathbf {r}_{\frac{1}{2}}]+[\mathbf {a}_{\frac{2}{2}}^\top \mathbf {r}_{\frac{2}{2}}] +c^{-1}[\mathbf {a}_{\frac{1}{2}}^\top \mathbf {r}_{\frac{2}{2}}]+c[\mathbf {a}_{\frac{2}{2}}^\top \mathbf {r}_{\frac{1}{2}}] \\&= [\alpha ]+ c^{-1}[\alpha _{-1}] + c[\alpha _{1}]. \end{aligned} $$

The same procedure is done for the second commitment \([\beta ] = [\mathbf {b}^\top \mathbf {s}]\) with the inverse challenge \(c^{-1}\).

Finally, before seeing the challenge c, the prover sends the values \(z_{-1} = \mathbf {a}_{\frac{2}{2}}^\top \mathbf {b}_{\frac{1}{2}}\) and \(z_{1} = \mathbf {a}_{\frac{1}{2}}^\top \mathbf {b}_{\frac{2}{2}}\), and based on these, the new inner product is computed as \(z' = z_{-1}c+z+z_1c^{-1}\). The new statement becomes \((\textsf {pp} , [\mathbf {r'}],[\mathbf {s'}],[\alpha '],[\beta '],z')\in \mathcal {L}_\textsf {IP}\).

Straightforward calculations assert that the new witness is indeed a witness for the new statement. The prover can now simply send the new witness \(\mathbf {a'},\mathbf {b'}\) with cost half of what it would take to send \(\mathbf {a},\mathbf {b}\).
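One reduction round can be checked end to end over \(\mathbb {Z}_q\) (the exponents stand in for group elements, so a "commitment" is just an inner product; the modulus is an illustrative toy value):

```python
import random

# One folding round of the inner product argument over Z_q.
q = 1019
n = 8
a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
r = [random.randrange(q) for _ in range(n)]
s = [random.randrange(q) for _ in range(n)]

def ip(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

alpha, beta, z = ip(a, r), ip(b, s), ip(a, b)
h = n // 2
a1, a2, b1, b2 = a[:h], a[h:], b[:h], b[h:]
r1, r2, s1, s2 = r[:h], r[h:], s[:h], s[h:]

# prover's cross terms, sent before the challenge
alpha_m, alpha_p = ip(a1, r2), ip(a2, r1)
beta_m, beta_p = ip(b1, s2), ip(b2, s1)       # for b the roles of c, c^{-1} swap
z_m, z_p = ip(a2, b1), ip(a1, b2)             # z_{-1} and z_1

c = random.randrange(1, q)
ci = pow(c, -1, q)                            # modular inverse (Python 3.8+)

# folded witness and keys: a' = c a1 + c^2 a2, r' = c^{-1} r1 + c^{-2} r2, etc.
a_new = [(c * x + c * c % q * y) % q for x, y in zip(a1, a2)]
b_new = [(ci * x + ci * ci % q * y) % q for x, y in zip(b1, b2)]
r_new = [(ci * x + ci * ci % q * y) % q for x, y in zip(r1, r2)]
s_new = [(c * x + c * c % q * y) % q for x, y in zip(s1, s2)]

# both sides can compute the folded commitments and inner product
assert ip(a_new, r_new) == (alpha + ci * alpha_m + c * alpha_p) % q
assert ip(b_new, s_new) == (beta + c * beta_m + ci * beta_p) % q
assert ip(a_new, b_new) == (z + c * z_m + ci * z_p) % q
```

Iterating this halving \(\log n\) times leaves a constant-size statement, which is exactly the recursion described next.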

To achieve logarithmic complexity, the prover and the verifier recursively reduce the statement size until it is constant, at which point the prover sends the witness. By the generalized forking lemma, the protocol remains sound.

We formally present the protocol next.

figure b

5.2 DV Inner Product Argument with Logarithmic Verifier

In this section we give the intuition on how to modify the above protocol with a \(\mathcal {D}_n\)-variant of the commitment scheme to achieve a logarithmic verifier. Full details are only given for the publicly verifiable scheme, which is very similar.

The linear overhead in the verifier’s computation comes from computing the new key \(\mathbf {r'}\). Having a structured commitment key allows this computation to be made implicit for the verifier. If \(\mathbf {r} \leftarrow \mathcal {ML}_{n}\), then \(\mathbf {r}=(\mathbf {r}_{\frac{1}{2}},\mathbf {r}_{\frac{2}{2}})=(\mathbf {r}_{\frac{1}{2}},x_{\nu }\mathbf {r}_{\frac{1}{2}})\). So, in the first round, the key for the next round is

$$\begin{aligned}{}[\mathbf {r'}] = c^{-1}[\mathbf {r_{\frac{1}{2}}}]+c^{-2}[\mathbf {r_{\frac{2}{2}}}] = (c^{-1}+x_\nu c^{-2}) [\mathbf {r_{\frac{1}{2}}}]. \end{aligned}$$

The new key is now determined by \([x_1],\ldots ,[x_{\nu -1}]\) and the new generator \((c^{-1}+x_\nu c^{-2})[1]\). Further, this transformation respects the structure of the key, which can again be written as \(\mathbf {r}'=(\mathbf {r}'_{\frac{1}{2}},x_{\nu -1}\mathbf {r}'_{\frac{1}{2}})\), so the same argument can be applied again.

In the designated verifier case, we let the verifier know \(x_1,\ldots ,x_\nu \). It does not compute or read \([\mathbf {r}']\) in each round but just checks in the last round if:

$$\begin{aligned}{}[r'] = \prod _{i=1}^{\nu } (c_i^{-1}+x_{\nu -i+1} c_i^{-2}) [1], \end{aligned}$$

where \(c_i\) is the challenge at round i, and \([r']\) is the key in the last round (consisting of 1 element). The same holds for the second key \([s']\). Therefore, verification requires a logarithmic number of operations.
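The closed-form product for the final key can be sanity-checked by folding an \(\mathcal {M}\mathcal {L}_{2^\nu }\) key through all rounds in the discrete-log toy model (scalars mod a toy prime in place of group elements):

```python
import random

# Fold an ML_{2^nu} key through all reduction rounds and compare the final
# one-element key against the verifier's closed-form product.
q = 1019
nu = 4

xs = [random.randrange(q) for _ in range(nu)]
r = [1]
for x in xs:
    r = r + [(x * ri) % q for ri in r]        # key drawn from ML_{2^nu}

cs = []
while len(r) > 1:
    c = random.randrange(1, q)
    cs.append(c)
    ci = pow(c, -1, q)
    h = len(r) // 2
    # r' = c^{-1} r_{1/2} + c^{-2} r_{2/2}
    r = [(ci * u + ci * ci % q * v) % q for u, v in zip(r[:h], r[h:])]

# verifier's check: r' = prod_i (c_i^{-1} + x_{nu-i+1} c_i^{-2})
expected = 1
for i, c in enumerate(cs):                    # round i consumes x_{nu-i}
    ci = pow(c, -1, q)
    expected = expected * (ci + xs[nu - 1 - i] * ci * ci) % q
assert r[0] == expected
```

Each fold multiplies the surviving half of the key by the scalar \(c_i^{-1}+x\,c_i^{-2}\), so the last element is exactly the product the verifier recomputes with \(\log n\) operations.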

When \(\mathbf {r} \leftarrow \mathcal {PW}_n\), the verification can also be reduced to logarithmic, as the structure of the key is very similar, namely, \(\mathbf {r}=(\mathbf {r}_{\frac{1}{2}},\mathbf {r}_{\frac{2}{2}})=(\mathbf {r}_{\frac{1}{2}},x^{2^{\nu -1}} \mathbf {r}_{\frac{1}{2}})\). The \(\mathcal {P}\mathcal {W}_{2^\nu }\) distribution can be seen as a special case where \(x_i=x^{2^{i-1}}\).

5.3 Inner Product Argument with Logarithmic Verifier

To allow public verifiability, we work in asymmetric bilinear groups. The verifier can no longer compute

$$\begin{aligned} \prod _{i=1}^{\nu } (c_i^{-1}+x_{\nu -i+1} c_i^{-2}) [1], \end{aligned}$$

so instead it lets the prover compute the intermediate values in each round (which the prover can do without knowledge of \(x_i\)), and uses the pairing as a DDH oracle to verify this claim.

We now present the argument formally for the \(\mathcal {M}\mathcal {L}_{2^\nu }\) distribution (for \(\mathcal {PW}_{n}\) the argument is defined similarly and we omit the details). First, we define the language of well structured commitments. We include the generator since it will be modified in each round.

$$ \begin{aligned} (\textsf {pp} ,&[r]_1, [\mathbf {r}]_1, [\mathbf {x}]_2 ) \in \mathcal {L}_\textsf {Com} ^{\mathcal {M}\mathcal {L}_{2^\nu }} \Longleftrightarrow \\ [r_1]_1=[r]_1 \wedge \forall i\in&\left\{ 1,\ldots ,\nu \right\} \forall j\in \left\{ 1,\ldots ,2^{i-1}\right\} [r_{2^{i-1}+j}]_1=x_i[r_j]_1. \end{aligned} $$

The language to be proven and the reduction step are presented next.

$$ \begin{aligned} (\textsf {pp} ,&[r]_1, [s]_1, [\mathbf {x}]_2, [\mathbf {y}]_2, [\alpha ]_1,[\beta ]_1,z)\in \mathcal {L}_\textsf {IP}\Longleftrightarrow \\&\exists \; [\mathbf {r}]_1, [\mathbf {s}]_1\in \mathbb {G}^{2^{\nu }}, \mathbf {a},\mathbf {b}\in \mathbb {Z}_q^{2^{\nu }} \text { s.t. } \\&(\textsf {pp} , [r]_1, [\mathbf {r}]_1, [\mathbf {x}]_2) \in \mathcal {L}_\textsf {Com} ^{\mathcal {M}\mathcal {L}_{2^\nu }} \wedge (\textsf {pp} , [s]_1, [\mathbf {s}]_1, [\mathbf {y}]_2) \in \mathcal {L}_\textsf {Com} ^{\mathcal {M}\mathcal {L}_{2^\nu }} \wedge \\&[\alpha ]_1 = [\mathbf {a}^\top \mathbf {r}]_1 \wedge [\beta ]_1 = [\mathbf {b}^\top \mathbf {s}]_1 \wedge \mathbf {a}^\top \mathbf {b}= z. \end{aligned} $$
figure c

Theorem 4

The protocol presented is a Public Coin, Argument of Knowledge for the relation \(\mathcal {L}_\textsf {IP}\) with \(\log n\) round complexity, \(\mathcal {O_\lambda }(n)\) prover complexity, and \(\mathcal {O_\lambda }(\log n)\) communication and verification complexity under either the \(\mathcal {M}\mathcal {L}_{n}\)-Find-Rep or the \(\mathcal {P}\mathcal {W}_{n}\)-Find-Rep assumptions. The argument yields a Universally Updateable Non-Interactive AoK in the Random Oracle model. In the former case the proof size of an update is \(\mathcal {O_\lambda }(\log n)\) and in the latter \(\mathcal {O_\lambda }(1)\).

Proof

Completeness: We show that each reduction round leads to a valid reduced statement. It is enough to show that the prover and verifier compute the same key. Then, we can argue as in the case with uniform keys.

First, note that \([\mathbf {r}']_1 = c^{-1}[\mathbf {r_{\frac{1}{2}}}]_1 + c^{-2}[\mathbf {r_{\frac{2}{2}}}]_1\), which means that we “combine” all pairs of elements that are at distance \(2^{\nu -1}\). That is, for all \(j\le 2^{\nu -1}\),

$$\begin{aligned}{}[r_j']_1 = c^{-1}[r_j]_1 + c^{-2}[r_{2^{\nu -1}+j}]_1. \end{aligned}$$

Also, note that, by construction of the commitment keys for all \(i\in \left\{ 1,\ldots ,\nu \right\} \) and \(j\in \left\{ 1,\ldots ,2^{i-1}\right\} \), it holds that \([r_{2^{i-1}+j}]_1 = x_i [r_{j}]_1\), which means that \([r']_1= [r_1']_1 = c^{-1}[r_1]_1 + c^{-2}[r_{2^{\nu -1}+1}]_1 = c^{-1}[r]_1 + c^{-2}x_{\nu }[r]_1\) and the verifier always accepts the pairing test.

It remains to show that \((\textsf {pp} , [r']_1, [\mathbf {r'}]_1, [\mathbf {x}']_2) \in \mathcal {L}_\textsf {Com} \). It is evident that \([r'_1]_1 = [r']_1\). We show that the various Diffie-Hellman Relations hold for the reduced statement.

Let \(i\in \left\{ 1,\ldots ,\nu -1\right\} \) and \(j\in \left\{ 1,\ldots ,2^{i-1}\right\} \). It holds that \([r_{2^{i-1}+j}']_1 = x_i[r_j']_1\). Indeed,

$$ \begin{aligned}{}[r_{2^{i-1}+j}']_1&= c^{-1}[r_{2^{i-1}+j}]_1+c^{-2}[r_{2^{\nu -1}+2^{i-1}+j}]_1 = c^{-1}x_i[r_{j}]_1+x_\nu x_i c^{-2}[r_{j}]_1 \\&= x_i(c^{-1}[r_{j}]_1+x_\nu c^{-2}[r_{j}]_1) = x_i[r_{j}']_1. \end{aligned} $$

Similar calculations show the part related to \(\mathbf {s'}\). We can now argue completeness exactly as in the \(\mathcal {U}_{2^\nu }\) case.

Witness extended emulation: For witness extended emulation we need to prove that, for each round, we can extract the witness, i.e. the commitment key and the commitment openings w.r.t. it. We show next how to extract the commitment keys. After having these, we can argue as in [10] except that we use the corresponding \(\mathcal {D}_n\)-Find-Rep Assumption.

Assume we get two accepting transcripts for different challenges c from the prover. We show that given a witness for the reduced statement, we can extract the unique valid commitment keys \([\mathbf {r}]_1,[\mathbf {s}]_1\).

Let \([\mathbf {r'}_b]_1 = c_b^{-1}[\mathbf {r}_{\frac{1}{2}}]_1 + c_b^{-2}[\mathbf {r}_{\frac{2}{2}}]_1\) be the new commitment keys for two different challenges \(c_0, c_1\). The matrix with rows \((c_b^{-1}, c_b^{-2})\) for \(b\in \left\{ 0,1\right\} \) is invertible, so we can take appropriate linear combinations and extract \([\mathbf {r}_{\frac{1}{2}}]_1\), \([\mathbf {r}_{\frac{2}{2}}]_1\). We show that this is the commitment key. First note that, since the transcript is accepting, we have \([r'_{2^{i-1}+j}]_1= x_i[r'_{j}]_1\) for both reduced keys, which means that \([r_{2^{i-1}+j}]_1 = x_i[r_{j}]_1\) and \([r_{2^{\nu -1}+2^{i-1}+j}]_1 = x_i[r_{2^{\nu -1}+j}]_1\) for all \(i\le \nu -1, j\le 2^{i-1}\). In other words, \([\mathbf {r}_{\frac{1}{2}}]_1\) and \([\mathbf {r}_{\frac{2}{2}}]_1\) are valid commitment keys w.r.t. the same \([x_1]_2,\ldots ,[x_{\nu -1}]_2\). By the pairing test, we have \([r'_b]_1 = c_b^{-1} [r]_1 +c_b^{-2}x_\nu [r]_1 = c_b^{-1} [r_{\frac{1}{2},1}]_1 +c_b^{-2}[r_{\frac{2}{2},1}]_1\). This equation holds for both challenges \(c_b\), so it must be the case that \([r_{\frac{1}{2},1}]_1 = [r]_1\) and \([r_{\frac{2}{2},1}]_1 = x_\nu [r]_1\); thus the extracted key is the unique key determined by \([x_1]_2,\ldots ,[x_\nu ]_2\). We argue for \([\mathbf {s}]_1\) in the same way. After extracting the keys, the extractor works exactly as in [10] to extract \(\mathbf {a},\mathbf {b}\).

Complexity: It is evident that the protocol needs \(\nu \) rounds. In each round the size of the witness is halved and a constant number of elements is communicated, so the communication complexity is \(\mathcal {O_\lambda }(\nu )\). The prover in round i performs \(\mathcal {O_\lambda }(2^{\nu -i+1})\) operations, so the prover complexity is \(\mathcal {O_\lambda }\left( \sum _{i=1}^{\nu } 2^{\nu -i+1}\right) = \mathcal {O_\lambda }(2^{\nu })\), while the verifier does \(\mathcal {O_\lambda }(1)\) operations per round, so its complexity is \(\mathcal {O_\lambda }(\nu )\). To be more concrete, the communication complexity is \(8\log n\) elements in \(\mathbb {G}_1\) and \(2\log n\) elements in \(\mathbb {Z}_q\). Prover complexity is dominated by \(4\log n\) multi-exponentiations in \(\mathbb {G}_1\) of sizes \(\frac{n}{2^i}\) to compute the first 4 messages in each round, and fewer than 4n \(\mathbb {G}_1\) exponentiations to compute all the keys; in total, 8n exponentiations in \(\mathbb {G}_1\) with a non-optimized implementation of multi-exponentiations. \(\blacksquare \)

6 Updateable Zero Knowledge SNARK for CSAT

We could use the improved inner product argument in a black-box way to improve the verification of the zero-knowledge protocol of Bootle et al. [10]. However, the verifier’s inefficiency in [10] is twofold: the linear time needed to verify the inner product argument, and some computation specific to the circuit. The latter is inherent to universal arguments, since the verifier needs to at least read the circuit. The way to solve this is to add a circuit setup phase, so the verifier needs to read the circuit only once. For a universal argument, this circuit setup must involve no secrets; that is, it should be a deterministic algorithm taking as input the Universal CRS and the circuit description. In this section, we give a sketch of the proof of Bootle et al. and explain where this source of inefficiency occurs in their construction. Then, we show how to overcome it using techniques similar to Sonic [32].

Roughly, the proof of [10] works as follows:

  • \(\mathcal {P}\) commits to its witness (a satisfying wire assignment) \(\mathbf {w}\).

  • \(\mathcal {V}\) issues a random challenge y.

  • \(\mathcal {P}\) computes a polynomial \(t(X)=\mathbf {q}_y(X)^\top (\mathbf {q}_y(X)\circ \mathbf {y^n}+2\mathbf {s}_y(X))+2K\) where \(\mathbf {q}_y(X)\) is a vector of polynomials that depends on \(\mathbf {w}\) and y, and \(\mathbf {s}_y\) on the circuit structure and the challenge y. K is a value that depends on the public input and y. The polynomial t(X) has zero constant coefficient if and only if the circuit is satisfiable w.o.p. over the choice of y. It then sends a commitment to the polynomial t(X) which has constant degree (it can commit to its coefficients using standard Pedersen Commitments).

  • \(\mathcal {V}\) picks and sends a random challenge x to the prover. \(\mathcal {V}\) then computes commitments to \(\mathbf {q}_y(x)\), \(\mathbf {q}_y(x)\circ \mathbf {y^n}\), \(\mathbf {s}_y(x)\) and K. The first two values are computed from a commitment to \(\mathbf {w}\), utilizing the homomorphic properties of the commitment scheme, \(\mathbf {s}_y\) is computed from the circuit description, and K is computed efficiently from the public input.

  • \(\mathcal {P}\) decommits to \(t_x = t(x)\). \(\mathcal {V}\) checks this claim, and the prover and verifier execute an inner product protocol to assert that \(t(x)-2K=\mathbf {q}_y(x)^\top (\mathbf {q}_y(x)\circ \mathbf {y^n}+2\mathbf {s}_y(x))\). This convinces the verifier that the polynomial t(X) indeed has a zero constant term and was computed honestly, and thus of the claim.

The verifier’s work in [10] is linear in the circuit size for three reasons:

  • The inner product protocol in the last step needs linear time.

  • Computing a commitment to \(\mathbf {q}_y(x)\circ \mathbf {y^n}\) needs linear time.

  • Computing a commitment to \(\mathbf {s}_y(x)\) needs linear time.

The first two problems can be addressed easily: the first by using the improved inner product protocol, and the second by utilizing the structure of the \(\mathcal {M}\mathcal {L}_n\) or \(\mathcal {P}\mathcal {W}_n\) distributions: the key homomorphic properties described in Subsect. 4.1 allow us to efficiently obtain a commitment to \(\mathbf {q}_y(x)\circ \mathbf {y^n}\) from a commitment to \(\mathbf {q}_y(x)\) in logarithmic time.

The most subtle point is computing a commitment to \(\mathbf {s}_y(x)\), which depends on the circuit structure and the challenge y. We solve this by applying techniques similar to Sonic [32]. We first preprocess the circuit to impose a specific structure that allows us to “commit” to it efficiently. Then we use an aggregated Grand Product protocol, introduced in the next section, to delegate the computation of \(\mathbf {s}_y(x)\) to the prover. We closely follow Sonic in the handling of this issue but differ in the setting: in this work we delegate the computation of a vector commitment, while in Sonic the prover decommits to bivariate polynomials of a specific form using a univariate polynomial commitment scheme.

We present in the full version the preprocessing for the general case, which incurs only a constant overhead, so parameters remain optimal (i.e. linear in the size of the computation).

6.1 Description of the ZK Argument

We assume that the circuit is preprocessed (see the full version) and has \(n-1\) multiplication gates for \(n=2^\nu \) (the last element will be used as a blinding factor). The size of the public input and output is \(n'\). The circuit is satisfiable iff the following constraints hold

$$ \mathbf {a}\circ \mathbf {b} - \mathbf {c} = \mathbf {0}, $$
$$ \left\{ a_i+\mathbf {w_{a,i}}^\top \mathbf {c} = 0\right\} _{i\in \left\{ {n'+1,\ldots ,n-1}\right\} }, \qquad \left\{ a_i+\mathbf {w_{a,i}}^\top \mathbf {c} -\chi _i = 0\right\} _{i\in \left\{ {1,\ldots ,n'}\right\} }, $$
$$ \left\{ b_i+\mathbf {w_{b,i}}^\top \mathbf {c} = 0\right\} _{i\in \left\{ {1,\ldots ,n-1}\right\} }, $$

where \(\mathbf {x}=(\chi _1,\ldots ,\chi _{n'})\) is the public input and \(\mathbf {w_{a,i}}=\mathbf {0}\) for \(i\in \left\{ 1,\ldots ,n'\right\} \). These equations are satisfied iff the circuit is satisfiable w.r.t. the input \(\mathbf {x}\).

We can aggregate these equations as follows: First, add one extra zero element \(a_n=b_n=0\) to \(\mathbf {a},\mathbf {b},\mathbf {c}\) to make them have \(2^\nu \) elements (these will be used as a blinding factor) and two extra zero constraints \(a_n + \mathbf {0}^\top \mathbf {c}-0=0\), \(b_n + \mathbf {0}^\top \mathbf {c}-0=0\) and set

$$ \begin{aligned} p_m(Y) = (\mathbf {a}\circ \mathbf {b} - \mathbf {c})^\top \mathbf {Y^n} =\mathbf {a}^\top (\mathbf {b}\circ \mathbf {Y^n})- \mathbf {c}^\top \mathbf {Y^n}, \end{aligned} $$
$$ \begin{aligned} p_a(Y)&= \sum _{i=1}^n\left( a_i+\mathbf {w_{a,i}}^\top \mathbf {c}\right) Y^{i-1} -\sum _{i=1}^{n'}\chi _i Y^{i-1} =\mathbf {a}^\top \mathbf {Y^n}+\mathbf {c}^\top \sum _{i=1}^n\mathbf {w_{a,i}}Y^{i-1}-\sum _{i=1}^{n'}\chi _i Y^{i-1}, \end{aligned} $$
$$ \begin{aligned} p_b(Y)&= \sum _{i=1}^n\left( b_i+\mathbf {w_{b,i}}^\top \mathbf {c}\right) Y^{i-1} =\mathbf {b}^\top \mathbf {Y^n}+\mathbf {c}^\top \sum _{i=1}^n\mathbf {w_{b,i}}Y^{i-1}. \end{aligned} $$

Now, let

$$\begin{aligned} p(Y) = p_m(Y) +Y^np_a(Y)+Y^{2n}p_b(Y). \end{aligned}$$

The polynomial p is identically zero iff the circuit is satisfiable. For a fixed y, we define \(\mathbf {w_a}, \mathbf {w_b}, K\) as follows:

$$\begin{aligned} \mathbf {w_a} = \sum _{i=1}^n\mathbf {w_{a,i}}y^{i-1}, \qquad \mathbf {w_b} = \sum _{i=1}^n\mathbf {w_{b,i}}y^{i-1},\qquad K = -y^{n}\sum _{i=1}^{n'}\chi _iy^{i-1}. \end{aligned}$$
(1)

Note that these values only depend on the circuit, the input and the challenge. We now get

$$ \begin{aligned} p(y) = \mathbf {a}^\top (\mathbf {b}\circ \mathbf {y^n}) + y^n\mathbf {a}^\top \mathbf {y^n} + y^{2n}\mathbf {b}^\top \mathbf {y^n} +\mathbf {c}^\top (y^n\mathbf {w_a}+y^{2n}\mathbf {w_b}-\mathbf {y^n})+K. \end{aligned} $$
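As a sanity check, the aggregation step can be tested numerically. The following sketch is a toy instance (the parameters \(n=4\), \(n'=2\) and the use of a small Mersenne prime in place of the group order are our own illustrative choices, not part of the protocol); it verifies that the collapsed expression for p(y) above agrees with \(p_m(y) + y^np_a(y) + y^{2n}p_b(y)\) for arbitrary, not necessarily satisfying, vectors.

```python
import random

q = 2**61 - 1  # a Mersenne prime, standing in for the group order
rng = random.Random(0)
n, n_in = 4, 2  # toy circuit size n and public input size n'

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

yv = rng.randrange(1, q)
yn = [pow(yv, i, q) for i in range(n)]          # the vector y^n
a = [rng.randrange(q) for _ in range(n)]
b = [rng.randrange(q) for _ in range(n)]
c = [rng.randrange(q) for _ in range(n)]
chi = [rng.randrange(q) for _ in range(n_in)]   # public input x
# one weight vector per constraint (w_{a,i} = 0 for i <= n')
wa_rows = [[0] * n for _ in range(n_in)] + \
          [[rng.randrange(q) for _ in range(n)] for _ in range(n - n_in)]
wb_rows = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]

# left-hand side: p_m(y) + y^n p_a(y) + y^{2n} p_b(y)
p_m = (inner(a, [bi * yi % q for bi, yi in zip(b, yn)]) - inner(c, yn)) % q
p_a = (inner(a, yn)
       + sum(inner(wa_rows[i], c) * yn[i] for i in range(n))
       - sum(chi[i] * yn[i] for i in range(n_in))) % q
p_b = (inner(b, yn) + sum(inner(wb_rows[i], c) * yn[i] for i in range(n))) % q
lhs = (p_m + pow(yv, n, q) * p_a + pow(yv, 2 * n, q) * p_b) % q

# right-hand side: the collapsed expression, with w_a, w_b, K as in Eq. (1)
wa = [sum(wa_rows[i][j] * yn[i] for i in range(n)) % q for j in range(n)]
wb = [sum(wb_rows[i][j] * yn[i] for i in range(n)) % q for j in range(n)]
K = (-pow(yv, n, q) * sum(chi[i] * yn[i] for i in range(n_in))) % q
rhs = (inner(a, [bi * yi % q for bi, yi in zip(b, yn)])
       + pow(yv, n, q) * inner(a, yn)
       + pow(yv, 2 * n, q) * inner(b, yn)
       + inner(c, [(pow(yv, n, q) * wa[j] + pow(yv, 2 * n, q) * wb[j] - yn[j]) % q
                   for j in range(n)])
       + K) % q

assert lhs == rhs
```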

We can now construct polynomials

$$\begin{aligned} \mathbf {q}(X) = \mathbf {a}X+ \mathbf {b}X^{-1} + \mathbf {c} X^2 + \mathbf {d}X^3, \end{aligned}$$
$$\begin{aligned} \mathbf {s}(X)={y^n}\mathbf {y^n}X^{-1}+{y^{2n}}\mathbf {y^n}X+(y^n\mathbf {w_a}+y^{2n}\mathbf {w_b}-\mathbf {y^n}) X^{-2}, \end{aligned}$$
$$\begin{aligned} t(X) = \mathbf {q}(X)^\top \left( \mathbf {q}(X)\circ \mathbf {y^n} + 2\mathbf {s}(X)\right) + 2K. \end{aligned}$$

Here \(\mathbf {d}\) is a blinding vector chosen by the prover. The constant term of t(X) equals 2p(y). The prover can now commit to the non-zero coefficients of t using a standard Pedersen commitment, after which the verifier issues a new challenge x. The prover reveals t at this value, and the verifier must be convinced that the decommitted value equals the right-hand side above. To do so, after agreeing on (commitments to) the vectors \(\mathbf {q}(x)\) and \(\mathbf {q}(x)\circ \mathbf {y^n}+2\mathbf {s}(x)\), they execute an inner product protocol to assert that their inner product is \(t(x)-2K\). If that is the case, the verifier can be confident that the constant term of t(X) is indeed zero, and thus the assignment is satisfying. We sketch how the verifier computes the two commitments needed for the inner product protocol.
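The claim that the constant coefficient of t(X) equals 2p(y) can be checked by collecting the degree-zero (in X) products of \(\mathbf {q}(X)^\top (\mathbf {q}(X)\circ \mathbf {y^n}+2\mathbf {s}(X))\). The following is a hedged numeric sketch with toy dimensions, where random values stand in for \(\mathbf {a},\mathbf {b},\mathbf {c},\mathbf {d},\mathbf {w_a},\mathbf {w_b}\) and K; it is an illustration of the identity, not protocol code.

```python
import random

q = 2**61 - 1  # stand-in prime field
rng = random.Random(1)
n = 4

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

def had(u, v):
    return [x * y % q for x, y in zip(u, v)]

yv = rng.randrange(1, q)
yn = [pow(yv, i, q) for i in range(n)]
a, b, c, d = ([rng.randrange(q) for _ in range(n)] for _ in range(4))
wa, wb = ([rng.randrange(q) for _ in range(n)] for _ in range(2))
K = rng.randrange(q)
y_n, y_2n = pow(yv, n, q), pow(yv, 2 * n, q)

# Laurent coefficients of q(X) and s(X): exponent -> vector coefficient
qc = {1: a, -1: b, 2: c, 3: d}
sc = {-1: [y_n * t % q for t in yn],
      1: [y_2n * t % q for t in yn],
      -2: [(y_n * wa[j] + y_2n * wb[j] - yn[j]) % q for j in range(n)]}

# constant term of t(X) = q(X)^T (q(X) o y^n + 2 s(X)) + 2K
const = 2 * K
for e1, u in qc.items():
    for e2, v in qc.items():
        if e1 + e2 == 0:
            const += inner(u, had(v, yn))
    if -e1 in sc:
        const += 2 * inner(u, sc[-e1])
const %= q

# p(y) as given by the collapsed expression in the text
p_y = (inner(a, had(b, yn)) + y_n * inner(a, yn) + y_2n * inner(b, yn)
       + inner(c, [(y_n * wa[j] + y_2n * wb[j] - yn[j]) % q for j in range(n)])
       + K) % q

assert const == 2 * p_y % q
```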

Let \(\textsf {ck} _1\) be a commitment key defined in the CRS. The commitment to \(\mathbf {q}(x)\) w.r.t. \(\textsf {ck} _1\) can be computed via the homomorphic properties of the commitment scheme from the commitments to \(\mathbf {a}\), \(\mathbf {b}\), \(\mathbf {c}\), \(\mathbf {d}\) w.r.t. \(\textsf {ck} _1\), which the prover issues in the first round.

Now, a commitment to \(\mathbf {q}(x)\circ \mathbf {y^n}+2\mathbf {s}(x)\) is needed to run the inner product argument. A commitment to \(\mathbf {q}(x)\circ \mathbf {y^n}\) can be computed by the verifier by deriving a new key \(\textsf {ck} _2\) such that the commitment to \(\mathbf {q}(x)\) w.r.t. \(\textsf {ck} _1\) is a commitment to \(\mathbf {q}(x)\circ \mathbf {y^n}\) w.r.t. \(\textsf {ck} _2\), as described in Subsect. 4.1. It remains to compute a commitment to \(\mathbf {s}(x)\) w.r.t. \(\textsf {ck} _2\).

Note that \(\mathbf {s}(x)\) only depends on public values, so the verifier can compute it, but it would need linear time to do so. However, if the verifier had commitments to \(\mathbf {w_a},\mathbf {w_b}\), it could compute the commitment to \(\mathbf {s}(x)\) succinctly. To get such commitments, it delegates their computation to the prover. Assuming a preprocessed circuit, its description is given by matrices of the form \(\mathbf {W_a} = \sum _{k=1}^M \mathbf {W_{a,k}}\), where the \(\mathbf {W_{a,k}}\) are matrices with at most one non-zero value in each column and row (and similarly for \(\mathbf {b}\)). It follows from Eq. 1 and the structure of the preprocessed circuit matrix \(\mathbf {W_a}\) that the verifier needs a commitment to

$$\begin{aligned} \mathbf {w_a}= \sum _{i=1}^n\mathbf {w_{a,i}}y^{i-1} = \mathbf {y^n}^\top \mathbf {W_a} = \mathbf {y^{n}}^\top \sum _{k=1}^M \mathbf {W_{a,k}}=\sum _{k=1}^M\sigma _k(\mathbf {y^{n}})\circ \mathbf {w_{a,k}}, \end{aligned}$$

for known vectors \(\mathbf {w_{a,k}}\) and permutations \(\sigma _k\).

We sketch this delegation in the next section; a detailed description of the protocol is given in the full version.

We state next the theorem which is the main result of our work.

Theorem 5

There exists a Public Coin, Honest Verifier Zero Knowledge Argument of Knowledge for CSAT with \(\mathcal {O}(\log \left| \mathcal {C}\right| )\) round complexity, \(\mathcal {O_\lambda }(\left| \mathcal {C}\right| )\) prover complexity, and \(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\) communication and verification complexity, under either the \(\mathcal {M}\mathcal {L}_{\left| \mathcal {C}\right| }\)-Find-Rep or the \(\mathcal {P}\mathcal {W}_{\left| \mathcal {C}\right| }\)-Find-Rep assumption. The argument yields a Universally Updateable NIZK AoK in the random oracle model. In the former case the proof size of an update is \(\mathcal {O_\lambda }(\log \left| \mathcal {C}\right| )\), and in the latter \(\mathcal {O_\lambda }(1)\).

We note that, to achieve updateability, we rely on a NIZK AoK for proving correctness of the updates. One can be flexible in selecting such a NIZK AoK to fine-tune efficiency. For example, one could combine the \(\mathcal {M}\mathcal {L}_{\left| \mathcal {C}\right| }\)-Find-Rep scheme with [10] as the underlying NIZK AoK for updateability to achieve \(\mathcal {O_\lambda }(\log \log \left| \mathcal {C}\right| )\) proof size for proving correctness of an update.

7 Proof of Vector Permutation

We use techniques similar to Sonic's to handle the computation related to the structure of the circuit. For simplicity, we consider only the case of the left wires, i.e. the commitment to \(\mathbf {w_a}\). The problem boils down to the following.

Let \(\textsf {ck} _1 = (\textsf {ck} _1^\mathcal {P},\textsf {ck} _1^\mathcal {V}) = ([\mathbf {r}]_1,([x_1]_2,\ldots ,[x_\nu ]_2))\) be a commitment key defined in the CRS, \([\omega _{a,1}]_1, \ldots , [\omega _{a,M}]_1\) be commitments to vectors \(\mathbf {w_{a,1}},\ldots ,\mathbf {w_{a,M}}\) w.r.t. \(\textsf {ck} _1\), and \(\sigma _{a,1}\), \(\ldots ,\) \(\sigma _{a,M}\) be commitments to permutations w.r.t. \(\textsf {ck} _1\) (i.e. \(\textsf {Com} _{\textsf {ck} _1}(v_{a,i})\) where \(v_{a,i}=(\sigma _{a,i}(1), \ldots , \sigma _{a,i}(n))\)). These commitments succinctly encode the circuit structure. Given a value y and a commitment key \(\textsf {ck} _2 = (\textsf {ck} _2^\mathcal {P},\textsf {ck} _2^\mathcal {V}) = (\mathbf {r}\circ \mathbf {y^{-n}},([x_1y^{2^0}]_2, \ldots , [x_\nu y^{2^{\nu -1}}]_2))\), compute with the help of the prover a commitment \([\omega _a]_1\) to the vector \(\mathbf {w_a} = \mathbf {w_{a,1}}\circ \sigma _{a,1}(\mathbf {y^n}) + \ldots + \mathbf {w_{a,M}}\circ \sigma _{a,M}(\mathbf {y^n})\) w.r.t. \(\textsf {ck} _2\), where \(\sigma _{a,i}(\mathbf {y^n}) = (y^{\sigma _{a,i}(1)-1},\ldots ,y^{\sigma _{a,i}(n)-1})\).

Note that all the commitments that do not depend on the challenge y can be computed once in a (deterministic) preprocessing phase and reused across multiple proofs. The goal is to allow the verifier to compute the challenge-dependent values in logarithmic time. These values are public, and a linear-time verifier could compute them on its own, though sacrificing succinctness.

The main difference with Sonic is the setting. Sonic works with permutation polynomials, that is, polynomials of the form \(p_i(X,Y) = \sum a_jX^jY^{\sigma _i(j)}\), and the goal is to decommit to an evaluation at x, y of a polynomial \(p(X,Y)=\sum _{i=1}^Mp_i(X,Y)\); that is, the prover wants to reveal p(x, y).

In both our work and Sonic, the heart of the protocol is a Permutation Argument which uses a Grand Product Argument [3, 11]. We reduce the Grand Product Argument to an inner product and utilize the inner product argument of Sect. 5, while Sonic reduces it to verifying a value of a univariate polynomial and utilizes a univariate polynomial commitment scheme.

We next sketch the delegation protocol, and in the following subsection we describe how to reduce the Grand Product to an inner product.

To proceed, the prover and the verifier do the following:

  • The prover helps the verifier compute a commitment to \(\mathbf {y^{-n}}\) w.r.t. \(\textsf {ck} _1\), as explained in Subsect. 4.1.

  • The prover provides values \([\upsilon _i]_1\), for \(1\le i\le M\), which it claims are commitments to \(\sigma _{a,i}(\mathbf {y^n})\) w.r.t. \(\textsf {ck} _1\).

  • The prover gives \([\omega _{a,i}]_1\) and claims they are commitments to \(\mathbf {w_{a,i}}\circ \sigma _{a,i}(\mathbf {y^n})\) w.r.t. \(\textsf {ck} _1\).

  • The prover gives \([\omega _{a,i}']_1\) and claims they are commitments to \(\mathbf {w_{a,i}}\circ \sigma _{a,i}(\mathbf {y^n})\circ \mathbf {y^{-n}}\) w.r.t. \(\textsf {ck} _1\). Equivalently, these are commitments to \(\mathbf {w_{a,i}}\circ \sigma _{a,i}(\mathbf {y^n})\) w.r.t. \(\textsf {ck} _2\).

  • The prover and the verifier aggregate and reduce all the above claims to an inner product, which is verified by the improved inner product.

  • The verifier sets \([\omega _a]_1=[\omega _{a,1}']_1+\ldots +[\omega _{a,M}']_1\) as a commitment to \(\mathbf {w_a}\).

We present a sketch for reducing a Grand Product to an inner product in the next section. In the full version we present how we can aggregate all the above claims, and give a description of the protocol.

7.1 Proof of Grand Product

Let \(\textsf {ck} _1 = (\textsf {ck} _1^\mathcal {P},\textsf {ck} _1^\mathcal {V}) = ([\mathbf {r}]_1, ([x_1]_2,\ldots ,[x_\nu ]_2))\) be a commitment key. Also, let \(\mathbf {a_1} = (a_1,a_2,\ldots ,a_n)\) and \(\mathbf {b_1} = (b_1,b_2,\ldots ,b_n)\) be vectors, and \([\alpha _{1}]_1, [\beta _{1}]_1\) be commitments to them w.r.t. \(\textsf {ck} _1\). The claim is that \(\prod a_{i} = \prod b_{i}\).

Let \(\mathbf {a_2} = (1,a_1,a_1a_2,\ldots ,a_1\cdots a_{n-1})\) be the vector of partial products and \(\mathbf {a_3}=\mathbf {a_2}\circ \mathbf {a_1}\); we define \(\mathbf {b_2}\), \(\mathbf {b_3}\) similarly. One can easily verify that \(a_{3,n} = \prod _{i=1}^na_{i}\) and \(b_{3,n} = \prod _{i=1}^nb_{i}\). To convince the verifier, the prover gives commitments \([\alpha _2]_1\), \([\alpha _3]_1\), \([\beta _2]_1\), \([\beta _3]_1\) to the vectors \(\mathbf {a_2}\), \(\mathbf {a_3}\), \(\mathbf {b_2}\), \(\mathbf {b_3}\) w.r.t. \(\textsf {ck} _1\), convinces it that they have the right form, and proves that \(a_{3,n}=b_{3,n}\).
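To illustrate, here is a minimal sketch of the partial-product vectors and the shifted-equality constraints they must satisfy (plain integers instead of \(\mathbb {Z}_q\), and toy vectors of our choosing):

```python
from functools import reduce

# toy vectors over the integers; in the protocol everything lives in Z_q
a1 = [3, 5, 2, 7]
b1 = [7, 2, 15, 1]  # same grand product: 3*5*2*7 = 210 = 7*2*15*1

def partials(v):
    out, acc = [1], 1
    for x in v[:-1]:
        acc *= x
        out.append(acc)
    return out  # (1, v1, v1*v2, ..., v1*...*v_{n-1})

a2, b2 = partials(a1), partials(b1)
a3 = [x * y for x, y in zip(a1, a2)]  # a3 = a1 o a2
b3 = [x * y for x, y in zip(b1, b2)]

# shifted-equality constraints: a_{2,1} = 1 and a_{2,i} = a_{3,i-1}
assert a2[0] == 1 and all(a2[i] == a3[i - 1] for i in range(1, len(a1)))
# the last entries carry the grand products
assert a3[-1] == reduce(lambda x, y: x * y, a1)
assert a3[-1] == b3[-1]
```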

We express these requirements as a set of quadratic and linear constraints. We use different variables Y, W for the various groups of equations for presentational convenience, but we could use a single variable Y and set \(W=Y^k\) for an appropriate k.

$$\begin{aligned} a_{3,n}=b_{3,n}, \end{aligned}$$
$$\begin{aligned} \mathbf {a_1}\circ \mathbf {a_2}&=\mathbf {a_3},\qquad \qquad \qquad \mathbf {b_1}\circ \mathbf {b_2} =\mathbf {b_3},\\ a_{2,1}&=1, \qquad \qquad \qquad \;\qquad b_{2,1}=1, \\ \{a_{2,i}&=a_{3,i-1}\}_{i=2}^n, \qquad \qquad \{b_{2,i}=b_{3,i-1}\}_{i=2}^n. \\ \end{aligned}$$

We show how to reduce these equations to an inner product. We can aggregate the two Hadamard products by setting

$$p_{1}(Y)=\mathbf {a_1}^\top (\mathbf {a_2}\circ \mathbf {\mathbf {Y^n}}) - \mathbf {a_3}^\top \mathbf {Y^n},\quad p_{2}(Y)=\mathbf {b_1}^\top (\mathbf {b_2}\circ \mathbf {\mathbf {Y^n}}) - \mathbf {b_3}^\top \mathbf {Y^n}.$$

We also set

$$\begin{aligned} p_{3}(Y)&= (a_{2,1}-1) + \sum _{i=2}^n(a_{2,i}-a_{3,i-1})Y^{i-1} = \mathbf {a_2}^\top \mathbf {Y^\mathbf {n}} -Y\mathbf {a_3}^\top (\mathbf {Y^n}-Y^{n-1}\mathbf {e_n}) - 1,\\ p_{4}(Y)&= (b_{2,1}-1) + \sum _{i=2}^n(b_{2,i}-b_{3,i-1})Y^{i-1} = \mathbf {b_2}^\top \mathbf {Y^\mathbf {n}} -Y\mathbf {b_3}^\top (\mathbf {Y^n}-Y^{n-1}\mathbf {e_n}) - 1,\\ p_5(Y)&=a_{3,n}-b_{3,n} =\mathbf {e_n}^\top \mathbf {a_3}-\mathbf {e_n}^\top \mathbf {b_3}. \end{aligned}$$

and \(p(Y,W) = p_{1}(Y) + Wp_{2}(Y)+W^2p_{3}(Y)+W^3p_{4}(Y)+W^4p_5(Y)\). The polynomial p is identically zero if and only if the constraints are satisfied. We use the technique of Bootle et al. to embed it in the constant term of a polynomial (similarly to the previous section). The resulting polynomials are

$$\begin{aligned} \mathbf {q}(X)&= \mathbf {a_1}X + \mathbf {a_2}X^{-1} + w\mathbf {b_1}X^{2} + \mathbf {b_2}X^{-2}+\mathbf {a_3}X^3+\mathbf {b_3}X^4, \\ \mathbf {s}(X)&= w^2\mathbf {y^n}X + w^3\mathbf {y^n}X^{2}\\ {}&\qquad + \left( -w^2y(\mathbf {y^n}-y^{n-1}\mathbf {e_n})+w^4\mathbf {e_n}-\mathbf {y^n}\right) X^{-3}\\ {}&\qquad + \left( -w^3y(\mathbf {y^n}-y^{n-1}\mathbf {e_n})-w^4\mathbf {e_n}-w\mathbf {y^n}\right) X^{-4}, \\ t(X)&= \mathbf {q}(X)^\top \left( \mathbf {q}(X)\circ \mathbf {y^n}+2\mathbf {s}(X)\right) -2w^2-2w^3. \end{aligned}$$

The verifier computes a commitment to \(\mathbf {q}(x)\circ \mathbf {y^{n}}\) w.r.t. the new commitment key (defined by the challenge y) as in the previous section. As for the commitment to \(\mathbf {s}(x)\) w.r.t. this new key, however, the verifier can compute it itself: it only needs commitments to \(\mathbf {y^n}\) and to \(y^{n-1}\mathbf {e_n}\) w.r.t. the new key. For the first, it is given a commitment \([o]_1 = [\mathbf {1^n}^\top \mathbf {r}]_1\) with the initial key \(\textsf {ck} _1\), and for the second, the last group element of the initial commitment key, \([r_n]_1\). The desired commitments w.r.t. the new key are \([o]_1\) and \([r_n]_1\). The prover and the verifier then proceed as in the CSAT case. Both \([o]_1\) and \([r_n]_1\) can be precomputed once. The detailed protocol is given in the full version.

Extending to multiple Grand Products. It is straightforward to extend this protocol to prove M grand products simultaneously. We can also add kM quadratic equations of the form \(\mathbf {c_1}\circ \mathbf {c_2}=\mathbf {c_3}\) to include the remaining constraints needed to compute the commitments for the CSAT case. We include the modified system of equations in the full version.

7.2 Proof of Known Permutation

Let \([\mathbf {r}]_1\) be a commitment key of size \(n=2^\nu \), \([\alpha ]_1=[\mathbf {a}^\top \mathbf {r}]_1\), \([\beta ]_1=[\mathbf {b}^\top \mathbf {r}]_1\), and \(\sigma \in S_n\) be a permutation of \(\left\{ 1,\ldots ,n\right\} \). The prover wants to convince the verifier that \(b_i=a_{\sigma (i)}\) for all i.

In the same spirit as [32], we use the proof system of [3, 11]. The verifier is given as input commitments to \((1,\ldots ,n)\) and \((\sigma (1),\ldots ,\sigma (n))\), denoted \([\iota ]_1\) and \([\iota ^\pi ]_1\) respectively, and a commitment to \(\mathbf {1^n}\), denoted \([o]_1\). The idea is to reduce this problem to checking whether two vectors have equal grand products.

The verifier issues two challenges \(t,u\in \mathbb {Z}_q\) and the prover needs to convince the verifier that

$$\begin{aligned} \prod _{i=1}^n (b_i + t\sigma _i - u) = \prod _{i=1}^n (a_i + ti - u). \end{aligned}$$

Viewing both sides as polynomials in u: if their respective root multisets \(\left\{ b_i + t\sigma _i\right\} _{i\in \left\{ 1,\ldots ,n\right\} }\) and \(\left\{ a_i + ti\right\} _{i\in \left\{ 1,\ldots ,n\right\} }\) are different, the two products differ at a random u with overwhelming probability (in a sufficiently large field). Moreover, except with negligible probability over the choice of t, these multisets coincide only if \(b_i = a_{\sigma (i)}\) for all i. Thus, it suffices to prove that the grand products of the commitments \([\beta ]_1+t[\iota ^\pi ]_1-u[o]_1\) and \([\alpha ]_1+t[\iota ]_1-u[o]_1\) (which the verifier can compute efficiently) are equal.
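A small sketch of this reduction checks both directions on concrete data (toy field size, random permutation; the variable names are illustrative choices, not the paper's notation):

```python
import random

q = 2**61 - 1  # stand-in for the group order
rng = random.Random(2)
n = 8

sigma = list(range(1, n + 1))
rng.shuffle(sigma)                       # a random permutation of {1,...,n}
a = [rng.randrange(q) for _ in range(n)]
b = [a[sigma[i] - 1] for i in range(n)]  # b_i = a_{sigma(i)}

def prod_side(v, tags, t, u):
    out = 1
    for vi, tag in zip(v, tags):
        out = out * ((vi + t * tag - u) % q) % q
    return out

t, u = rng.randrange(q), rng.randrange(q)   # verifier's challenges
lhs = prod_side(b, sigma, t, u)             # prod (b_i + t*sigma(i) - u)
rhs = prod_side(a, range(1, n + 1), t, u)   # prod (a_i + t*i - u)
assert lhs == rhs

# a non-permuted b fails the check (except with negligible probability)
b_bad = list(b)
b_bad[0] = (b_bad[0] + 1) % q
assert prod_side(b_bad, sigma, t, u) != rhs
```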

8 Range Proofs with Logarithmic Verifier

We present a new, more efficient aggregated range proof that allows a prover to convince a verifier that it knows openings of perfectly hiding commitments which all lie in a range \([0,2^m)\). This has applications in cryptocurrencies such as Monero to privatize transactions. Our approach resembles that of Bulletproofs [12]. The difference is that, in the inner product protocol of [12], the claimed inner product is encoded in the group (i.e. \(\mathbf {a}^\top \mathbf {b}[r]\)), while in our setting the inner product is given as an element of \(\mathbb {Z}_q\). We thus slightly modify things to work in our setting. We exploit two things to achieve logarithmic verification time: the improved inner product argument, and the ability to compute commitments to structured vectors of the form \(\mathbf {t^n}\) efficiently (either with the help of the prover or by modifying the commitment key). We present the blueprint of the scheme; details of the protocol are given in the full version.

Let \([0,2^m)\) be the desired range and let \(\nu \) be the smallest number such that \(n=2^\nu \ge m\). We first transform the statement into a set of linear and quadratic constraints, and we then construct a suitable inner product statement that holds if and only if the statement is correct w.o.p. Let \([\gamma ]_1=v[1]_1+\rho _c[r_2]_1\) be a hiding commitment to v (\([r_2]_1\) is used as a blinding factor for the commitment). Equivalently, we can consider this as a binding commitment to the n-dimensional vector \(\mathbf {c} = (v, \rho _c, 0, \ldots , 0)\), that is, \([\gamma ]_1 = [\mathbf {c}^\top \mathbf {r}]_1\) for a given commitment key \([\mathbf {r}]_1\). The prover computes the binary representation of v, padded at the end with zeros; denote the padded representation \(\mathbf {a}\). It is enough for the prover to show that:

  • \(\mathbf {a}^\top \mathbf {2^n} = \mathbf {c}^\top \mathbf {0^{n}}\) (note that we define \(\mathbf {0^n}\) to have 1 as its first element, i.e. \(\mathbf {0^n}=(1,0,\ldots ,0)\)).

  • \(\mathbf {a}\) has the first \(m-1\) elements equal to either 0 or 1.

  • \(\mathbf {a}\) has all other elements equal to zero.

  • \(\mathbf {c}\) has all but the first and second elements zero.

Now let \(b_i=a_i-1\) for \(1\le i<m\) and \(b_i=0\) for \(i\ge m\). We express these constraints and aggregate them as follows:

$$\mathbf {a}\circ \mathbf {b}=\mathbf {0},\qquad \left\{ a_i-b_i-1=0\right\} _{i=1}^{m-1},\qquad \mathbf {a}^\top \mathbf {2^n} = \mathbf {c}^\top \mathbf {0^{n}},$$
$$\left\{ a_i=0\right\} _{i=m}^{n},\qquad \left\{ b_i=0\right\} _{i=m}^{n},\qquad \left\{ c_i=0\right\} _{i=3}^{n}.$$
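A minimal sketch of this constraint system (plain integers for readability, with illustrative toy values m = 5, n = 8, v = 13 of our choosing) checks that a correctly padded bit decomposition satisfies every equation above:

```python
# toy check of the range-proof constraint system
m, n = 5, 8           # range parameter m and padded length n = 2^nu >= m
v, rho_c = 13, 99     # committed value and blinding randomness

a = [(v >> i) & 1 for i in range(m - 1)] + [0] * (n - m + 1)  # padded bits
b = [a[i] - 1 for i in range(m - 1)] + [0] * (n - m + 1)      # b_i = a_i - 1
c = [v, rho_c] + [0] * (n - 2)
e1 = [1] + [0] * (n - 1)  # the vector written 0^n in the text
two_n = [2**i for i in range(n)]

assert all(x * y == 0 for x, y in zip(a, b))                  # a o b = 0
assert all(a[i] - b[i] - 1 == 0 for i in range(m - 1))        # a_i - b_i = 1
assert sum(x * y for x, y in zip(a, two_n)) == \
       sum(x * y for x, y in zip(c, e1))                      # <a, 2^n> = v
assert all(a[i] == 0 and b[i] == 0 for i in range(m - 1, n))  # tail is zero
assert all(c[i] == 0 for i in range(2, n))                    # c zero past rho_c
```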

Now let \(\mathbf {Y_1}=(1,\ldots ,Y^{m-2},0,\ldots ,0)\in \mathbb {Z}_q^n\), \(\mathbf {Y_2}=(0,\ldots ,0,Y^{m-1},\ldots ,Y^{n-1})\in \mathbb {Z}_q^n\) and \(\mathbf {Y_3} = (0,0,Y^2,\ldots ,Y^{n-1})\in \mathbb {Z}_q^n\). We define polynomials

$$\begin{aligned} p_1(Y) = \mathbf {a}^\top (\mathbf {b}\circ \mathbf {Y^n}), \end{aligned}$$
$$\begin{aligned} p_2(Y) = \sum _{i=1}^{m-1} (a_i-b_i-1)Y^{i-1} + \sum _{i=m}^na_iY^{i-1} = \mathbf {a}^\top \mathbf {Y^n}-\mathbf {b}^\top \mathbf {Y_1}-\mathbf {1^n}^\top \mathbf {Y_1}, \end{aligned}$$
$$p_3(Y) = \sum _{i=m}^nb_iY^{i-1} = \mathbf {b}^\top \mathbf {Y_2},\quad p_4(Y) = \sum _{i=3}^{n} c_iY^{i-1} = \mathbf {c}^\top \mathbf {Y_3}.$$

The equations hold if and only if

$$\begin{aligned} p(Y) = p_1(Y) + Y^np_2(Y) + Y^{2n}p_3(Y) + Y^{3n}p_4(Y) + Y^{4n}(\mathbf {a}^\top \mathbf {2^n} - \mathbf {c}^\top \mathbf {0^{n}}) \end{aligned}$$

is identically zero. Similarly to the CSAT case, we define for fixed y

$$ \begin{aligned} \mathbf {q}(X)&= \mathbf {a}X + \mathbf {b}X^{-1} + \mathbf {c}X^2 + \mathbf {d}X^3, \\ \mathbf {s}(X)&= \left( y^n\mathbf {y^n}+y^{4n}\mathbf {2^n}\right) X^{-1} + \left( -y^n\mathbf {y_1} + y^{2n}\mathbf {y_2} \right) X \\&\qquad \qquad \quad +\left( y^{3n}\mathbf {y_3}-y^{4n}\mathbf {0^n}\right) X^{-2}, \\ t(X)&= \mathbf {q}(X)^\top (\mathbf {q}(X)\circ \mathbf {y^n}+2\mathbf {s}(X)) - 2y^n\mathbf {1^n}^\top \mathbf {y_1}. \end{aligned} $$

Now the constant term of t(X) is zero for every y if the constraints are satisfied, so we proceed exactly as in the proof system for CSAT, except that now it is easier to compute the vector \(\mathbf {s}(x)\). In particular, the verifier can efficiently compute \(\mathbf {s}(x)\) if it has commitments to \(\mathbf {1^n}\), \((\mathbf {1^{m-1}},\mathbf {0})\), \(\mathbf {1^n}-(\mathbf {1^{m-1}},\mathbf {0})\) and \((0,0,1,\ldots ,1,1)\). By the key homomorphic properties of the commitment scheme, these are commitments to \(\mathbf {y^n},\mathbf {y_1},\mathbf {y_2},\mathbf {y_3}\) w.r.t. the new key. The prover and verifier can efficiently compute a commitment to the vector \(\mathbf {2^n}\) w.r.t. the appropriate key as described in the polynomial commitment section. Finally, note that the inner product \(\mathbf {1^n}^\top \mathbf {y_1}=1+y+y^2+\ldots +y^{m-2}\) can be efficiently computed by the verifier. Indeed, assuming w.l.o.g. (otherwise apply recursively) that \(m-1=2^\mu \) for some \(\mu \), we have that \(1+y+y^2+\ldots +y^{2^{\mu }-1}=(1+y^{2^0})(1+y^{2^1})\cdots (1+y^{2^{\mu -1}})\), and the verifier can compute this in logarithmic time. The full protocol is presented in the full version. We note that aggregation techniques similar to [12] can be applied to the above.
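The verifier's product evaluation in the last step can be sketched directly. The following toy check (the values for y and \(\mu \) are illustrative) confirms that the \(\mu \)-factor product equals the \(2^\mu \)-term geometric sum while using only \(\mathcal {O}(\mu )\) multiplications:

```python
# log-time evaluation of 1 + y + ... + y^{m-2}, where m - 1 = 2^mu
q = 2**61 - 1  # stand-in prime field
y, mu = 1234567, 4

prod, ypow = 1, y % q
for _ in range(mu):       # (1 + y^{2^0})(1 + y^{2^1}) ... (1 + y^{2^{mu-1}})
    prod = prod * (1 + ypow) % q
    ypow = ypow * ypow % q  # repeated squaring: y^{2^k}

naive = sum(pow(y, i, q) for i in range(2**mu)) % q  # linear-time reference
assert prod == naive
```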

We state the main theorem for the Range Proof protocol.

Theorem 6

There exists a Public Coin, Honest Verifier Zero Knowledge Argument of Knowledge for the language \(\mathcal {L}_\textsf {RP}= \{m, [\alpha ]_1, [1]_{1,2}, [r_2]_{1,2} \mid \exists v,\rho _c \text { s.t. }\) \([\alpha ]_1=v[1]_1 + \rho _c[r_2]_1\wedge v<2^m\}\) with \(\log m+\mathcal {O}(1)\) round complexity, \(\mathcal O_\lambda (m)\) prover complexity, and \(\mathcal O_\lambda (\log m)\) communication and verification complexity, under either the \(\mathcal {M}\mathcal {L}_{{m}}\)-Find-Rep or the \(\mathcal {P}\mathcal {W}_{{m}}\)-Find-Rep assumption. The argument yields a Universally Updateable NIZK AoK in the random oracle model. In the former case the proof size of an update is \(\mathcal O_\lambda (\log m)\), and in the latter \(\mathcal O_\lambda (1)\).