
1 Introduction

Structure-preserving signatures (SPS) [4] are pairing-based signatures where all the messages, signatures and public keys are group elements, verified by testing equality of products of pairings of group elements. They are useful building blocks in modular design of cryptographic protocols, in particular in combination with non-interactive zero-knowledge (NIZK) proofs for algebraic relations in a group [29]. Structure-preserving signatures have found numerous applications in public-key cryptography, such as blind signatures [4, 25], group signatures [4, 25, 27, 28, 40], homomorphic signatures [38], delegatable anonymous credentials [11, 24], compact verifiable shuffles [18], network encoding [9], oblivious transfer [26] and e-cash [13].

A systematic treatment of structure-preserving signatures was initiated by Abe et al. in 2010 [4], building upon previous constructions in [17, 26, 27]. In the past few years, substantial and rapid progress was made in our understanding of the construction of structure-preserving signatures, yielding both efficient schemes under standard assumptions [24, 30] as well as “optimal” schemes in the generic group model with matching upper and lower bounds on the efficiency of the schemes [5, 8, 10]. The three important measures of efficiency in structure-preserving signatures are (i) signature size, (ii) public key size (also per-user public key size for applications like delegatable credentials where we need to sign user public keys), and (iii) the number of pairing equations during verification, which in turn affects the efficiency of the NIZK proofs.

One of the main advantages of designing cryptographic protocols starting from structure-preserving signatures is that we can obtain efficient protocols that are secure under standard cryptographic assumptions without the use of random oracles. Ideally, we want to build efficient SPS based on the well-understood k-Lin assumption, which can then be used in conjunction with Groth-Sahai proofs [29] to derive protocols based on the same assumption. In contrast, if we start with SPS that are only secure in the generic group model, then the ensuing protocols would also only be secure in the generic group model, which offers little theoretical or practical benefit over alternative – and typically more efficient and pairing-free – solutions in the random oracle model.

Unfortunately, there is still a big efficiency gap between existing constructions of structure-preserving signatures from the k-Lin assumption and the optimal schemes in the generic group model. For instance, to sign a single group element, the best construction under the SXDH (1-Lin) assumption requires 11 group elements in the signature and 21 in the public key [2], whereas the best construction in the generic group model requires only 3 and 3, respectively (moreover, this is “tight”) [5]. The goal of this work is to bridge this gap.

1.1 Our Results

We present clean, simple, and improved constructions of structure-preserving signatures via a conceptually novel approach. Our constructions are secure under the k-Lin assumption; under the SXDH assumption (i.e., \(k=1\)), we achieve 7 group elements in the signature.

Previous constructions use fairly distinct techniques, resulting in a large family of schemes with incomparable efficiency and security guarantees. We obtain a family of schemes that simultaneously match – and in many settings, improve upon – the efficiency, assumptions, and security guarantees of all of the previous constructions. Figure 1 summarizes the efficiency of our constructions. (The work of [41] is independent and concurrent.) Our schemes are fully explicit and simple to describe. Furthermore, our schemes have a natural derivation from a symmetric-key setting, and the derivation even extends to a modular and intuitive proof of security.

We highlight two results:

  • For Type III asymmetric pairings, under the SXDH assumption, we can sign a vector of n elements in \(\mathbb {G}_1\) with 7 group elements. This improves upon the prior SXDH-based scheme in [2] which requires 11 group elements, and matches the signature size of the scheme in [4] based on (non-standard) q-type assumptions;

  • For Type I symmetric pairings, under the 2-Lin assumption, we can sign a vector of n elements with 10 group elements, improving upon that in [3] which requires 14 group elements.

In each of these cases, we also improve the size of the public key, as well as the number of equations used in verification. Finally, we extend our schemes to obtain efficient SPS for signing bilateral messages in \(\mathbb {G}_1^{n_1}\times \mathbb {G}_2^{n_2}\) for Type III asymmetric pairings. In particular, under the SXDH assumption, our scheme can sign messages in \(\mathbb {G}_1^{n_1}\times \mathbb {G}_2^{n_2}\) with 10 group elements in the signature, 4 pairing product equations for verification, and (\(n_1+n_2+8\)) group elements in the public key. Prior SXDH-based schemes from [2] required 14 group elements in the signature, 5 pairing product equations, and \((n_1+n_2+22)\) elements in the public key.

At a high level, our constructions and techniques borrow heavily from the recent work of Kiltz and Wee [36] which addresses a different problem of constructing pairing-based non-interactive zero-knowledge arguments [29, 33]. We exploit recent developments in obtaining adaptively secure identity-based encryption (IBE) schemes, notably the use of pairing groups to “compile” a symmetric-key primitive into an asymmetric-key primitive [14, 19, 44], and the dual system encryption methodology for achieving adaptive security against unbounded collusions [37, 43]. Along the way, we have to overcome a new technical hurdle which is specific to structure-preserving cryptography.

Fig. 1.

Structure-preserving signatures for message space \(\mathcal {M}= \mathbb {G}_1^{n_1} \times \mathbb {G}_2^{n_2}\) or \(\mathcal {M}=\mathbb {G}^n\) if \(\mathbb {G}=\mathbb {G}_1=\mathbb {G}_2\). Notation (x, y) means x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\). \(\mathsf {RE}(\mathcal {D}_k)\) denotes the number of group elements needed to represent \([\mathbf {A}]\). In case of k-Lin, we have \(\mathsf {RE}(\mathcal {D}_k)=k\). Recall that k-Lin is a special case of \(\mathcal {D}_k\)-MDDH (decisional assumptions) and k-KerLin is a special case of \(\mathcal {D}_k\)-KerMDH (search assumptions), for \(\mathcal {D}_k = {\mathcal {L}}_k\), the linear distribution. For \(k=1\) (SXDH) and \(n_1=1\), we obtain \((| \mathsf {pk}|,|\sigma |, \#\text{ equations }) = (7,7,3)\) for \(\mathcal {M}=\mathbb {G}_1^{n_1}\). For comparison, the known lower bound [5, 6] is \((|\sigma |, \#\text{ equations }) \ge (4,2)\).

1.2 Our Approach: SPS from MACs

We provide an overview of our construction of structure-preserving signatures. Throughout this overview, we fix a pairing group \((\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T)\) with \(e: \mathbb {G}_1\times \mathbb {G}_2 \rightarrow \mathbb {G}_T\), and rely on implicit representation notation for group elements, as explained in Sect. 2.1. As a warm-up, we explain in some detail how to build a one-time structure-preserving signature scheme, following closely the exposition in [36]. While we do not obtain significant improvement in this setting (nonetheless, we do simplify and generalize prior one-time schemes [4]), we believe it already illustrates the conceptual simplicity and novelty of our approach over previous constructions of structure-preserving signatures.

Warm-Up: One-Time SPS. We want to build a one-time signature scheme for signing a vector \([\mathbf {m}]_1 \in \mathbb {G}_1^n\) of group elements. The starting point of our construction is a one-time “structure-preserving” information-theoretic MAC for vectors of group elements. We pick a secret MAC key \(\mathbf {K}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{(n+1) \times (k+1)}\) known to the verifier (\(k \ge 1\) is a parameter of the security assumption), and the MAC on \([\mathbf {m}]_1\) is given by

$$\begin{aligned} \sigma := [(1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {K}]_1 \in \mathbb {G}_1^{1 \times (k+1)} \end{aligned}$$

Verification is straight-forward: check if

$$\begin{aligned} \sigma \mathop {=}\limits ^{?} [(1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {K}]_1 \end{aligned}$$
(1)

Security follows readily from the fact that for any pair of distinct vectors \(\mathbf {m},\mathbf {m}^* \in \mathbb {Z}_q^n\), the vectors \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\) and \((1,\mathbf {m}^{*\top })\) are linearly independent, and therefore the quantities

$$\begin{aligned} (1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {K}, (1,\mathbf {m}^{*\top }) \mathbf {K}\in \mathbb {Z}_q^{(k+1)} \end{aligned}$$

are independent and uniformly random; this holds even if \(\mathbf {m}^* \ne \mathbf {m}\) is chosen adaptively after seeing \((1,\mathbf {m}^\top )\mathbf {K}\).
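As a concrete illustration (our own toy sketch, not code from the paper), the following Python snippet implements this one-time MAC with all values written as exponents over \(\mathbb {Z}_q\), i.e. without the group encoding \([\cdot ]_1\); the modulus is far too small for real use.

```python
import secrets

q = 2**61 - 1  # toy prime modulus standing in for the group order of a pairing group


def keygen(n, k):
    """Secret MAC key K <- Z_q^{(n+1) x (k+1)}."""
    return [[secrets.randbelow(q) for _ in range(k + 1)] for _ in range(n + 1)]


def mac(K, m):
    """sigma := (1, m^T) K in Z_q^{1 x (k+1)}  (the exponents of the MAC value [.]_1)."""
    row = [1] + list(m)  # the vector (1, m^T)
    return [sum(row[i] * K[i][j] for i in range(len(row))) % q for j in range(len(K[0]))]


def verify(K, m, sigma):
    """The verifier knows K, so it simply recomputes (1, m^T) K and compares."""
    return sigma == mac(K, m)


n, k = 3, 1
K = keygen(n, k)
m = [secrets.randbelow(q) for _ in range(n)]
assert verify(K, m, mac(K, m))
```

Seeing one MAC value reveals only a single linear combination of the rows of \(\mathbf {K}\), so the MAC value of any other \(\mathbf {m}^* \ne \mathbf {m}\) remains uniformly random from the adversary's viewpoint.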

To achieve public verifiability as is required for a signature scheme, we publish a “partial commitment” to \(\mathbf {K}\) in \(\mathbb {G}_2\) as given by \([\mathbf {A}]_2, [\mathbf {K}\mathbf {A}]_2\), where the choice of \(\mathbf {A}\in \mathbb {Z}_q^{(k+1)\times k}\) is defined by the security assumption. The signature on \([\mathbf {m}]_1\) is the same as the MAC value, and verification is the natural analogue of Eq. (1) with the pairing:

$$\begin{aligned} e(\sigma ,[\mathbf {A}]_2) \mathop {=}\limits ^{?} e([(1,\mathbf {m}^{\!\scriptscriptstyle {\top }})]_1,[\mathbf {K}\mathbf {A}]_2) \end{aligned}$$

As \([\mathbf {A}]_2, [\mathbf {K}\mathbf {A}]_2\) leaks additional information about the secret MAC key \(\mathbf {K}\), we can only prove computational adaptive soundness. In particular, we rely on the \(\mathcal {D}_k\)-KerMDH Assumption [42], which stipulates that given a random \([\mathbf {A}]_2\) drawn from a matrix distribution \(\mathcal {D}_k\), it is hard to find a non-zero \([\mathbf {s}]_1 \in \mathbb {G}_1^{k+1}\) such that \(\mathbf {s}^{\!\scriptscriptstyle {\top }}\mathbf {A}= \mathbf {0}\); this is implied by the \(\mathcal {D}_k\)-MDDH Assumption [22], a generalization of the k-Lin Assumption. Therefore, for any forgery \(([\mathbf {m}^*]_1,\sigma ^*)\) produced by an efficient adversary that passes verification, we must have \(\sigma ^* = [(1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {K}]_1\) (up to a negligible probability), since otherwise \(\sigma ^* - [(1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {K}]_1\) would yield a non-zero vector \([\mathbf {s}]_1\) with \(\mathbf {s}^{\!\scriptscriptstyle {\top }}\mathbf {A}= \mathbf {0}\).

That is, security of the signature reduces to the security for the MAC, with a little more work to account for the leakage from \(\mathbf {K}\mathbf {A}\). Moreover, adaptive security for the MAC (which is easy to analyze via a purely information-theoretic argument) carries over to adaptive security for the signature.

General SPS. To achieve unforgeability against multiple signature queries, we move from a one-time MAC to a randomized MAC that is secure against multiple queries. As shown in [14, 36], we know that under the \(\mathcal {D}_k\)-MDDH assumption in \(\mathbb {G}_1\), the following construction is a randomized PRF

$$\begin{aligned} \tau \mapsto \bigl (\,[\mathbf {t}^{\!\scriptscriptstyle {\top }}(\mathbf {K}_0 + \tau \mathbf {K}_1)]_1, [\mathbf {t}^{\!\scriptscriptstyle {\top }}]_1\bigr ) \in (\mathbb {G}_1^{1 \times (k+1)})^2, \end{aligned}$$
(2)

where \((\mathbf {K}_0,\mathbf {K}_1)\) is the seed and \(\mathbf {t}\) is the randomness. We now use the randomized PRF to additively mask the one-time MAC value \([(1,\mathbf {m}^\top ) \mathbf {K}]_1\). The new randomized MAC takes as input a vector of group elements \([\mathbf {m}]_1 \in \mathbb {G}_1^n\) as before, picks a random tag \(\tau \in \mathbb {Z}_q\) and a fresh \(\mathbf {t}\), and outputs

$$\begin{aligned} \sigma := \Bigl (\,\bigl [(1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {K}+ \boxed{\mathbf {t}^{\!\scriptscriptstyle {\top }}(\mathbf {K}_0 + \tau \mathbf {K}_1)}\,\bigr ]_1,\; \boxed{[\mathbf {t}^{\!\scriptscriptstyle {\top }}]_1},\; \tau \Bigr ) \end{aligned}$$
(3)

where \(\mathbf {K}\) and \(\mathbf {K}_0,\mathbf {K}_1 \leftarrow _{\textsc {r}}\mathbb {Z}_q^{(k+1) \times (k+1)}\) constitute the key. The boxed terms correspond to the additive mask from Eq. (2). We want to argue that an adversary, upon obtaining MAC values on Q message vectors \([\mathbf {m}_1]_1,\ldots ,[\mathbf {m}_Q]_1\), cannot compute the MAC value on a new message vector \([\mathbf {m}^*]_1\). First, we may assume that the MAC values on \([\mathbf {m}_1]_1,\ldots ,[\mathbf {m}_Q]_1\) use distinct tags \(\tau _1,\ldots ,\tau _Q\). Then, we consider two cases:

  • case 1: the adversary uses a fresh tag for \([\mathbf {m}^*]_1\). This immediately contradicts the pseudorandomness of the construction in Eq. (2);

  • case 2: the adversary reuses a tag \(\tau _i\). Again, we know from pseudorandomness that the MAC values on the remaining \(Q-1\) tags do not leak any information about \(\mathbf {K}\); therefore, the only leakage about \(\mathbf {K}\) in the Q queries comes from \((1,\mathbf {m}_i^{\!\scriptscriptstyle {\top }}) \mathbf {K}\). We may then rely on the security of the one-time MAC to argue that given only \((1,\mathbf {m}_i^{\!\scriptscriptstyle {\top }}) \mathbf {K}\), it is hard to compute \((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {K}\).

As before, to obtain a signature scheme, we then publish \([\mathbf {A}]_2, [\mathbf {K}\mathbf {A}]_2, [\mathbf {K}_0 \mathbf {A}]_2, [\mathbf {K}_1\mathbf {A}]_2\) for public verification:

$$\begin{aligned} e(\sigma _1,[\mathbf {A}]_2) \mathop {=}\limits ^{?} e([(1,\mathbf {m}^\top )]_1,[\mathbf {K}\mathbf {A}]_2) \cdot e(\sigma _2, [\mathbf {K}_0\mathbf {A}]_2 \cdot [\tau \mathbf {K}_1\mathbf {A}]_2)\end{aligned}$$

Note that the above verification requires knowledge of \(\tau \in \mathbb {Z}_q\) to compute \([\tau \mathbf {K}_1\mathbf {A}]_2\).

To obtain a structure-preserving signature, we cannot publish \(\tau \in \mathbb {Z}_q\) in the signature. The main technical challenge in this work is to find a way to embed \(\tau \) as a group element that enables both verification and a security reduction. The natural work-around is to add \([\tau \mathbf {K}_1\mathbf {A}]_2\) and \([\tau ]_1\) to the signature, but the proof breaks down. Instead, we add \([\tau ]_2\) and \([\tau \mathbf {t}^{\!\scriptscriptstyle {\top }}]_1\) to the signature to enable verification. This yields a signature with \(3k+4\) group elements.
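To make this concrete, here is a minimal Python sketch (ours, not the paper's reference implementation) of the resulting signing and verification logic, written entirely in “exponent space”: group elements \([x]_1,[x]_2\) are represented by their discrete logarithms, so the pairing \(e([x]_1,[y]_2)=[xy]_T\) becomes multiplication modulo q. The matrix distribution is instantiated with uniformly random matrices and the split between \( \mathsf {pk}\) and \( \mathsf {sk}\) is our own choice; the real scheme of course keeps all of these values inside the groups.

```python
import secrets

q = 2**61 - 1  # toy prime; every value below is an exponent (discrete log) of a real group element


def rand_mat(rows, cols):
    return [[secrets.randbelow(q) for _ in range(cols)] for _ in range(rows)]


def mat_mul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]


def mat_add(X, Y):
    return [[(X[i][j] + Y[i][j]) % q for j in range(len(X[0]))] for i in range(len(X))]


def gen(n, k):
    A, B = rand_mat(k + 1, k), rand_mat(k + 1, k)              # A, B <- D_k (uniform here)
    K = rand_mat(n + 1, k + 1)
    K0, K1 = rand_mat(k + 1, k + 1), rand_mat(k + 1, k + 1)
    pk = (mat_mul(K, A), mat_mul(K0, A), mat_mul(K1, A), A)    # ([KA]_2, [K0 A]_2, [K1 A]_2, [A]_2)
    return pk, (K, K0, K1, B)


def sign(sk, m):
    K, K0, K1, B = sk
    k = len(B[0])
    tau = secrets.randbelow(q)
    Bt = [[B[j][i] for j in range(k + 1)] for i in range(k)]   # B^T
    t = mat_mul(rand_mat(1, k), Bt)                            # t^T := r^T B^T, a 1 x (k+1) row
    K_tau = mat_add(K0, [[tau * x % q for x in row] for row in K1])
    s1 = mat_add(mat_mul([[1] + list(m)], K), mat_mul(t, K_tau))  # (1,m^T)K + t^T(K0 + tau*K1)
    s3 = [[tau * x % q for x in t[0]]]                         # tau * t^T
    return s1, t, s3, tau                                      # 3(k+1) "G_1" values and one "G_2" value


def verify(pk, m, sig):
    C, C0, C1, A = pk
    s1, s2, s3, tau = sig
    lhs = mat_mul(s1, A)                                       # e(sigma_1, [A]_2)
    rhs = mat_add(mat_mul([[1] + list(m)], C),
                  mat_add(mat_mul(s2, C0), mat_mul(s3, C1)))   # e([(1,m^T)]_1,[C]_2) e(s2,[C0]_2) e(s3,[C1]_2)
    return lhs == rhs and s3 == [[tau * x % q for x in s2[0]]] # and e(sigma_2,[tau]_2) = e(sigma_3,[1]_2)


pk, sk = gen(n=3, k=1)
msg = [secrets.randbelow(q) for _ in range(3)]
assert verify(pk, msg, sign(sk, msg))
```

Counting components, \(\sigma \) consists of \(3(k+1)\) elements in \(\mathbb {G}_1\) plus \([\tau ]_2\), i.e. \(3k+4\) group elements, matching the count above.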

An Alternative Interpretation. Linearly homomorphic signatures (LHS) [15, 21, 32] are signatures where the messages consist of vectors over a group \(\mathbb {G}_1\) such that from any set of signatures on \([\mathbf {m}_i]_1 \in \mathbb {G}_1^n\), one can efficiently derive a signature \(\sigma \) on any message \([\mathbf {m}]_1 := [\sum \omega _i \mathbf {m}_i]_1\) in the span of \(\mathbf {m}_1, \ldots , \mathbf {m}_Q\). For security, one requires that it is infeasible to produce a signature on a message outside the span of all previously signed messages. Linearly homomorphic structure-preserving signatures (LHSPS) [16, 36, 38] have the additional property that signatures and public keys consist entirely of elements of the groups \(\mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T\), while still allowing the use of a tag that is a scalar.

We can construct an SPS with message space \(\mathbb {G}_1^n\) from an LHSPS with message space \(\mathbb {G}_1^{n+1}\) as follows: to sign a message \([\mathbf {m}]_1\), we use the LHSPS to sign the \((n+1)\)-dimensional vector \([1,\mathbf {m}]_1\) under a random tag. Suppose the SPS adversary forges a signature on \([\mathbf {m}^*]_1\). First, we may assume that all the signatures from the signing queries \([\mathbf {m}_1]_1,\ldots ,[\mathbf {m}_Q]_1\) are on distinct tags \(\tau _1,\ldots ,\tau _Q\). Then, we consider two cases:

  • case 1: the adversary uses a fresh tag. Then, security of LHSPS tells us that the adversary can only sign the vector \(\mathbf {0} \in \mathbb {G}_1^{n+1}\), which does not correspond to a valid message in the SPS.

  • case 2: the adversary reuses tag \(\tau _i\). Then, \((1,\mathbf {m}^{*\top })\) must lie in the span of \((1,\mathbf {m}^\top _i)\), which means \(\mathbf {m}^* = \mathbf {m}_i\). Here, we crucially rely on the fact that \(\tau _1,\ldots ,\tau _Q\) are distinct, which ensures that the adversary has seen at most one signature corresponding to \(\tau _i\).

At this point, we can then embed \(\tau \in \mathbb {Z}_q\) as a group element as described earlier. Our constructions may also be viewed as instantiating the above paradigm with the state-of-the-art LHSPS in [36].

1.3 Discussion

Optimality. The linearity of the verification equations of SPS poses severe restrictions on the efficiency of such constructions. In both Type I and Type III bilinear groups, it was proved in [5, 8] that any fully secure SPS requires at least 2 verification equations and at least 3 group elements, with the 3 elements not all in the same group for Type III asymmetric pairings. In fact, [5] shows these lower bounds by giving attacks in the weaker security model of unforgeability against two random message queries. Furthermore, one-time SPS secure against random message attacks (\(\mathsf {RMA}\)) in Type I bilinear groups require at least 2 group elements and 2 equations [8], and SPS in Type III bilinear groups require at least 4 group elements [6] for unforgeability against adaptive chosen message attacks under non-interactive assumptions (such as k-Lin).

Interestingly, for one-time \(\mathsf {RMA}\)-security, we can match the lower bounds. By combining our main result on the one-time \(\mathsf {CMA}\)-secure SPS and the techniques used in [36] to obtain shorter QANIZK, we obtain an optimal \(\mathsf {RMA}\)-secure one-time SPS (Sect. 5). In Type III asymmetric groups, under the SXDH assumption, our signature requires 1 group element and 1 verification equation, which is clearly optimal; in Type I symmetric groups, under the 2-Lin assumption, our scheme requires 2 elements and 2 verification equations, matching the lower bound for one-time \(\mathsf {RMA}\)-secure SPS from [8].

Comparison with Previous Approaches. The prior works of Abe et al. [2, 3] presented two generic approaches for constructing SPS from the SXDH and 2-Lin assumptions: both constructions combine a structure-preserving one-time signature and a random-message secure signature à la [23], with slightly different syntax and security notions for the two underlying building blocks; the final signature is the concatenation of the two underlying signatures. Our construction has a similar flavor in that we combine a one-time MAC with a randomized PRF. However, we are able to exploit the common structure in both building blocks to compress the output; interestingly, working with the matrix Diffie-Hellman framework [22] makes it easier to identify such common structure. In particular, the output length of the randomized MAC with unbounded security is that of the PRF and not the sum of the output lengths of the one-time MAC and the PRF; this is akin to combining a one-time signature and a random-message secure signature in such a way that the combined signature size is that of the latter rather than the sum of the two.

Signatures from IBE. While our construction of signatures exploits techniques from the literature on IBE, it is quite different from Naor's well-known derivation of a signature scheme from an IBE. There, the signature on a message \(m \in \mathbb {Z}_q\) corresponds to an IBE secret key for the identity m. This approach seems to inherently fail for structure-preserving signatures as all known pairing-based IBE schemes need to treat the identity as a scalar. In our construction, a signature on \([\mathbf {m}]_1\) also corresponds to an IBE secret key: the message vector (specifically, a one-time MAC applied to the message vector) is embedded into the master secret key component of an IBE, and a fresh random tag \(\tau \in \mathbb {Z}_q\) is chosen and used as the identity. The idea of embedding \([\mathbf {m}]_1\) into the master secret key component of an IBE also appeared in earlier constructions of linearly homomorphic structure-preserving schemes [36, 38, 39]; a crucial difference is that these prior constructions allow the use of a scalar tag in the signature.

Towards Shorter SPS? One promising route to even shorter SPS secure against adaptive chosen message attacks is to improve upon the underlying MAC in the computational core lemma (Lemma 3). Currently, the MAC achieves security against chosen message attacks, whereas it suffices to use one that is secure against random message attacks. Saving one group element in this MAC would likely yield a saving of two group elements in the SPS, which would in turn yield an SXDH-based signature with 5 group elements. Note that the state-of-the-art standard signature from SXDH contains 4 group elements [20]. Together with existing lower bounds for SPS, this indicates a barrier of 5 group elements for SXDH-based SPS; breaking this barrier would likely require improving upon the best standard signatures from SXDH.

Perspective. As noted at the beginning of the introduction, structure-preserving signatures have been a target of intense scrutiny in recent years. We presented a conceptually different yet very simple approach for building structure-preserving signatures. We are optimistic that our approach will yield further insights into structure-preserving signatures as well as concrete improvements to the numerous applications that rely on such signatures.

2 Definitions

Notation. If \(\mathbf {x} \in \mathcal {B}^n\), then \(|\mathbf {x}|\) denotes the length n of the vector. Further, \(x \leftarrow _{\textsc {r}}\mathcal {B}\) denotes the process of sampling an element x from the set \(\mathcal {B}\) uniformly at random. If \(\mathbf {{A}} \in \mathbb {Z}_q^{n \times k}\) is a matrix with \(n>k\), then \(\overline{\mathbf {{A}}} \in \mathbb {Z}_q^{k \times k}\) denotes the upper square matrix of \(\mathbf {{A}}\) and \(\underline{\mathbf {{A}}} \in \mathbb {Z}_q^{(n-k) \times k}\) denotes the remaining \(n-k\) rows of \(\mathbf {{A}}\). We use \( span ()\) to denote the column span of a matrix.

2.1 Pairing Groups

Let \(\mathsf {GGen}\) be a probabilistic polynomial time (PPT) algorithm that on input \(1^\lambda \) returns a description \(\mathcal {PG}=(\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,q,g_1,g_2,e)\) of asymmetric pairing groups where \(\mathbb {G}_1\), \(\mathbb {G}_2\), \(\mathbb {G}_T\) are cyclic groups of order q for a \(\lambda \)-bit prime q, \(g_1\) and \(g_2\) are generators of \(\mathbb {G}_1\) and \(\mathbb {G}_2\), respectively, and \(e: \mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) is an efficiently computable (non-degenerate) bilinear map. Define \(g_T:=e(g_1, g_2)\), which is a generator in \(\mathbb {G}_T\).

We use implicit representation of group elements as introduced in [22]. For \(s \in \{1,2,T\}\) and \(a \in \mathbb {Z}_q\), define \([a]_s = g_s^a \in \mathbb {G}_s\) as the implicit representation of a in \(\mathbb {G}_s\). More generally, for a matrix \(\mathbf {{A}} = (a_{ij}) \in \mathbb {Z}_q^{n\times m}\) we define \([\mathbf {{A}}]_s\) as the implicit representation of \(\mathbf {{A}}\) in \(\mathbb {G}_s\):

$$[\mathbf {{A}}]_s := \begin{pmatrix} g_s^{a_{11}} &{} \ldots &{} g_s^{a_{1m}}\\ \vdots &{} &{} \vdots \\ g_s^{a_{n1}}&{} \ldots &{} g_s^{a_{nm}} \end{pmatrix} \in \mathbb {G}_s^{n \times m}$$

We will always use this implicit notation of elements in \(\mathbb {G}_s\), i.e., we let \([a]_s \in \mathbb {G}_s\) be an element in \(\mathbb {G}_s\). Note that from \([a]_s \in \mathbb {G}_s\) it is generally hard to compute the value a (discrete logarithm problem in \(\mathbb {G}_s\)). Further, from \([b]_{T}\in \mathbb {G}_{T}\) it is hard to compute the value \([b]_1 \in \mathbb {G}_1\) and \([b]_2 \in \mathbb {G}_2\) (pairing inversion problem). Obviously, given \([a]_s \in \mathbb {G}_s\) and a scalar \(x \in \mathbb {Z}_q\), one can efficiently compute \([ax]_s \in \mathbb {G}_s\). Further, given \([a]_1\) and \([b]_2\) one can efficiently compute \([ab]_T\) using the pairing e. For two matrices \(\mathbf {A}, \mathbf {B}\) with matching dimensions define \(e([\mathbf {A}]_1, [\mathbf {B}]_2):= [\mathbf {A}\mathbf {B}]_T \in \mathbb {G}_T\).
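As a toy illustration of this notation (our own example, not part of the paper), the snippet below realizes a stand-in for \(\mathbb {G}_1\) as the order-11 subgroup of \(\mathbb {Z}_{23}^*\); a real instantiation uses a pairing-friendly elliptic curve, and the pairing itself is not modeled here.

```python
# Implicit notation [a]_1 = g1^a over a tiny stand-in group (order-11 subgroup of Z_23^*).
q, p, g1 = 11, 23, 2  # group order q, modulus p, generator g1 of the order-q subgroup


def enc(a):
    """[a]_1 := g1^a."""
    return pow(g1, a % q, p)


a, b, x = 3, 7, 5
assert enc(a) * enc(b) % p == enc(a + b)  # the group operation adds exponents: [a]_1 [b]_1 = [a+b]_1
assert pow(enc(a), x, p) == enc(a * x)    # a scalar x and [a]_1 give [ax]_1
# Recovering a from enc(a) is the discrete logarithm problem (easy only because q is tiny here);
# computing e([a]_1, [b]_2) = [ab]_T requires an actual bilinear map on a pairing-friendly curve.
```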

2.2 Matrix Diffie-Hellman Assumption

We recall the definitions of the Matrix Decision Diffie-Hellman (MDDH) and the Kernel Diffie-Hellman assumptions [22, 42].

Definition 1

(Matrix Distribution). Let \(k\in {{\mathbb N}}\). We call \(\mathcal {D}_{k}\) a matrix distribution if it outputs matrices in \(\mathbb {Z}_q^{(k+1)\times k}\) of full rank k in polynomial time.

Without loss of generality, we assume the first k rows of \(\mathbf {{A}} \leftarrow _{\textsc {r}}\mathcal {D}_{k}\) form an invertible matrix. The \(\mathcal {D}_k\)-Matrix Diffie-Hellman problem is to distinguish the two distributions \(([\mathbf {{A}}], [\mathbf {{A}} \mathbf {w}])\) and \(([\mathbf {{A}}],[\mathbf {u}])\) where \(\mathbf {{A}}\leftarrow _{\textsc {r}}\mathcal {D}_{k}\), \(\mathbf {w}\leftarrow _{\textsc {r}}\mathbb {Z}_q^k\) and \(\mathbf {u}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{k+1}\).

Definition 2

( \(\mathcal {D}_{k}\) -Matrix Diffie-Hellman Assumption \(\mathcal {D}_{k}\)-MDDH ). Let \(\mathcal {D}_{k}\) be a matrix distribution and \(s \in \{1,2,T\}\). We say that the \(\mathcal {D}_{k}\)-Matrix Diffie-Hellman (\(\mathcal {D}_{k}\)-MDDH) Assumption holds relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\) if for all PPT adversaries \(\mathcal {A}\),

$$\mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {A}) :=| \Pr [\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s, [\mathbf {{A}} \mathbf {w}]_s)=1]-\Pr [\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s, [\mathbf {u}]_s) =1] |= \mathrm {negl}(\lambda ),$$

where the probability is taken over \(\mathcal {G}\leftarrow _{\textsc {r}}\mathsf {GGen}(1^\lambda )\), \(\mathbf {{A}} \leftarrow _{\textsc {r}}\mathcal {D}_{k}, \mathbf {w}\leftarrow _{\textsc {r}}\mathbb {Z}_q^k, \mathbf {u}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{k+1}\).

The Kernel-Diffie-Hellman assumption \(\mathcal {D}_{k}\)-KerMDH [42] is a natural computational analogue of the \(\mathcal {D}_k\)-MDDH Assumption.

Definition 3

( \(\mathcal {D}_{k}\) -Kernel Diffie-Hellman Assumption \(\mathcal {D}_{k}\)-KerMDH ). Let \(\mathcal {D}_{k}\) be a matrix distribution and \(s \in \{1,2\}\). We say that the \(\mathcal {D}_{k}\)-Kernel Diffie-Hellman (\(\mathcal {D}_{k}\)-KerMDH) Assumption holds relative to \(\mathsf {GGen}\) in group \(\mathbb {G}_s\) if for all PPT adversaries \(\mathcal {A}\),

$$\mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {A}) := \Pr [ \mathbf {c}^{\!\scriptscriptstyle {\top }}\mathbf {A}= \mathbf {0} \wedge \mathbf {c}\ne \mathbf {0} \mid [\mathbf {c}]_{3-s} \leftarrow _{\textsc {r}}\mathcal {A}(\mathcal {G},[\mathbf {{A}}]_s)] = \mathrm {negl}(\lambda ),$$

where the probability is taken over \(\mathcal {G}\leftarrow _{\textsc {r}}\mathsf {GGen}(1^\lambda )\), \(\mathbf {{A}} \leftarrow _{\textsc {r}}\mathcal {D}_{k}\).

Note that we can use a non-zero vector in the kernel of \(\mathbf {A}\) to test membership in the column space of \(\mathbf {A}\). This means that the \(\mathcal {D}_k\)-KerMDH assumption is a relaxation of the \(\mathcal {D}_k\)-MDDH assumption, as captured in the following lemma from [42].

Lemma 1

For any matrix distribution \(\mathcal {D}_k\), \(\mathcal {D}_k\)-MDDH \(\Rightarrow \) \(\mathcal {D}_k\)-KerMDH.

For each \(k \ge 1\), [22, 42] specify distributions \({\mathcal {L}}_k\), \(\mathcal {SC}_k\), \(\mathcal {U}_k\) (and others) such that the corresponding \(\mathcal {D}_k\)-MDDH and \(\mathcal {D}_k\)-KerMDH assumptions are generically secure in bilinear groups and form a hierarchy of increasingly weaker assumptions.

$$ \mathcal {SC}_{k}: \mathbf {{A}} = \left( {\begin{matrix} 1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ a &{} 1 &{} 0 &{} \ldots &{} 0 \\ 0 &{} a &{} 1 &{} &{} 0 \\ 0 &{} 0 &{} a &{} &{} 0 \\ \tiny {\vdots } &{} &{} \tiny {\ddots } &{} \tiny {\ddots } &{} \\ 0 &{} 0 &{} 0 &{}\ldots &{} a \end{matrix}} \right) , \; \mathcal {L}_{k}:\mathbf {{A}} = \left( {\begin{matrix} 1 &{} 1 &{} 1 &{} \ldots &{} 1 \\ a_1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 0 &{} a_2 &{} 0 &{} \ldots &{} 0\\ 0 &{} 0 &{} a_3 &{} &{} 0\\ \tiny {\vdots } &{} &{} \tiny {\ddots } &{} \tiny {\ddots } &{} \\ 0 &{} 0 &{} 0 &{} \ldots &{} a_{k}\\ \end{matrix}} \right) , \; \mathcal {U}_{k}: \mathbf {{A}} = \left( {\begin{matrix} a_{1,1} &{} \ldots &{} a_{1,k} \\ \tiny {\vdots } &{} \ddots &{} \tiny {\vdots } \\ a_{k+1,1} &{} \ldots &{} a_{k+1,k} \end{matrix}} \right) , $$

where \(a,a_i,a_{i,j} \leftarrow \mathbb {Z}_q\). We define the representation size \(\mathsf {RE}(\mathcal {D}_k)\) of a given matrix distribution \(\mathcal {D}_k\) as the minimal number of group elements needed to represent \([\mathbf {{A}}]_s\), where \(\mathbf {{A}} \leftarrow _{\textsc {r}}\mathcal {D}_k\). Then \(\mathsf {RE}(\mathcal {SC}_k)=1\), \(\mathsf {RE}(\mathcal {L}_k)=k\) and \(\mathsf {RE}(\mathcal {U}_k)=k(k+1)\). As shown in [22], \(\mathcal {SC}_k \text{- }\mathsf{MDDH}\) offers the same security guarantees as \({\mathcal {L}}_k\)-MDDH (k-Linear Assumption of [31]), while having the advantage of a more compact representation. We define \(k \text{- }\mathsf{Lin}:={\mathcal {L}}_k\)-MDDH and \(k \text{- }\mathsf{KerLin}:={\mathcal {L}}_k\)-KerMDH. Note that \(2 \text{- }\mathsf{KerLin} =\mathsf {SDP}\) (Simultaneous Double Pairing Assumption of [17]). The relations between the different assumptions for \(\mathcal {D}_k={\mathcal {L}}_k\) are as follows:

(Diagram: \(k \text{- }\mathsf{Lin} \Rightarrow (k+1) \text{- }\mathsf{Lin}\), \(k \text{- }\mathsf{KerLin} \Rightarrow (k+1) \text{- }\mathsf{KerLin}\), and \(k \text{- }\mathsf{Lin} \Rightarrow k \text{- }\mathsf{KerLin}\) for every k, so the assumptions become weaker as k grows.)
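The following Python snippet (our own sketch) samples \(\mathbf {A}\) from \(\mathcal {SC}_k\), \({\mathcal {L}}_k\) and \(\mathcal {U}_k\) as displayed above, with the a-entries drawn uniformly from \(\mathbb {Z}_q\); the sampled matrices have full rank k with overwhelming probability, and the printed representation sizes match \(\mathsf {RE}(\mathcal {SC}_k)=1\), \(\mathsf {RE}({\mathcal {L}}_k)=k\) and \(\mathsf {RE}(\mathcal {U}_k)=k(k+1)\).

```python
import secrets

q = 2**61 - 1  # toy prime field size


def sample_SC(k):
    """SC_k: ones on the diagonal, one random a on the subdiagonal; RE = 1."""
    a = secrets.randbelow(q)
    A = [[0] * k for _ in range(k + 1)]
    for j in range(k):
        A[j][j] = 1
        A[j + 1][j] = a
    return A, 1


def sample_L(k):
    """L_k (k-Lin): first row all ones, then diag(a_1, ..., a_k); RE = k."""
    a = [secrets.randbelow(q) for _ in range(k)]
    return [[1] * k] + [[a[i] if i == j else 0 for j in range(k)] for i in range(k)], k


def sample_U(k):
    """U_k: a uniform (k+1) x k matrix; RE = k(k+1)."""
    return [[secrets.randbelow(q) for _ in range(k)] for _ in range(k + 1)], k * (k + 1)


for k in (1, 2):
    for name, (A, re_elems) in (("SC", sample_SC(k)), ("L", sample_L(k)), ("U", sample_U(k))):
        print(f"{name}_{k}: shape {len(A)}x{len(A[0])}, RE = {re_elems}")
```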

2.3 Structure-Preserving Signatures

Let \(\mathsf {par}\) be some parameters that contain a pairing group \(\mathcal {PG}\). In a structure-preserving signature (SPS) [4], both the messages and the signatures consist of group elements, and verification proceeds via pairing-product equations.

Definition 4

(Structure-preserving signature). A structure-preserving signature scheme \(\mathsf {SPS}\) is defined as a triple of probabilistic polynomial time (PPT) algorithms \(\mathsf {SPS}= (\mathsf {Gen}, \mathsf {Sign}, \mathsf {Verify})\):

  • The probabilistic key generation algorithm \(\mathsf {Gen}(\mathsf {par})\) returns the public/secret key \(( \mathsf {pk}, \mathsf {sk})\), where \( \mathsf {pk}\in \mathbb {G}^{n_{ \mathsf {pk}}}\) for some \(n_{ \mathsf {pk}} \in {{\mathrm{poly}}}(\lambda )\). We assume that \( \mathsf {pk}\) implicitly defines a message space \(\mathcal {M}:=\mathbb {G}^n\) for some \(n\in {{\mathrm{poly}}}(\lambda )\).

  • The probabilistic signing algorithm \(\mathsf {Sign}( \mathsf {sk},[\mathbf {m}])\) returns a signature \(\sigma \in \mathbb {G}^{n_{\sigma }}\) for \(n_{\sigma } \in {{\mathrm{poly}}}(\lambda )\).

  • The deterministic verification algorithm \(\mathsf {Verify}( \mathsf {pk}, [\mathbf {m}],\sigma )\) only consists of pairing product equations and returns 1 (accept) or 0 (reject).

(Perfect correctness.) For all \(( \mathsf {pk}, \mathsf {sk})\leftarrow _{\textsc {r}}\mathsf {Gen}(\mathsf {par})\), all messages \([\mathbf {m}]\in \mathcal {M}\) and all \(\sigma \leftarrow _{\textsc {r}}\mathsf {Sign}( \mathsf {sk},[\mathbf {m}])\) we have \(\mathsf {Verify}( \mathsf {pk},[\mathbf {m}],\sigma )=1\).

Definition 5

(Unforgeability against chosen message attack). To an adversary \(\mathcal {A}\) and \(\mathsf {SPS}\) we associate the advantage function

$$\begin{aligned} \mathbf {Adv}^\mathrm {cma}_\mathsf {SPS}(\mathcal {A}) := \Pr \left[ [\mathbf {m}^*] \notin {\mathcal Q}_{\mathrm {msg}}\wedge \mathsf {Verify}( \mathsf {pk},[\mathbf {m}^*],\sigma ^*)=1 \left| \begin{array}{l} ( \mathsf {pk}, \mathsf {sk})\leftarrow _{\textsc {r}}\mathsf {Gen}(\mathsf {par}) \\ ([\mathbf {m}^*],\sigma ^*) \leftarrow _{\textsc {r}}\mathcal {A}^{\mathsf {Sign}\mathsf {O}(\cdot )}( \mathsf {pk}) \end{array} \right. \right] , \end{aligned}$$

where \(\mathsf {Sign}\mathsf {O}([\mathbf {m}])\) runs \(\sigma \leftarrow _{\textsc {r}}\mathsf {Sign}( \mathsf {sk},[\mathbf {m}])\), adds the vector \([\mathbf {m}]\) to \({\mathcal Q}_{\mathrm {msg}}\) (initialized with \(\emptyset \)) and returns \(\sigma \) to \(\mathcal {A}\). \(\mathsf {SPS}\) is said to be (unbounded) CMA-secure if for all PPT adversaries \(\mathcal {A}\), \(\mathbf {Adv}^\mathrm {cma}_\mathsf {SPS}(\mathcal {A})\) is negligible. \(\mathsf {SPS}\) is said to be one-time CMA-secure with corresponding advantage function \(\mathbf {Adv}^{\mathrm {ot}\text{- }\mathrm {cma}}_\mathsf {SPS}(\mathcal {A})\), if \(\mathcal {A}\) is restricted to make at most one query to oracle \(\mathsf {Sign}\mathsf {O}\).

3 One-Time CMA-Secure SPS

The scheme is given in Fig. 2 and its parameters are:

$$\begin{aligned} | \mathsf {pk}| = (n+1)k+\mathsf {RE}(\mathcal {D}_k),\qquad |\sigma | =k + 1. \end{aligned}$$

As defined in Sect. 2.2, \(\mathsf {RE}(\mathcal {D}_k)\) denotes the number of group elements needed to represent \([\mathbf {A}]_s\), where \(\mathbf {{A}} \leftarrow _{\textsc {r}}\mathcal {D}_k\). For k-Lin, we achieve 2 group elements in the signature for \(k=1\) and 3 group elements for \(k=2\). Moreover, we note that the verification needs k pairing product equations: for \(e(\sigma , \left[ \mathbf {A}\right] _{2}) = e([(1,\mathbf {m})]_1,[\mathbf {C}]_2)\) we need to pair the vector \(\sigma \) with every column of \(\left[ \mathbf {A}\right] _{2}\) and thus this check needs k pairing product equations.
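Since Fig. 2 is not reproduced in this text, the following Python sketch reconstructs \(\mathsf {SPS}_\mathsf {ot}\) from the warm-up in Sect. 1.2 and the parameter counts above; treat the exact key layout as our assumption. As before, all values are kept as exponents over \(\mathbb {Z}_q\), so the pairing check becomes plain matrix arithmetic.

```python
import secrets

q = 2**61 - 1  # toy prime; values are exponents standing in for group elements [.]_1, [.]_2


def rand_mat(rows, cols):
    return [[secrets.randbelow(q) for _ in range(cols)] for _ in range(rows)]


def mat_mul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]


def gen(n, k):
    A = rand_mat(k + 1, k)            # A <- D_k (uniform here)
    K = rand_mat(n + 1, k + 1)        # secret key
    return (mat_mul(K, A), A), K      # pk = ([C]_2 = [KA]_2, [A]_2), sk = K


def sign(K, m):
    return mat_mul([[1] + list(m)], K)                       # sigma = [(1,m^T) K]_1: k+1 elements


def verify(pk, m, sigma):
    C, A = pk
    return mat_mul(sigma, A) == mat_mul([[1] + list(m)], C)  # e(sigma,[A]_2) =? e([(1,m^T)]_1,[C]_2)


pk, sk = gen(n=2, k=1)
m = [secrets.randbelow(q) for _ in range(2)]
assert verify(pk, m, sign(sk, m))
```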

Fig. 2.

One-time \(\mathsf {CMA}\)-secure structure-preserving signature \(\mathsf {SPS}_\mathsf {ot}\) with message-space \(\mathcal {M}= \mathbb {G}_1^n\).

We will exploit the following lemma in the analysis of our scheme. Informally, the lemma says that \(\mathbf {m}\mapsto (1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\mathbf {K}\) is a secure information-theoretic one-time MAC even if the adversary first sees \((\mathbf {A},\mathbf {K}\mathbf {A})\).

Lemma 2

(Core lemma for adaptive soundness). Let n, k be integers. For any \(\mathbf {A}\in \mathbb {Z}_q^{(k+1) \times k}\) and any (possibly unbounded) adversary \(\mathcal {A}\),

$$\begin{aligned} \Pr \left[ \mathbf {m}^* \ne \mathbf {m}\wedge \mathbf {z}^\top = (1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }}) \mathbf {K}\left| \begin{array}{l} \mathbf {K}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{(n+1) \times (k+1)}\\ (\mathbf {z}, \mathbf {m}^*) \leftarrow _{\textsc {r}}\mathcal {A}^\mathcal{O(\cdot )}(\mathbf {{A}}, \mathbf {K}\mathbf {A}) \end{array}\right. \right] \le \frac{1}{q}, \end{aligned}$$
(4)

where \(\mathcal{O}(\mathbf {m}\in \mathbb {Z}_q^n)\) returns \((1,\mathbf {m}^\top ) \mathbf {K}\) and \(\mathcal {A}\) only gets a single call to \(\mathcal{O}\).

This lemma can be seen as an adaptive version of a special case of [36, Lemma 2] in that we fix \(t=1\), \(\mathbf {M}\) to be the matrix \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \in \mathbb {Z}_q^{1 \times (n+1)}\), and we use the fact that if \(\mathbf {m}^* \ne \mathbf {m}\), then \((1,\mathbf {m}^*) \notin span (\mathbf {M})\). In our adaptive version, \(\mathbf {m}\) may depend on \(\mathbf {K}\mathbf {A}\) but the proof is essentially the same as in [36]. Lemma 2 implies the security of \(\mathsf {SPS}_\mathsf {ot}\). Formal proofs of Lemma 2 and Theorem 1 are given in [35].
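For intuition, here is a brief sketch of the argument (our paraphrase; the formal proof is in [35]). Fix any non-zero \({\mathbf {a}^{\perp }}\in \mathbb {Z}_q^{1 \times (k+1)}\) with \({\mathbf {a}^{\perp }}\mathbf {A}= \mathbf {0}\), which exists since \(\mathbf {A}\) has only k columns, and write

$$\begin{aligned} \mathbf {K}= \mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }}, \qquad \mathbf {K}' \leftarrow _{\textsc {r}}\mathbb {Z}_q^{(n+1) \times (k+1)},\ \mathbf {u}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{n+1}. \end{aligned}$$

Then \(\mathbf {K}\mathbf {A}= \mathbf {K}'\mathbf {A}\) is independent of \(\mathbf {u}\), and the single oracle answer \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\mathbf {K}\) reveals, beyond \(\mathbf {K}'\), only the scalar \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\mathbf {u}\). A winning output must satisfy \(\mathbf {z}^{\!\scriptscriptstyle {\top }}= (1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {K}' + \bigl ((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {u}\bigr ){\mathbf {a}^{\perp }}\), and since \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\) and \((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\) are linearly independent for \(\mathbf {m}^* \ne \mathbf {m}\), the scalar \((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }})\mathbf {u}\) remains uniform over \(\mathbb {Z}_q\) given the adversary's view; hence the adversary succeeds with probability at most 1/q.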

Theorem 1

Under the \(\mathcal {D}_k\)-KerMDH Assumption in \(\mathbb {G}_2\), \(\mathsf {SPS}_\mathsf {ot}\) from Fig. 2 is a one-time \(\mathsf {CMA}\)-secure structure-preserving signature scheme.

4 Unbounded CMA-Secure SPS

4.1 Computational Core Lemma

We present a variant of the computational core lemma from [36, Lemma 3].

Lemma 3

(Computational core lemma for unbounded \(\mathsf {CMA}\) -security). For all adversaries \(\mathcal {A}\), there exists an adversary \(\mathcal {B}\) with \(\mathbf {T}(\mathcal {A}) \approx \mathbf {T}(\mathcal {B})\) and

$$\begin{aligned}&\Pr \left[ \begin{array}{c} \tau ^* \notin {\mathcal Q}_{\mathrm {tag}}\\ \wedge \; b' = b\\ \end{array} \;\left| \; \begin{array}{l} \mathbf {A}, \mathbf {B}\leftarrow _{\textsc {r}}\mathcal {D}_k\\ \mathbf {K}_0, \mathbf {K}_1 \leftarrow _{\textsc {r}}\mathbb {Z}_q^{(k+1) \times (k+1)}\\ (\mathbf {{P}}_0, \mathbf {{P}}_1) := (\mathbf {B}^{\!\scriptscriptstyle {\top }}\mathbf {K}_0, \mathbf {B}^{\!\scriptscriptstyle {\top }}\mathbf {K}_1)\\ \mathsf {pk}:= ([\mathbf {{P}}_0]_1, [\mathbf {{P}}_1]_1, [\mathbf {B}]_1, \mathbf {K}_0\mathbf {A}, \mathbf {K}_1\mathbf {A}, \mathbf {A})\\ b \leftarrow _{\textsc {r}}\{0,1\}; b' \leftarrow _{\textsc {r}}\mathcal {A}^{\mathcal{O}{_b}(\cdot ), \mathcal O^*(\cdot )}( \mathsf {pk}) \end{array}\right. \right] \\\le & {} \frac{1}{2} + 2Q \cdot \mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}) + Q/q, \end{aligned}$$

where

  • \(\mathcal{O}_b(\tau )\) returns \((\left[ b\mu {\mathbf {a}^{\perp }}+ \mathbf {r}^{\!\scriptscriptstyle {\top }}(\mathbf {{P}}_0 + \tau \mathbf {{P}}_1)\right] _{1}, \left[ \mathbf {r}^{\!\scriptscriptstyle {\top }}\mathbf {B}^{\!\scriptscriptstyle {\top }}\right] _{1}) \in (\mathbb {G}_1^{1 \times (k+1)})^2\) with \(\mu \leftarrow _{\textsc {r}}\mathbb {Z}_q, \mathbf {r}\leftarrow _{\textsc {r}}\mathbb {Z}_q^k\) and adds \(\tau \) to \({\mathcal Q}_{\mathrm {tag}}\). Here, \({\mathbf {a}^{\perp }}\) is a non-zero vector in \(\mathbb {Z}_q^{1 \times (k+1)}\) that satisfies \({\mathbf {a}^{\perp }}\mathbf {A}= \mathbf {0}\).

  • \(\mathcal{O}^*([\tau ^*]_2)\) returns \([\mathbf {K}_0 + \tau ^* \mathbf {K}_1]_2 \in \mathbb {G}_2^{(k+1) \times (k+1)}\). \(\mathcal {A}\) only gets a single call \(\tau ^*\) to \(\mathcal{O^*}\).

  • Q is the number of queries \(\mathcal {A}\) makes to \(\mathcal{O}_b\).

Compared to [36, Lemma 3], oracle \(\mathcal{O}^*\) is modified as follows. Instead of getting the tag \(\tau ^*\) and returning \(\mathbf {K}_0 + \tau ^* \mathbf {K}_1\) in the clear, both the query and the output are encoded in \(\mathbb {G}_2\). It is straight-forward to check that the proof goes through as in [36]:

  • the security reduction knows \(\mathbf {K}_0,\mathbf {K}_1\), and therefore it can compute \([\mathbf {K}_0 + \tau ^*\mathbf {K}_1]_2\) given \([\tau ^*]_2\);

  • the quantity \([\mathbf {K}_0 + \tau ^*\mathbf {K}_1]_2\) does not reveal any additional information about \(\mathbf {K}_0,\mathbf {K}_1\) beyond \(\mathbf {K}_0 + \tau ^* \mathbf {K}_1\).

For completeness, a formal proof of the lemma is given in [35].

4.2 Our Scheme

The scheme \(\mathsf {SPS}_\mathsf {full}\) is given in Fig. 3 and its parameters are:

$$\begin{aligned} | \mathsf {pk}| = (n+1)k+2(k+1)k+\mathsf {RE}(\mathcal {D}_k),\qquad |\sigma | = (3(k+1), 1), \end{aligned}$$

where notation (x, y) represents x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\). For k-Lin, this yields \((n+6,(6,1))\) for \(k=1\) and \((2n+16,(9,1))\) for \(k=2\). Moreover, we note that the verification needs \(2k+1\) pairing product equations: for \(e(\sigma _1, \left[ \mathbf {A}\right] _{2}) = e([(1,\mathbf {m})]_1,[\mathbf {C}]_2) \cdot e(\sigma _2, [\mathbf {C}_0]_2) \cdot e(\sigma _3, [\mathbf {C}_1]_2)\) we need to pair the vector \(\sigma _1\) with every column of \(\left[ \mathbf {A}\right] _{2}\) and thus this check needs k pairing product equations; and for \( e(\sigma _2, [\tau ]_2) = e(\sigma _3, [1]_2)\) we need to pair every element from \(\sigma _2\) with \([\tau ]_2 \in \mathbb {G}_2\) and thus this requires \(k+1\) pairing product equations.
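As a quick sanity check of these counts (our own helper, not part of the scheme):

```python
def sps_full_params(n, k, re_dk=None):
    """Return (|pk|, |sigma| split as (G1, G2), #equations) for SPS_full; RE(D_k) = k for k-Lin."""
    re_dk = k if re_dk is None else re_dk
    pk = (n + 1) * k + 2 * (k + 1) * k + re_dk
    return pk, (3 * (k + 1), 1), 2 * k + 1


assert sps_full_params(n=1, k=1) == (7, (6, 1), 3)   # SXDH, n = 1: 7 pk elements, 6+1 signature elements
assert sps_full_params(n=1, k=2) == (18, (9, 1), 5)  # 2-Lin,  n = 1: 2n+16 = 18 pk elements
```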

Fig. 3.

Structure-preserving signature \(\mathsf {SPS}_\mathsf {full}\) with message-space \(\mathcal {M}= \mathbb {G}_1^n\).

Theorem 2

Under the \(\mathcal {D}_k\)-MDDH Assumption in \(\mathbb {G}_1\) and \(\mathcal {D}_k\)-KerMDH Assumption in \(\mathbb {G}_2\), \(\mathsf {SPS}_\mathsf {full}\) from Fig. 3 is an unbounded \(\mathsf {CMA}\)-secure structure-preserving signature scheme.

Proof

Perfect correctness and the structure-preserving property are straight-forward. We proceed to establish the unbounded \(\mathsf {CMA}\)-security. We will show that for any adversary \(\mathcal {A}\) that makes at most Q signing queries, there exist adversaries \(\mathcal {B}_0, \mathcal {B}_1\) with \(\mathbf {T}(\mathcal {A}) \approx \mathbf {T}(\mathcal {B}_0) \approx \mathbf {T}(\mathcal {B}_1)\) and

$$\begin{aligned} \mathbf {Adv}^\mathrm {cma}_{\mathsf {SPS}_\mathsf {full}}(\mathcal {A}) \le \mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_0) + 2Q(Q+1) \cdot \mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_1) + (Q+1)^2/q + Q^2/2q. \end{aligned}$$
(5)

We proceed via a series of games and we use \(\mathbf {Adv}_i\) to denote the advantage of \(\mathcal {A}\) in Game i.

  • Game 0. This is the \(\mathsf {CMA}\)-security experiment from Definition 5.

    $$\mathbf {Adv}^\mathrm {cma}_{\mathsf {SPS}_\mathsf {full}}(\mathcal {A}) = \mathbf {Adv}_0$$
  • Game 1. Switch \(\mathsf {Verify}\) to \(\mathsf {Verify}^*\):

    \(\mathsf {Verify}^*\) is defined like \(\mathsf {Verify}\), except that the first pairing check is replaced by \(e(\sigma _1,[1]_2) = e([(1,\mathbf {m}^\top )]_1,[\mathbf {K}]_2) \cdot e(\sigma _2, [\mathbf {K}_0]_2) \cdot e(\sigma _3, [\mathbf {K}_1]_2)\), which the challenger can evaluate using the secret keys \(\mathbf {K}, \mathbf {K}_0, \mathbf {K}_1\).

    Suppose \(e(\sigma _2, [\tau ]_2) = e(\sigma _3, [1]_2)\). We note that

    $$\begin{aligned}&\qquad \! e(\sigma _1,[\mathbf {A}]_2) = e([(1,\mathbf {m}^\top )]_1,[\mathbf {C}]_2) \cdot e(\sigma _2, [\mathbf {C}_0]_2) \cdot e(\sigma _3, [\mathbf {C}_1]_2) \\&\Longleftrightarrow e(\sigma _1,[\mathbf {A}]_2) = e([(1,\mathbf {m}^\top )]_1,[\mathbf {K}\mathbf {A}]_2) \cdot e(\sigma _2, [\mathbf {K}_0 \mathbf {A}]_2) \cdot e(\sigma _3, [\mathbf {K}_1 \mathbf {A}]_2) \\&\Longleftarrow e(\sigma _1,[1]_2) = e([(1,\mathbf {m}^\top )]_1,[\mathbf {K}]_2) \cdot e(\sigma _2, [\mathbf {K}_0]_2) \cdot e(\sigma _3, [\mathbf {K}_1]_2) \\&\Longleftrightarrow e(\sigma _1,[1]_2) = e([(1,\mathbf {m}^\top )]_1,[\mathbf {K}]_2) \cdot e(\sigma _2, [\mathbf {K}_0 +\tau \mathbf {K}_1]_2) \end{aligned}$$

    Hence, for any \(([\mathbf {m}]_1,\sigma )\) that passes \(\mathsf {Verify}\) but not \(\mathsf {Verify}^*\), the value

    $$\sigma _1 - ( [(1,\mathbf {m}^\top ) \mathbf {K}]_1 + \sigma _2 \mathbf {K}_0 + \sigma _3 \mathbf {K}_1) \in \mathbb {G}_1^{1 \times (k+1)}$$

    is a non-zero vector in the kernel of \(\mathbf {A}\), which is hard to compute under the \(\mathcal {D}_k\)-\(\mathsf {KerMDH} \) assumption in \(\mathbb {G}_2\). This means that

    $$\begin{aligned} |\mathbf {Adv}_0 - \mathbf {Adv}_1| \le \mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_0). \end{aligned}$$
  • Game 2. Let \(\tau _1,\ldots ,\tau _Q\) denote the randomly chosen tags in the Q queries to \(\mathsf {Sign}\mathsf {O}\). We abort if \(\tau _1,\ldots ,\tau _Q\) are not all distinct.

    $$\begin{aligned} \mathbf {Adv}_2 \ge \mathbf {Adv}_1 - Q^2/2q. \end{aligned}$$
  • Game 3. We define \(\tau _{Q+1} := \tau ^*\). Now, pick \(i^* \leftarrow _{\textsc {r}}[Q+1]\) and abort if \(i^*\) is not the smallest index i for which \(\tau ^* = \tau _i\). In the rest of the proof, we focus on the case where we do not abort, which means that \(\tau ^* = \tau _{i^*}\) and \(\tau _1,\ldots ,\tau _{i^*-1}\) are all different from \(\tau ^*\). This means that, given a tag \(\tau \), the signing oracle can check whether \(\tau \) equals \(\tau ^*\): for the first \(i^*-1\) queries the answer is NO, and from the \(i^*\)'th query onwards, \(\tau ^*\) is known. It is easy to see that

    $$\begin{aligned} \mathbf {Adv}_3 \ge \frac{1}{Q+1} \mathbf {Adv}_2 . \end{aligned}$$
  • Game 4. Switch \(\mathsf {Sign}\mathsf {O}\) to \(\mathsf {Sign}\mathsf {O}^*\) where

    \(\mathsf {Sign}\mathsf {O}^*\) is defined like \(\mathsf {Sign}\mathsf {O}\), except that for tags \(\tau \ne \tau ^*\) it additionally adds a uniformly random multiple \(\mu {\mathbf {a}^{\perp }}\) (with \(\mu \leftarrow _{\textsc {r}}\mathbb {Z}_q\)) to the first signature component \(\sigma _1\).

    We will use Lemma 3 to show that

    $$\begin{aligned} |\mathbf {Adv}_3 - \mathbf {Adv}_4| \le 2Q \mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_1) + Q/q \end{aligned}$$

    Basically, we pick \(\mathbf {K}\) ourselves and use \(\mathcal{O}_b\) to simulate either \(\mathsf {Sign}\mathsf {O}\) or \(\mathsf {Sign}\mathsf {O}^*\) and \(\mathcal{O}^*\) to simulate \(\mathsf {Verify}^*\) as follows:

    1. For the i’th signing query \([\mathbf {m}]_1\) where \(i \ne i^*\), we query \(\mathcal{O}_b\) at \(\tau \leftarrow _{\textsc {r}}\mathbb {Z}_q\) to obtain

      $$\begin{aligned} (\sigma _1',\sigma _2) := (\left[ b\mu {\mathbf {a}^{\perp }}+ \mathbf {r}^{\!\scriptscriptstyle {\top }}(\mathbf {{P}}_0 + \tau \mathbf {{P}}_1)\right] _{1}, \left[ \mathbf {r}^{\!\scriptscriptstyle {\top }}\mathbf {B}^{\!\scriptscriptstyle {\top }}\right] _{1}), \end{aligned}$$

      and we return

      $$ (\sigma _1:= [(1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\mathbf {K}]_1 \cdot \sigma '_1,\; \sigma _2,\; \sigma _3 := \sigma _2 \tau ,\; \sigma _4 := [\tau ]_2)$$
    2. For the \(i^*\)’th signing query \([\mathbf {m}]_1\) where \(i^* \le Q\), we run \(\mathsf {Sign}\) honestly using our knowledge of \(\mathbf {K},[\mathbf {{P}}_0]_1,[\mathbf {{P}}_1],[\mathbf {B}]_1\).

    3. For \(\mathsf {Verify}^*\), we will query \(\mathcal{O}^*\) on \([\tau ^*]_2\) to get \([\mathbf {K}_0 + \tau ^* \mathbf {K}_1]_2\). The latter is sufficient to simulate the \(\mathsf {Verify}^*\) query by computing \(e(\sigma _2, [\mathbf {K}_0 + \tau ^*\mathbf {K}_1]_2)\).

    This allows us to then build a distinguisher for Lemma 3.

  • Game 5. Switch \(\mathbf {K}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{(n+1) \times (k+1)}\) in \(\mathsf {Gen}\) to \(\mathbf {K}:= \mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }}\), where \(\mathbf {K}' \leftarrow _{\textsc {r}}\mathbb {Z}_q^{(n+1) \times (k+1)}, \mathbf {u}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{n+1}\). Since \(\mathbf {u}{\mathbf {a}^{\perp }}\) is masked by a uniform matrix \(\mathbf {{K}}'\), \(\mathbf {{K}}\) in Game 5 is still uniformly random and thus Games 4 and 5 are identical. We have

    $$\mathbf {Adv}_5=\mathbf {Adv}_4.$$

    To conclude the proof, we bound the adversarial advantage in Game 5 via an information-theoretic argument. We first consider the information about \(\mathbf {u}\) leaked from \( \mathsf {pk}\) and signing queries:

    1. \(\mathbf {C}= (\mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }})\mathbf {A}= \mathbf {K}'\mathbf {A}\) completely hides \(\mathbf {u}\);

    2. the output of \(\mathsf {Sign}\mathsf {O}^*\) on \((\mathbf {m},\tau )\) for \(\tau \ne \tau ^*\) completely hides \(\mathbf {u}\), since \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) (\mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }}) + \mu {\mathbf {a}^{\perp }}\) is identically distributed to \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {K}' + \mu {\mathbf {a}^{\perp }}\) (namely, \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {u}\) is masked by \(\mu \leftarrow _{\textsc {r}}\mathbb {Z}_q\)).

    3. the output of \(\mathsf {Sign}\mathsf {O}^*\) on \(\tau ^*\) leaks \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) (\mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }})\), which is captured by \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {u}\).

    To convince \(\mathsf {Verify}^*\) to accept a signature \(\sigma ^*\) on \(\mathbf {m}^*\), the adversary must correctly compute

    $$\begin{aligned} (1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }}) (\mathbf {K}' + \mathbf {u}{\mathbf {a}^{\perp }}) \end{aligned}$$

    and thus \((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }}) \mathbf {u}\in \mathbb {Z}_q\). Given \((1,\mathbf {m}^{\!\scriptscriptstyle {\top }}) \mathbf {u}\), for any adaptively chosen \(\mathbf {m}^* \ne \mathbf {m}\), we have that \((1,{\mathbf {m}^*}^{\!\scriptscriptstyle {\top }}) \mathbf {u}\) is uniformly random over \(\mathbb {Z}_q\) from the adversary’s view-point. Therefore, \(\mathbf {Adv}_5 \le 1/q\). \(\square \)
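For completeness, the per-game bounds combine into Eq. (5) as follows:

$$\begin{aligned} \mathbf {Adv}^\mathrm {cma}_{\mathsf {SPS}_\mathsf {full}}(\mathcal {A}) = \mathbf {Adv}_0&\le \mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_0) + Q^2/2q + (Q+1)\,\mathbf {Adv}_3\\&\le \mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_0) + Q^2/2q + (Q+1)\bigl (2Q\,\mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_1) + (Q+1)/q\bigr ), \end{aligned}$$

where the first inequality uses \(|\mathbf {Adv}_0 - \mathbf {Adv}_1| \le \mathbf {Adv}^\mathrm {kmdh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_0)\), \(\mathbf {Adv}_2 \ge \mathbf {Adv}_1 - Q^2/2q\) and \(\mathbf {Adv}_3 \ge \mathbf {Adv}_2/(Q+1)\), and the second uses \(|\mathbf {Adv}_3 - \mathbf {Adv}_4| \le 2Q\,\mathbf {Adv}^\mathrm {mddh}_{\mathcal {D}_{k},\mathsf {GGen}}(\mathcal {B}_1) + Q/q\) together with \(\mathbf {Adv}_4 = \mathbf {Adv}_5 \le 1/q\); expanding the right-hand side gives exactly the bound in Eq. (5).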

4.3 Extension: SPS for Bilateral Message Spaces

Let \(\mathcal {M}:= \mathbb {G}_1^{n_1} \times \mathbb {G}_2^{n_2}\) be a message space. In Type III pairing groups, \(\mathcal {M}\) is bilateral if both \(n_1\ne 0\) and \(n_2\ne 0\); otherwise, \(\mathcal {M}\) is unilateral. We extend the construction from Sect. 4.2 to sign bilateral message spaces.

The main idea of our construction is to use the Even-Goldreich-Micali (EGM) framework [23] and a method of Abe et al. [2]: for \(\mathbf {m}=([\mathbf {m}_1]_1,[\mathbf {m}_2]_2)\in \mathbb {G}_1^{n_1}\times \mathbb {G}_2^{n_2}\) we sign \([\mathbf {m}_1]_1\) by using a one-time SPS with a fresh public key \(\mathsf {pk}_{\mathsf {ot}}\) over \(\mathbb {G}_2\) and then sign the message \(([\mathbf {m}_2]_2,\mathsf {pk}_{\mathsf {ot}})\) using an unbounded \(\mathsf {CMA}\)-secure SPS; the signature on \(([\mathbf {m}_1]_1,[\mathbf {m}_2]_2)\) is \(\mathsf {pk}_{\mathsf {ot}}\) together with the concatenation of both signatures. However, this yields long signatures, as \(\mathsf {pk}_{\mathsf {ot}}\) contains \(O(n_1 k)\) group elements for the best known one-time SPS. Next, we observe that our one-time SPS is in fact a so-called “two-tier” signature scheme [12], i.e., \(\mathsf {pk}_{\mathsf {ot}}\) can be decomposed into a reusable long primary key plus a one-time short secondary key which contains only k group elements. For the transformation sketched above it is sufficient to put the short secondary key in the signature, which leads to short signatures.

Details about our two-tier SPS and generic transformation are given in the full version [35]. The resulting unbounded \(\mathsf {CMA}\)-secure SPS for bilateral message spaces is shown in Fig. 4. Its parameters are: \( | \mathsf {pk}| = (n_1+n_2)k+3(k+1)k+2\mathsf {RE}(\mathcal {D}_k), |\sigma | = (4k+3,k+2) , \text { and } \#\text {equations}=3k+1\). Notation (x, y) represents x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\). Under the \(\mathsf {SXDH} \) assumption, our scheme achieves \((| \mathsf {pk}|,|\sigma |, \# \text {equations})=(n_1+n_2+8,(7,3),4)\). Compared with \((n_1+n_2+22,(8,6),5)\) of [2], we obtain better efficiency under standard assumptions. The following theorem is proved in the full version [35].

Theorem 3

Under the \(\mathcal {D}_k\)-\(\mathsf {MDDH} \) Assumption in \(\mathbb {G}_1\) and \(\mathcal {D}_k\)-\(\mathsf {KerMDH} \) Assumption in both \(\mathbb {G}_1\) and \(\mathbb {G}_2\), \(\mathsf {BSPS_{full}}\) from Fig. 4 is an unbounded \(\mathsf {CMA}\)-secure structure-preserving signature scheme.

Fig. 4.

Structure-preserving signature \(\mathsf {BSPS_{full}}\) for bilateral message spaces \(\mathcal {M}= \mathbb {G}_1^{n_1} \times \mathbb {G}_2^{n_2}\).

5 Security Against Random Message Attacks

In this section, we consider possible efficiency improvements on the structure-preserving signatures (SPS) from Sects. 3 and 4 for the weaker security notion of unforgeability against random message attacks (\(\mathsf {RMA}\)). Precisely, we obtain a one-time \(\mathsf {RMA}\)-secure SPS with signature size one less than that from Fig. 2 and an unbounded \(\mathsf {RMA}\)-secure SPS with signature size \(k+1\) less than that from Fig. 3. Figure 5 summarizes our results.

Our \(\mathsf {rSPS}_\mathsf {ot}\) is optimal for both the Type I and III settings: in the Type I setting, under the 2-\(\mathsf {Lin}\) assumption, \(\mathsf {rSPS}_\mathsf {ot}\) requires 2 elements and 2 verification equations, matching the lower bound for one-time \(\mathsf {RMA}\)-secure SPS from [8]; in the Type III setting, under the \(\mathsf {SXDH}\) assumption, \(\mathsf {rSPS}_\mathsf {ot}\) requires 1 element and 1 verification equation, which is clearly optimal.

Fig. 5.

Structure-preserving signatures secure against random message attacks for \(\mathcal {M}=\mathbb {G}_1^n\) in the Type I and III setting. For the Type I setting we have \(\mathbb {G}=\mathbb {G}_1=\mathbb {G}_2\). Notation (x, y) represents x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\).

5.1 Unforgeability Against Random Message Attacks

\(\mathsf {RMA}\)-security states that it is hard for an adversary to forge a signature even if he sees many signatures on randomly chosen messages. The security is formally defined as follows:

Definition 6

(Unforgeability against random message attacks). To an adversary \(\mathcal {A}\) and \(\mathsf {SPS}\) we associate the advantage function

$$\begin{aligned} \mathbf {Adv}^\mathrm {rma}_\mathsf {SPS}(\mathcal {A}) := \Pr \left[ [\mathbf {m}^*] \notin {\mathcal Q}_{\mathrm {msg}}\wedge \mathsf {Verify}( \mathsf {pk},[\mathbf {m}^*],\sigma ^*)=1 \left| \begin{array}{l} ( \mathsf {pk}, \mathsf {sk})\leftarrow _{\textsc {r}}\mathsf {Gen}(\mathsf {par}) \\ ([\mathbf {m}^*],\sigma ^*) \leftarrow _{\textsc {r}}\mathcal {A}^{\mathsf {Sign}\mathsf {O}()}( \mathsf {pk}) \end{array} \right. \right] , \end{aligned}$$

where \(\mathsf {Sign}\mathsf {O}()\) chooses a random message \([\mathbf {m}] \leftarrow _{\textsc {r}}\mathbb {G}^n\), runs \(\sigma \leftarrow _{\textsc {r}}\mathsf {Sign}( \mathsf {sk},[\mathbf {m}])\), adds the vector \([\mathbf {m}]\) to \({\mathcal Q}_{\mathrm {msg}}\) (initialized with \(\emptyset \)) and returns \(([\mathbf {m}],\sigma )\) to \(\mathcal {A}\). \(\mathsf {SPS}\) is said to be RMA-secure if for all PPT adversaries \(\mathcal {A}\), \(\mathbf {Adv}^\mathrm {rma}_\mathsf {SPS}(\mathcal {A})\) is negligible. \(\mathsf {SPS}\) is said to be one-time RMA-secure with corresponding advantage function \(\mathbf {Adv}^{\mathrm {ot}\text{- }\mathrm {rma}}_\mathsf {SPS}(\mathcal {A})\), if \(\mathcal {A}\) is restricted to make at most one query to oracle \(\mathsf {Sign}\mathsf {O}\).

5.2 One-Time RMA-Secure SPS

Motivated by the techniques used in [1, 34, 36] to obtain shorter QANIZK proofs for linear subspaces, we construct a one-time \(\mathsf {RMA}\)-secure SPS in Fig. 6 with the following parameters:

$$| \mathsf {pk}|=(n+1) k + \mathsf {RE}(\mathcal {D}_k), \qquad |\sigma |=k.$$

For k-Lin, this yields \((n+2,1)\) for \(k=1\) and \((2n+4,2)\) for \(k=2\). Moreover, we note that verification needs k pairing product equations for \(e(\sigma _1, \left[ \overline{\mathbf {A}}\right] _{2}) = e([(1,\mathbf {m})]_1,[\mathbf {C}]_2) \). Compared with \(\mathsf {SPS}_\mathsf {ot}\), we reduce the signature size by one element.

Fig. 6.

One-time \(\mathsf {RMA}\)-secure structure-preserving signature \(\mathsf {rSPS}_\mathsf {ot}\) with message-space \(\mathcal {M}= \mathbb {G}_1^n\). Recall that \(\overline{\mathbf {A}}\) denotes the upper \(k\times k\) submatrix of \(\mathbf {A}\).

Theorem 4

Under the \(\mathcal {D}_k\)-\(\mathsf {KerMDH} \) Assumption in \(\mathbb {G}_2\), \(\mathsf {rSPS}_\mathsf {ot}\) from Fig. 6 is a one-time \(\mathsf {RMA}\)-secure structure-preserving signature scheme.

Our proof is similar to that in [36, Theorem 2]. As we choose \(\mathbf {m}\in \mathbb {Z}_q^n\) in the security game ourselves, we can compute the kernel basis \(\mathbf {{M}}^\bot \in \mathbb {Z}_q^{(n+1) \times n}\) of \((1,\mathbf {m}^\top )\) such that \((1,\mathbf {m}^\top ) \cdot \mathbf {{M}}^\bot = \mathbf {{0}}\) and then we embed \(\mathbf {{M}}^\bot \) in the secret key \(\mathbf {{K}}\). This way we do not need to compute the kernel of \([\mathbf {{A}}]_2\) when answering the signing query. However, for the forgery \(\mathbf {m}^*\ne \mathbf {m}\), since \((1,\mathbf {m}^{*\top })\mathbf {{M}}^\bot \ne \mathbf {{0}}\), the adversary has to compute an element from the kernel to break \(\mathsf {RMA}\)-security, which is infeasible under the \(\mathcal {D}_k\)-\(\mathsf {KerMDH} \) Assumption.
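One concrete choice of this kernel basis (our own illustration) is

$$\begin{aligned} \mathbf {{M}}^\bot := \begin{pmatrix} -\mathbf {m}^{\!\scriptscriptstyle {\top }}\\ \mathbf {I}_n \end{pmatrix} \in \mathbb {Z}_q^{(n+1) \times n}, \qquad (1,\mathbf {m}^{\!\scriptscriptstyle {\top }})\,\mathbf {{M}}^\bot = -\mathbf {m}^{\!\scriptscriptstyle {\top }} + \mathbf {m}^{\!\scriptscriptstyle {\top }} = \mathbf {{0}}, \end{aligned}$$

and for any \(\mathbf {m}^* \ne \mathbf {m}\) we indeed have \((1,\mathbf {m}^{*\top })\,\mathbf {{M}}^\bot = (\mathbf {m}^* - \mathbf {m})^{\!\scriptscriptstyle {\top }} \ne \mathbf {{0}}\), as used in the argument above.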

Fig. 7.

An unbounded \(\mathsf {RMA}\)-secure structure-preserving signature \(\mathsf {rSPS}_\mathsf {full}\) with message-space \(\mathcal {M}= \mathbb {G}_1^{n} \) where \(n=n'+k+1 \ge k+1\).

5.3 Unbounded RMA-Secure SPS

Consider the scheme \(\mathsf {SPS}_\mathsf {full}\) from Fig. 3 with the modification that, in the signing algorithm, the vector \(\mathbf {{B}} \mathbf {r}\) is replaced by a uniformly random vector \(\mathbf {t}\leftarrow _{\textsc {r}}\mathbb {Z}_q^{k+1}\). Clearly, under the \(\mathcal {D}_k\)-\(\mathsf {MDDH} \) Assumption, this modified scheme is also a CMA-secure SPS. Suppose that the message space is \(\mathbb {G}_1^{n}\) with \(n=n'+k+1 \ge k+1\). Then we can view the random vector \([\mathbf {t}]_1\in \mathbb {G}_1^{k+1}\) as part of the message, which reduces the signature size from \(3k+4\) elements to \(2k+3\). The modified scheme is presented in Fig. 7. Its parameters are:

$$\begin{aligned} | \mathsf {pk}| = (n+1)k+2(k+1)k+\mathsf {RE}(\mathcal {D}_k),\qquad |\sigma | = (2(k+1), 1), \end{aligned}$$

where notation (x, y) represents x elements in \(\mathbb {G}_1\) and y elements in \(\mathbb {G}_2\). For k-Lin, \((| \mathsf {pk}|,|\sigma |)=(n+6,(4,1))\) for \(k=1\) and \((2n+16,(6,1))\) for \(k=2\). Moreover, we note that the verification needs \(2k+1\) pairing product equations. Compared to the \(\mathsf {SPS}_\mathsf {full}\) from Fig. 3, \(\mathsf {rSPS}_\mathsf {full}\) requires \((k+1)\) elements less in the signature.

Theorem 5

Under the \(\mathcal {D}_k\)-\(\mathsf {MDDH} \) Assumption in \(\mathbb {G}_1\) and \(\mathcal {D}_k\)-\(\mathsf {KerMDH} \) Assumption in \(\mathbb {G}_2\), \(\mathsf {rSPS}_\mathsf {full}\) from Fig. 7 is an unbounded \(\mathsf {RMA}\)-secure structure-preserving signature scheme.

The proof is given in [35].