Quantum-Resistant Forward-Secure Digital Signature Scheme Based on q-ary Lattices

In this paper, we design and analyze a new digital signature scheme with an evolving secret key, using random q-ary lattices as its domain. We prove that, in addition to offering classic eu-cma security, the scheme is existentially forward unforgeable under an adaptive chosen-message attack (fu-cma). We also prove that the secret keys are updated without revealing anything about the keys from prior periods. Moreover, we design a polynomial-time reduction showing that the ability to create a forgery leads to a feasible method of solving the well-known small integer solution (SIS) problem. Since the security of the scheme rests on the computational hardness of SIS, it is resistant to both classic and quantum attacks. In addition, the scheme follows the "Fiat-Shamir with aborts" approach, which foils transcript attacks. The key-updating mechanism is based on selected properties of binary trees, with the number of leaves equal to the number of time periods in the scheme. Forward security is obtained under the assumption that one of the two hash functions is modeled as a random oracle.


Introduction
This paper shows that the concepts and results from [1] may be adapted to a post-quantum domain. To this end, we design and analyze a new quantum-safe key-evolving digital signature scheme and prove its forward-security characteristics.
It is well known that the security of classic asymmetric crypto schemes is based on two computationally hard problems, namely factorization and the discrete logarithm problem (DLP) in finite groups of prime order. However, the hardness of these problems holds only in the classical computational model: due to the famous Shor's algorithm (and also the Kaye-Zalka algorithm), both problems turn out to be of polynomial complexity in the quantum model. These facts, in conjunction with the acceleration of research leading to the creation of a quantum computer with a sufficiently large quantum register, compelled the US National Security Agency (NSA) to publish, in 2015, a report indicating the need to increase the length of all Suite-B scheme keys and to urgently work out solutions that are resistant to these threats.
Soon thereafter, the National Institute of Standards and Technology (NIST) organized a competition to update its standards to include post-quantum cryptography. NIST pointed out five domains that were assumed to be a substrate for quantum-safe problems. All this triggered an immediate response from the crypto community, and a lot of effort has since been invested in research on a wide variety of post-quantum aspects.
After years of intensive research, and in particular after the breaking of SIDH and Rainbow, lattices have emerged as the most promising and flexible of all the suggested domains, serving as a substrate for modern asymmetric crypto primitives. These are the reasons why we have focused our considerations on the theory of lattices.
Signature schemes with an evolving private key [2], [3] are characterized by the fact that the entire lifetime of the public key is split into a certain number of sub-periods, to each of which a different secret key is assigned. More precisely, in the first step, both the public key and the initial secret key are generated and assigned to the first period; the initial key is then updated to the next period, and so on, until the last period is reached.
The natural security requirement associated with these schemes is that even if the secret key assigned to a specific sub-period has been revealed, it remains impossible to produce a forgery for any of the previous sub-periods. This requirement is formalized in the security model known as forward security [2].
It should be noted that for any scheme of this sort, the updating mechanism is of crucial importance for providing the required security. Therefore, in addition to the obvious requirement of unforgeability within separate time frames, it must be proven, above all, that the disclosure of a certain secret key reveals nothing about past periods.
In other words, this mechanism must ensure the non-trivial property that a secret key associated with a given time frame stores nothing but the data required for making current signatures and for generating the secret key for the next period.
The presented scheme follows the so-called "Fiat-Shamir with aborts" approach [4], [5], proposed by Lyubashevsky [5], which builds on an idea derived from statistics known as rejection sampling.
Although the design itself has been inspired by many papers concerning different aspects of lattice theory, [5]-[7] should be considered the key sources of the techniques behind our considerations. We were able to combine these different ideas to create a state-of-the-art, quantum-safe, forward-secure digital signature scheme. Vectors are presented in column form and are denoted by lower-case letters in bold print (e.g., x). We view a matrix as the set of its column vectors and denote it by capital letters in bold print (e.g., A). The i-th entry of a vector x is denoted by x_i, and the j-th column of a matrix A is denoted a_j or A[j].

Preliminaries
We identify a matrix A with the ordered set {a_j} of its column vectors and define the norm of a matrix A as the norm of its longest column, i.e., ∥A∥ = max_j ∥a_j∥. If the columns of A = {a_1, ..., a_k} are linearly independent, then A* = {a*_1, ..., a*_k} denotes the Gram-Schmidt orthogonalization of the vectors a_1, ..., a_k taken in that order. For A ∈ R^{n×m_1} and B ∈ R^{n×m_2}, having an equal number of rows, [A|B] ∈ R^{n×(m_1+m_2)} denotes the concatenation of the columns of A followed by the columns of B. Likewise, for A ∈ R^{n_1×m} and B ∈ R^{n_2×m}, having an equal number of columns, [A; B] ∈ R^{(n_1+n_2)×m} is the concatenation of the rows of A and the rows of B.

Let I be a countable set and let {X_n}_{n∈I}, {Y_n}_{n∈I} be two families of random variables, such that X_n, Y_n take values in a finite set R_n. We call {X_n}_{n∈I} and {Y_n}_{n∈I} statistically close if their statistical distance is negligible, where the statistical distance is defined as the following function of n [8]:

Δ(X_n, Y_n) = (1/2) Σ_{r ∈ R_n} | Pr[X_n = r] − Pr[Y_n = r] |.

We conclude this section with a very useful and nontrivial result, called the forking lemma. A general version of this assertion is presented here; the basic forking lemma may be found in [9].

Lemma 1 (General forking lemma [10]). Fix an integer ϱ ≥ 1 and a set H of size h ≥ 2. Let A be a randomized algorithm that, on input x, h_1, ..., h_ϱ, returns a pair, the first element of which is an integer in the range 0, ..., ϱ and the second of which is referred to as a side output. Let IG be a randomized algorithm called the input generator. The accepting probability of A, denoted acc, is defined as the probability that J ≥ 1 in the experiment: x ← IG; h_1, ..., h_ϱ ← H; (J, σ) ← A(x, h_1, ..., h_ϱ).
The forking algorithm F_A associated with A is a randomized algorithm that processes an input x in the following manner: pick random coins ρ for A and h_1, ..., h_ϱ ← H; run (J, σ) ← A(x, h_1, ..., h_ϱ; ρ); if J = 0, return (0, ε, ε); otherwise pick fresh h'_J, ..., h'_ϱ ← H and run (J', σ') ← A(x, h_1, ..., h_{J−1}, h'_J, ..., h'_ϱ; ρ); if J = J' and h_J ≠ h'_J, return (1, σ, σ'); otherwise return (0, ε, ε). Denoting by frk the probability that F_A outputs a triple with first component 1, the lemma asserts that frk ≥ acc(acc/ϱ − 1/h).
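The rerun-with-fresh-oracle-answers logic of a forking algorithm can be sketched as follows. This is a toy Python model under our own interface assumptions: `adversary`, `h_space`, and the toy adversary `toy` are illustrative stand-ins, not the paper's formalism.

```python
import random

def fork(adversary, rho, h_space, seed):
    """Forking sketch: run the adversary twice on the same coins, replaying
    the oracle answers before the critical index j and refreshing the
    answers from j onward."""
    rng = random.Random(seed)
    coins = rng.random()                                  # A's random tape (toy)
    hs = [rng.choice(h_space) for _ in range(rho)]        # first run's answers
    j, sigma = adversary(coins, hs)
    if j == 0:                                            # A failed: no fork
        return (0, None, None)
    hs2 = hs[:j - 1] + [rng.choice(h_space) for _ in range(rho - j + 1)]
    j2, sigma2 = adversary(coins, hs2)                    # second (forked) run
    if j == j2 and hs[j - 1] != hs2[j - 1]:               # usable fork
        return (1, sigma, sigma2)
    return (0, None, None)

# Toy adversary: always "succeeds" at index 1 and outputs the answer it saw.
toy = lambda coins, hs: (1, hs[0])
```

A successful fork yields two side outputs produced from distinct answers to the same critical query, which is exactly what the reduction in the security proof needs.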

Lattices
Here, lattices are defined and some of their basic properties are presented. A comprehensive introduction to this theory may be found in [11].
It turns out that for every lattice L there is a set of linearly independent vectors B = {b_1, ..., b_k} such that every lattice point is an integer linear combination of vectors from B, i.e., L = L(B) = { Σ_{i=1}^{k} z_i b_i : z_i ∈ Z }. Such a set B is called a basis of L, and k is called the rank of L. With a basis B we associate the fundamental parallelepiped P(B) = { Σ_{i=1}^{k} x_i b_i : 0 ≤ x_i < 1 }. Its measure does not depend on the choice of basis of the lattice, i.e., if L = L(B_1) = L(B_2), then vol P(B_1) = vol P(B_2). Hence the measure of the fundamental parallelepiped is an invariant of the lattice; it is called the determinant of the lattice L and is denoted by det L.
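The "integer combinations of a basis" picture can be made concrete by enumerating a small lattice directly (a toy Python check; the basis below is an arbitrary example):

```python
import itertools

def lattice_points(B, coeff_range):
    """Enumerate the points z_1*b_1 + ... + z_k*b_k for integer z_i drawn
    from coeff_range; B is a list of basis vectors."""
    dim = len(B[0])
    pts = set()
    for zs in itertools.product(coeff_range, repeat=len(B)):
        pts.add(tuple(sum(z * b[i] for z, b in zip(zs, B)) for i in range(dim)))
    return pts

B = [(2, 0), (1, 2)]                       # a rank-2 basis in Z^2
pts = lattice_points(B, range(-3, 4))
```

For instance, (3, 2) = b_1 + b_2 is a lattice point, whereas (1, 0) is not, since 2z_1 + z_2 = 1 together with 2z_2 = 0 has no integer solution.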
Let B_m(0, r) = { x ∈ R^m : ∥x∥ < r } denote the open sphere of radius r centered at 0. For any m-dimensional lattice L, the i-th minimum λ_i(L) is the smallest radius r > 0 such that B_m(0, r) contains i linearly independent lattice vectors: λ_i(L) = min{ r > 0 : dim span(L ∩ B_m(0, r)) ≥ i }. For any lattice L of rank k, λ_1(L) ≤ ... ≤ λ_k(L) are lattice constants, and λ_1(L) is the length of the shortest nonzero vector in L, i.e., λ_1(L) = min_{v ∈ L \ {0}} ∥v∥.
There are several natural computational problems relating to lattices; for an evaluation of the complexity of different types of lattice problems, see, e.g., [11], [12]. Due to the overall concept underpinning the paper, we recall the notion of the approximate shortest independent vector problem (SIVP).

Definition 2. For a given basis B of an m-dimensional lattice L = L(B) of rank k, the goal of the approximate shortest independent vector problem (SIVP_γ) is to find a set of k linearly independent lattice vectors S = {s_1, ..., s_k} ⊂ L such that ∥S∥ ≤ γ(m) · λ_k(L), where the approximation factor γ = γ(m) is a function of the dimension m.
We point out that the most famous and commonly used version of the SIVP_γ problem concerns the case of full-rank lattices, i.e., when k = m.
A full-rank lattice L is called an integer lattice if L ⊂ Z^m; an integer lattice is called a q-ary lattice if qZ^m ⊂ L ⊂ Z^m, where q ∈ Z_{≥1}. Additionally, Z^m/L is then a finite group. Let n, m, q ∈ Z_{≥1}, n < m, and let A ∈ Z^{n×m}_q be a full-rank matrix. Below we define the q-ary lattice that is the main object of this paper:

L⊥_q(A) = { x ∈ Z^m : Ax ≡ 0 (mod q) }.
By definition, L⊥_q(A) ⊂ Z^m, and for any x ∈ qZ^m we have Ax ≡ 0 (mod q), so qZ^m ⊂ L⊥_q(A) ⊂ Z^m. This means that L⊥_q(A) is indeed a full-rank q-ary lattice of dimension m. It is worth mentioning that the matrix A which induces L = L⊥_q(A) is called a parity-check matrix and is not a basis of L. Moreover, it turns out that if q is a prime, then |det L⊥_q(A)| = |Z^m/L⊥_q(A)| = q^n. For a given vector u ∈ Z^n_q, the lattice L⊥_q(A) is associated with another object, defined by:

L^u_q(A) = { x ∈ Z^m : Ax ≡ u (mod q) }.

If v ∈ L^u_q(A), then L^u_q(A) and L⊥_q(A) are related by L^u_q(A) = v + L⊥_q(A).
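These containments and the coset relation can be checked mechanically on toy parameters (Python sketch; the matrix A and the vectors below are arbitrary examples):

```python
def in_lperp(A, x, q):
    """x lies in L_q^perp(A) iff A x ≡ 0 (mod q)."""
    return all(sum(a * b for a, b in zip(row, x)) % q == 0 for row in A)

q = 7
A = [[1, 2, 3, 4],
     [2, 0, 1, 5]]                      # n = 2, m = 4

x = (1, 6, 5, 0)                        # a lattice point: A x = (28, 7) ≡ (0, 0)
qe = (q, 0, 0, 0)                       # q·e_1, a point of qZ^m
v = (1, 0, 0, 0)                        # A v =: u, so v ∈ L_q^u(A)
shifted = tuple(vi + xi for vi, xi in zip(v, x))   # v plus a lattice point
u = [sum(a * b for a, b in zip(row, v)) % q for row in A]
u2 = [sum(a * b for a, b in zip(row, shifted)) % q for row in A]
```

The last two lines confirm that shifting v by any element of L⊥_q(A) keeps the syndrome u unchanged, i.e., v + L⊥_q(A) ⊆ L^u_q(A).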

Discrete Gaussian
For any real s > 0 and c ∈ R^m, the Gaussian function ρ_{s,c} centered on c with parameter s is defined as:

ρ_{s,c}(x) = exp(−π ∥x − c∥² / s²),

and we abbreviate ρ_{s,0} as ρ_s; from the definition, ρ_s(x) = ρ(s^{−1}x), where ρ = ρ_1. Let L ⊆ Z^m be a lattice and let ρ_{s,c}(L) = Σ_{x∈L} ρ_{s,c}(x). The discrete Gaussian distribution over L with center c and parameter s is defined as:

D_{L,s,c}(x) = ρ_{s,c}(x) / ρ_{s,c}(L), for x ∈ L.

For notational convenience, D_{L,s,0} is abbreviated as D_{L,s}, and D_{Z^m,s} as D^m_s. We summarize several facts from the literature about discrete Gaussians over lattices, again restricted to our area of interest.

Lemma 3. Let n < m and let T_A be any basis of L⊥_q(A) for some A ∈ Z^{n×m}_q whose columns generate Z^n_q; let u ∈ Z^n_q and c ∈ Z^m be arbitrary, and let s ≥ ∥T*_A∥ · ω(√log n). Then for e ← D_{L^u_q(A),s,c} we have ∥e − c∥ ≤ s√m, and a polynomial-size set of independent samples from D_{L⊥_q(A),s} contains a set of m linearly independent vectors, except with negligible probability in n (see [6], [15]).

Theorem 4. There is a probabilistic polynomial-time algorithm SampleD that, given a basis B of an m-dimensional lattice L = L(B), a parameter s ≥ σ_1(B) · ω(√log m), and a center c ∈ R^m, outputs a sample from a distribution that is statistically close to D_{L,s,c}. This algorithm is an alternative to the SampleD algorithm given in [17]; it is more efficient and fully parallelizable. The value σ_1(B) is the largest singular value of B, which is never smaller than ∥B*∥ but is also not much larger in most important cases; see [16] for details.

Lemma 5. Let n, m, q ∈ Z_{>0} with q prime and let m ≥ 2n lg q. Then for all but a q^{−n} fraction of all A ∈ Z^{n×m}_q, the subset-sums of the columns of A generate Z^n_q; i.e., for every syndrome u ∈ Z^n_q there is x ∈ {0,1}^m such that u ≡ Ax (mod q) [14], [17], [18].

Lemma 6. Let n, m, q ∈ Z_{>0} with q prime, m ≥ 2n lg q, and s ≥ ω(√log m). If the columns of A ∈ Z^{n×m}_q generate Z^n_q, then for fixed u ∈ Z^n_q and an arbitrary solution t ∈ Z^m to At ≡ u (mod q), the conditional distribution of e ← D^m_s, given Ae ≡ u (mod q), is exactly t + D_{L⊥_q(A),s,−t} [17].

Lemma 7. Let n, m, q ∈ Z_{>0} with q prime and let m ≥ 2n lg q. Then, for all but a 2q^{−n} fraction of all A ∈ Z^{n×m}_q and for any s ≥ ω(√log m), the distribution of the syndrome u ≡ Ax (mod q) is statistically close to uniform over Z^n_q, where x ← D^m_s. Moreover, the conditional distribution of x ← D^m_s, given u ≡ Ax (mod q), is D_{L^u_q(A),s} [7], [11].

Now a variant of the rejection sampling algorithm can be formulated, in which one generates samples from a desired probability distribution by using samples from another distribution. This method will enable us to achieve transcript security of the presented signature scheme. We start with a lemma that allows us to bound the success probability of the algorithm.

Lemma 8. For any c ∈ Z^m, if s = α∥c∥ for α ∈ R_{>0}, then [5]:

Pr_{x ← D^m_s} [ D^m_s(x) / D^m_{s,c}(x) < exp(12/α + 1/(2α²)) ] ≥ 1 − 2^{−100}.

Theorem 9 (Rejection sampling). Let V be a subset of Z^m in which all elements have norms less than T, let s ∈ R be such that s = ω(T√log m), and let χ : V → R be a probability distribution. Then there exists a constant M = O(1) such that the distribution of the following algorithm A [5]:

1) c ← χ,
2) z ← D^m_{s,c},
3) output (z, c) with probability min( D^m_s(z) / (M · D^m_{s,c}(z)), 1 ),

is statistically close to the distribution of the following algorithm F: sample c ← χ and z ← D^m_s, and output (z, c) with probability 1/M. Moreover, the probability that A outputs something is at least approximately 1/M. More specifically, if s = αT for any α ∈ R_{>0}, then M = exp(12/α + 1/(2α²)), the output of algorithm A is within statistical distance 2^{−100}/M of the output of F, and the probability that A outputs something is at least (1 − 2^{−100})/M.
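The acceptance step of Theorem 9 can be simulated in one dimension. The sketch below is a toy Python experiment under our own assumptions: the Gaussian is truncated to a finite window, and the parameters s, c, and the bound are illustrative rather than taken from the scheme.

```python
import math, random

def rho(s, c, x):
    """Gaussian weight exp(-pi (x-c)^2 / s^2)."""
    return math.exp(-math.pi * (x - c) ** 2 / s ** 2)

def sample_shifted(s, c, rng, bound=60):
    """Sample a (truncated) discrete Gaussian over Z centered at c."""
    xs = list(range(-bound, bound + 1))
    return rng.choices(xs, weights=[rho(s, c, x) for x in xs])[0]

def rejection_sample(s, c, M, rng):
    """Turn samples of D_{s,c} into samples of the centered D_s by accepting
    z with probability min(1, rho_s(z) / (M * rho_{s,c}(z)))."""
    while True:
        z = sample_shifted(s, c, rng)
        if rng.random() <= min(1.0, rho(s, 0, z) / (M * rho(s, c, z))):
            return z

rng = random.Random(0)
s, c = 20.0, 2.0                              # alpha = s/|c| = 10
M = math.exp(12 / 10 + 1 / (2 * 10 ** 2))     # M from Lemma 8's bound
samples = [rejection_sample(s, c, M, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
```

Even though every raw sample is drawn from a distribution centered at c, the accepted samples have (empirically) mean close to 0, i.e., the shift no longer leaks through the output.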

Trapdoors for Lattices
Lemma 10. Let δ ∈ R_{>0} be any fixed constant and let q ∈ Z_{≥3} be odd. There is a universal constant C ∈ R_{>0} and a probabilistic polynomial-time algorithm TrapGen that, on input of a uniformly random A_1 ∈ Z^{n×m_1}_q, for any m_1 ≥ d = ⌈(1 + δ)n lg q⌉ and any integer m_2 ≥ (4 + 2δ)n lg q, outputs matrices U ∈ Z^{m_2×m_2}, G, R ∈ Z^{m_1×m_2}, P ∈ Z^{m_2×m_1}, such that U is nonsingular and (GP + 1_{m_1}) ⊂ L⊥_q(A_1) [19]. Moreover:
• T_A ∈ Z^{m×m}, given by an explicit formula in terms of U, G, R, and P, is a short basis of L⊥_q(A) and satisfies ∥T_A∥ ≤ Cn log q = O(n log q) with probability 1 − 2^{−Ω(n)} over the choice of R.
Guided by a practical perspective, the above lemma can be reformulated in the following manner.

Theorem 11. For n ∈ Z_{≥1}, an odd q ∈ Z_{≥3}, and an integer m ≥ 6n lg q, there is a probabilistic polynomial-time algorithm TrapGen that, on input q, n, m, outputs A ∈ Z^{n×m}_q and T_A ∈ Z^{m×m}, where A is (m · q^{−n/6})-uniform over Z^{n×m}_q and T_A is a short (good) basis of L⊥_q(A), except with negligible probability in n.

Small Integer Solution Problems
The small integer solution (SIS) problem [13], [18] is to find a short nonzero integer solution x ∈ Z^m to the homogeneous linear system Sx ≡ 0 (mod q) for a uniformly random S ∈ Z^{n×m}_q.

Definition 12. The small integer solution problem SIS_{q,n,m,β} (in the ℓ_2 norm) is: given q ∈ Z_{≥1}, a uniformly random matrix S ← Z^{n×m}_q, and β ∈ R_{>0}, find a nonzero integer vector x ∈ Z^m such that Sx ≡ 0 (mod q) and ∥x∥ ≤ β [5], [13].
Equivalently, the SIS problem asks to find a vector x ∈ L⊥_q(S) \ {0} with ∥x∥ ≤ β. It turns out that the distribution induced by SIS has decent properties, so it is convenient to introduce its formal definition.

Definition 13. SIS_{q,n,m,d} distribution: choose a uniformly random matrix S ← Z^{n×m}_q and a vector e ← {−d, ..., 0, ..., d}^m, and output (S, Se mod q) [5].

Remark 14. If d ≫ q^{n/m}, then the SIS_{q,n,m,d} distribution is statistically close to uniform over Z^{n×m}_q × Z^n_q (by the leftover hash lemma) [5].

An inhomogeneous variant of the SIS problem, called ISIS, is presented below.

Definition 15. The inhomogeneous small integer solution problem ISIS_{q,n,m,β} (in the ℓ_2 norm) is: given q ∈ Z_{≥1}, a uniformly random matrix S ← Z^{n×m}_q, a uniformly random vector u ∈ Z^n_q, and β ∈ R_{>0}, find an integer vector x ∈ Z^m such that Sx ≡ u (mod q) and ∥x∥ ≤ β [17].
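On toy parameters, Definition 12 can be solved by plain exhaustive search; the Python sketch below is only a demonstration (the instance is arbitrary, and the search is exponential in m):

```python
import itertools, math

def solve_sis(S, q, beta, d=1):
    """Brute-force SIS for tiny parameters: find a nonzero x in {-d..d}^m
    with S x ≡ 0 (mod q) and ||x|| <= beta."""
    n, m = len(S), len(S[0])
    for x in itertools.product(range(-d, d + 1), repeat=m):
        if any(x) and math.sqrt(sum(v * v for v in x)) <= beta:
            if all(sum(S[i][j] * x[j] for j in range(m)) % q == 0
                   for i in range(n)):
                return x
    return None

S = [[1, 2, 3, 6]]                     # n = 1, m = 4, q = 7
x = solve_sis(S, q=7, beta=2.0)
```

For this instance a solution such as (1, 0, 0, 1) exists, since 1 + 6 ≡ 0 (mod 7); the point of the hardness assumption is that no such search scales to cryptographic dimensions.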
Both problems turn out to be as hard as the worst-case SIVP problem (see subsection 2.1).

Theorem 16. For any positive integers n, m, real β = poly(n), and prime q ≥ β · ω(√(n log n)), the average-case problems SIS_{q,n,m,β} and ISIS_{q,n,m,β} are as hard as the worst-case problem SIVP_γ for some γ = β · Õ(√n) [13], [17].
The next lemma plays an important role in the proof of the main result; it is inspired by [15].

Lemma 17. Assume that d ∈ Z_{≥9} and m > (24 + n lg q) / lg(2d + 1).
Then, for any matrix A ∈ Z^{n×m}_q and for a uniformly random e ← {−d, ..., 0, ..., d}^m, there exists, with overwhelming probability, an e' ≠ e in {−d, ..., 0, ..., d}^m with Ae ≡ Ae' (mod q).

Proof. The matrix A can be viewed as an operator transforming Z^m into Z^n_q; therefore #A(Z^m) ≤ q^n. This means that the set {−d, ..., 0, ..., d}^m ⊂ Z^m contains at most q^n elements that do not collide with any other element. Since #{−d, ..., 0, ..., d}^m = (2d + 1)^m, which by the assumption on m exceeds 2^{24} q^n, the probability that a uniformly random element of {−d, ..., 0, ..., d}^m has no colliding partner is at most q^n / (2d + 1)^m < 2^{−24}.
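The counting behind this pigeonhole argument can be checked numerically. We read the garbled hypothesis as m > (24 + n lg q)/lg(2d + 1) (an assumption on our part), and the parameter values below are purely illustrative:

```python
import math

# A maps {-d..d}^m into at most q^n syndromes; once (2d+1)^m exceeds
# 2^24 * q^n, all but a 2^-24 fraction of the domain must collide.
d, q, n = 9, 17, 4
m = math.ceil((24 + n * math.log2(q)) / math.log2(2 * d + 1)) + 1
domain = (2 * d + 1) ** m              # size of {-d..d}^m
syndromes = q ** n                     # size of the image Z_q^n
collision_free_fraction = syndromes / domain
```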

One-way Preimage Sampleable Functions
This subsection presents some important mechanisms that ensure the high level of security of the proposed scheme. We start by recalling the definition of a one-way preimage sampleable function.

Definition 18. For a given value n of the security parameter, a family of preimage sampleable functions (PSFs) is a tuple of PPT algorithms (TrapGen, SampleDom, SamplePre) satisfying the following conditions [17]:

• Generating a representative with a trapdoor: TrapGen(q, n, m) outputs (A, T) with overwhelming probability, where A is the description of an efficiently-computable (representative) function f_A : D_n → R_n (for some efficiently-recognizable domain D_n and range R_n depending on n), and T is some trapdoor information for f_A. For the remaining properties, fix some (A, T) ← TrapGen(q, n, m).

• Domain sampling with almost uniform output:
SampleDom(1^n) samples an x ∈ D_n from some (possibly non-uniform) distribution over D_n, for which the distribution of f_A(x) is statistically close to uniform over R_n.
• Preimage sampling with trapdoor: for every y ∈ R_n, SamplePre(T, y) samples from a distribution that is statistically close to the conditional distribution of x ← SampleDom(1^n), given f_A(x) = y.

Definition 19. For a given value n of the security parameter, a PSF family is called a one-way PSF family if the following additional condition holds [17]:

• One-wayness without a trapdoor: for any PPT algorithm A, the probability that A(A, y) outputs an x ∈ D_n such that f_A(x) = y is negligible, where the probability is taken over the choice of A, the target value y ← R_n chosen uniformly at random, and A's random coins.

We now show a one-way PSF design based on the average-case hardness of ISIS. This is adapted directly from [17] and plays a key role in the main part of this paper. To this end, let q ∈ Z_{≥3} be odd and let m ≥ 6n log q be an integer:

1) TrapGen(q, n, m) outputs A ∈ Z^{n×m}_q and a trapdoor T_A ∈ Z^{m×m}, where A is statistically close to a uniform matrix in Z^{n×m}_q, and T_A is a short basis of L⊥_q(A), except with negligible probability in n.
2) The input distribution is D^m_s and is sampled using the algorithm SampleD from Theorem 4 with the standard basis of Z^m. Correctness holds because a sample e ← D^m_s lands in the domain D_n (except with negligible probability), by Lemma 3, and, for all but an exponentially small fraction of all A ∈ Z^{n×m}_q, f_A(e) is statistically close to uniform over R_n, by Lemma 7.
3) Preimage sampling with trapdoor is conducted by the trapdoor inversion algorithm called, in this specific case, SampleISIS, which takes A, T_A, s, and u ∈ Z^n_q as input and samples from f^{−1}_A(u) as follows:
• choose an arbitrary t ∈ Z^m such that At ≡ u (mod q) (by Lemma 5, such a t exists for all but an at most q^{−n} fraction of A; such a t, possibly of relatively large norm, can be found efficiently via elementary linear algebra);
• sample v ← SampleD(T_A, s, −t) and output e = t + v.
Theorem 4 implies that SampleD samples from a distribution that is statistically close to D_{L⊥_q(A),s,−t}. Then, by Lemma 6, SampleISIS samples from the appropriate conditional distribution D_{L^u_q(A),s}. It is important to sample the input from the discrete Gaussian D^m_s, rather than sampling from a continuous Gaussian over R^m with parameter s and rounding each coordinate to the nearest integer (see [17] for details).

Theorem 20. The algorithms described above give a family of one-way PSFs if ISIS_{q,n,m,s√m} is hard [17].

We summarize the most important ideas of the preceding discussion in the following theorem.

Theorem 21. Let n, m, q ∈ Z_{>0} be such that q is a prime and m ≥ 6n lg q. There is a PPT algorithm, SampleISIS, that, on input A ∈ Z^{n×m}_q, its associated trapdoor T_A, a parameter s, and a syndrome u ∈ Z^n_q, outputs e ∈ Z^m from a distribution statistically close to D_{L^u_q(A),s}. Furthermore, Ae ≡ u (mod q) with overwhelming probability.

We conclude this subsection with a useful generalization of SampleISIS that is quite important both theoretically and practically. In the notation of Theorem 21, let u_1, u_2, ..., u_k be an ordered set of vectors, which are the columns of a matrix U ∈ Z^{n×k}_q. Let k-SampleISIS be an algorithm that takes on input A ∈ Z^{n×m}_q, its associated trapdoor T_A, a parameter s,

and U. It works as follows:
• For subsequent j's from 1 to k, SampleISIS(A, T_A, s, u_j) is run to output e_j ∈ Z^m from the distribution D_{L^{u_j}_q(A),s}.
• Given the ordered set {e_j}_{j∈[k]}, the matrix E = {e_1, ..., e_k} is the output.
As the matrix U induces the q-ary lattices L^{u_j}_q(A), and they in turn induce a set of discrete Gaussian distributions D_{L^{u_j}_q(A),s}, the k-SampleISIS algorithm outputs E from the joint distribution of (D_{L^{u_j}_q(A),s})_{j∈[k]}. To ease notation, this distribution will be denoted by D^{k,U}_{q,A,s}. In addition, note that if E ← k-SampleISIS(A, T_A, s, U), then A, U, and E are related by the formula AE ≡ U (mod q). Thereby, we have proven the following theorem.

Theorem 22. Let n, k, m, q ∈ Z_{>0} be such that q is prime and m ≥ 6n lg q. There is a PPT k-SampleISIS algorithm that, on input A ∈ Z^{n×m}_q, its associated trapdoor T_A, a parameter s, and U ∈ Z^{n×k}_q, outputs E from D^{k,U}_{q,A,s}. Furthermore, the matrices A, U, and E are related by the formula AE ≡ U (mod q) with overwhelming probability.
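The "elementary linear algebra" step used inside SampleISIS — finding an arbitrary t with At ≡ u (mod q) — can be sketched as Gaussian elimination over Z_q. This is only the linear-solve step, not the sampler itself; q is assumed prime so that modular inverses exist.

```python
def solve_mod(A, u, q):
    """Find any t with A t ≡ u (mod q), q prime, by row reduction over Z_q;
    free variables are set to 0.  This yields the (possibly large-norm) t
    that SampleISIS later recenters using the trapdoor."""
    n, m = len(A), len(A[0])
    M = [row[:] + [u[i]] for i, row in enumerate(A)]     # augmented matrix
    piv_cols, r = [], 0
    for c in range(m):
        p = next((i for i in range(r, n) if M[i][c] % q), None)
        if p is None:
            continue                                      # no pivot in column c
        M[r], M[p] = M[p], M[r]
        inv = pow(M[r][c], -1, q)                         # modular inverse
        M[r] = [(x * inv) % q for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] % q:
                f = M[i][c]
                M[i] = [(x - f * y) % q for x, y in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
        if r == n:
            break
    t = [0] * m
    for i, c in enumerate(piv_cols):
        t[c] = M[i][m] % q
    return t
```

Note that `pow(x, -1, q)` (Python 3.8+) computes the modular inverse needed to normalize each pivot.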

Base Extension Mechanism
In subsection 2.3 we showed that a random lattice from our family of interest can be generated together with a relatively good (short) basis. In this case, we say that the lattice is under control. The following theorem shows that we may extend this control to an arbitrary higher-dimensional extension of the lattice, without any loss of quality in the resulting basis.

Theorem 23. There is a deterministic polynomial-time algorithm ExtBasis with the following properties: given an arbitrary matrix A_1 ∈ Z^{n×m_1}_q whose columns generate the entire group Z^n_q, an arbitrary basis T_{A_1} ∈ Z^{m_1×m_1} of L⊥_q(A_1), and an arbitrary A_2 ∈ Z^{n×m_2}_q, it outputs a basis T_A of L⊥_q(A) for A = [A_1 | A_2] ∈ Z^{n×(m_1+m_2)}_q such that ∥T*_A∥ = ∥T*_{A_1}∥. Moreover, the same holds even for any given permutation of the columns of A, e.g., if the columns of A_2 are both appended and prepended to A_1 [6].
As an immediate conclusion of the last theorem, we obtain the following assertion.

Theorem 24. There is a deterministic polynomial-time algorithm ExtBasis with the following properties: given an arbitrary A_2 ∈ Z^{n×m_2}_q whose columns generate the entire group Z^n_q, an arbitrary basis T_{A_2} ∈ Z^{m_2×m_2} of L⊥_q(A_2), and an arbitrary A_1 ∈ Z^{n×m_1}_q, it outputs a basis T_A of L⊥_q([A_1 | A_2]) such that ∥T*_A∥ = ∥T*_{A_2}∥.

Forward Security of Schemes with Evolving Private Key

Schemes with Evolving Private Key
Forward-secure signature schemes are based on schemes with an evolving private key, which are defined as a tuple of PPT algorithms Π = (G, KGen, KUpd, Sign, Vrfy), along with a message space M, fulfilling the following properties:

• Generation of system parameters: G is a PPT algorithm which, on input a security parameter value 1^n and a maximum number of time periods T, outputs the system parameters params.
• Key generation KGen is a PPT algorithm which, on input the system parameters params and with a maximum number of time periods T , outputs a public verification key pk with an initial secret signing key sk 0 for the initial period t = 0.
• Key update: KUpd is a PPT algorithm. It takes on input the secret key sk_t for a time period t < T − 1 and outputs the secret key sk_{t+1} for the subsequent period t + 1.
• Signing: Sign is a PPT algorithm which takes on input the current secret key sk_t and a message m ∈ M, and outputs a signature σ.
• Verification: Vrfy is a DPT algorithm that, on input a public key pk, a message m ∈ M, the proper time period t, and a (purported) signature σ, outputs a single bit b, where b = 1 means accepted and b = 0 means rejected.

Security Models of Schemes with Evolving Private Key
The presented security model is taken from [2]. Let A be an adversary, and assume that the system parameters have been generated and revealed to the adversary. Let us consider the following experiment Exp^{fu-cma}_{A,Π}:

1) Generate params ← G(1^n, T) and (sk_0, pk) ← KGen(params, T).
2) The adversary A is given pk and granted access to three oracles: the signing oracle Sign, the key update oracle KUpd, and the break-in oracle Break.
3) Set t := 0.
4) While t < T:
4.1. Sign: for the current secret key sk_t, the adversary A requests signatures on as many messages as it likes (analogously to euf-cma, this is denoted by A^{Sign_{sk_t}(·)}(pk)).
4.2. KUpd: on A's request, the key is updated, sk_{t+1} ← KUpd(sk_t), and t := t + 1.
4.3. If Break has been queried, break the while loop.
Break: if A intends to move to the forge phase, it launches Break. The experiment then records the break-in time t̄ = t and sends the current signing key sk_{t̄} to A. This oracle can be queried only once, and after it has been queried, the adversary can make no further queries to the key update or signing oracles.

5) Eventually, A outputs a forgery (t⋆, m⋆, σ⋆).
6) If t⋆ < t̄, Vrfy_pk(t⋆, m⋆, σ⋆) = 1, and the signing oracle Sign_{sk_{t⋆}} has never been queried on m⋆ within the time period t⋆, then output 1; otherwise output 0.

We refer to such an adversary as an fu-cma adversary. The advantage of A in attacking the scheme Π is defined as:

Adv^{fu-cma}_{Π,n}(A) = Pr[Exp^{fu-cma}_{A,Π} = 1].

A signature scheme is said to be forward-secure if no efficient adversary can succeed in the above experiment with non-negligible probability.
Definition 25. A signature scheme with an evolving private key Π = (G, KGen, KUpd, Sign, Vrfy) is called existentially forward unforgeable under a chosen-message attack, or simply forward-secure, if for all PPT adversaries A, the advantage Adv^{fu-cma}_{Π,n}(A) is a negligible function of n.

Construction of Forward-Secure Scheme
We start with a high-level description of the key-updating mechanism, which is based on some properties of binary trees. Let ℓ ∈ Z_{≥1} be the fixed height of a tree. The tree then consists of 2^ℓ leaves and, in our interpretation, each leaf is associated with a single period (and a secret key). Note that for every node there is a unique path joining the root with this node (in particular, with a leaf). Let us adopt the rule that, for a given node, 0 is assigned to its left branch and 1 to its right branch (see Fig. 1).
This implies that each such path is uniquely represented by a bit-string (t_{ℓ−1} ··· t_h)_2, where h ∈ [ℓ]_0 is the node height. Furthermore, this bit-string also uniquely indicates the node itself and, thus, we associate it with this node as its identifier. In particular, for a time period t ∈ [2^ℓ − 1]_0, the bit-string (t_{ℓ−1} ··· t_0)_2 expresses the unique path from the root to the leaf t, and simultaneously identifies t by its binary representation.
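The identifier of a period's leaf is simply its ℓ-bit binary expansion, read from the root down (a short Python sketch):

```python
def leaf_id(t, ell):
    """Bit-string t_{ell-1} ... t_0 identifying the path (0 = left, 1 = right)
    from the root to the leaf of period t in a tree of height ell."""
    return format(t, "0{}b".format(ell))

ids = [leaf_id(t, 3) for t in range(2 ** 3)]
```

For fixed width ℓ, lexicographic order of the identifiers coincides with the numeric order of the periods, which is what makes the stack-based traversal of future periods straightforward.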
Let A^{(t_j)}_j, for t_j ∈ [1]_0 and j ∈ [ℓ−1]_0, be chosen uniformly and independently from Z^{n×m}_q, and let (A_ℓ, T_{A_ℓ}) ← TrapGen(q, n, m). We define the node corresponding to the identifier (t_{ℓ−1} ··· t_h)_2 as the concatenated matrix Node(t_{ℓ−1} ··· t_h)_2 = [A_ℓ | A^{(t_{ℓ−1})}_{ℓ−1} | ··· | A^{(t_h)}_h]. Note that if P is the path connecting the root with a leaf t, then Leaf(t) = Node(t_{ℓ−1} ··· t_0)_2. As the matrices A^{(t_j)}_j, A_ℓ are assumed to be publicly known, anyone can create an arbitrary node. Further, note that any node Node(t_{ℓ−1} ··· t_h)_2 is a matrix over Z_q and, thus, it can be viewed as a parity-check matrix which generates a q-ary lattice L⊥_q(Node(t_{ℓ−1} ··· t_h)_2). The secret related to this lattice is its short basis, i.e., a trapdoor T_{Node(t_{ℓ−1}···t_h)_2}, which is also the secret associated with the node Node(t_{ℓ−1} ··· t_h)_2. Obviously, it is computationally hard to derive a trapdoor from a given node alone. However, it turns out that knowledge of the root's secret, along with inherent properties of the binary-tree structure, makes this computation efficient. The role of the tree's root is played by the matrix A_ℓ, whereas the associated secret is its trapdoor T_{A_ℓ}, which is also the secret master key of the scheme.
In order to get the secret related to a node Node(t_{ℓ−1} ··· t_h)_2, it suffices to run ExtBasis with the trapdoor of any ancestor of this node. The same idea can be applied iteratively to obtain this trapdoor directly from the root. The above implies that if there is a path connecting the root with a leaf which contains Node(t_{ℓ−1} ··· t_h)_2, then the trapdoor of this node suffices to derive the secrets of all leaves in its subtree. In addition, each secret key must consist of two components: one for making signatures, and one consisting of the minimal set of subtree roots that allows computing the leaves associated with all future periods (see Fig. 2). In this proposal, these nodes are stored in a stack whose height equals ℓ at most, and the stack itself is filled by using the StackFilling function defined by Algorithm 2 (see subsection 4.2).

Generation of System Parameters
Let n be the value of the security parameter. An efficient polynomial-time system parameter generator G, on input n and a binary tree height ℓ, outputs params = (q, m, η_min, k, r, T, h, H), where:
• q = poly(n), q ≥ 3, is a prime,
• m = ⌈6n lg q⌉,
• η_min is the required minimal entropy of the hash function h (necessary for satisfying cryptographic requirements),
• T = 2^ℓ is the number of periods.

Key Generation
Before we describe the initial key generation process, a very useful algorithm, StackFilling, is presented; it allows one to obtain a new secret key along with the minimal set of trapdoors required to derive the keys for the consecutive periods. This set is denoted by Stack and, technically speaking, it is a stack that stores pairs (T, h), where h is the height of the trapdoor T in the binary tree. Note that h varies in the range of 0 to ℓ and, in particular, ℓ is the height of the root.

Algorithm 2. StackFilling
With this ingredient in place, the KGen algorithm can be described, with the security parameter n acting as its input. Given a maximum number of time periods T, it outputs an initial secret key sk_0 along with a long-term public key pk. Once the set of parameters params has been generated, the formal definition of KGen is (Fig. 3):
1) Choose a matrix U uniformly at random from Z^{n×k}_q.
3) Launch the TrapGen(q, n, m) algorithm to get (A_ℓ, T_{A_ℓ}), where A_ℓ ∈ Z^{n×m}_q is a matrix and T_{A_ℓ} is a short basis of L⊥_q(A_ℓ), and update params ← (params, s_0), where s_0 is a parameter of the discrete Gaussian distribution. The public key contains (A_ℓ, U), and the initial secret key is sk_0 = (scp, Stack, 0).
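The minimal node set that Stack must retain — the subtree roots covering exactly the future leaves — can be computed from the current path. The sketch below is a Python model of this idea, not the literal pseudocode of StackFilling:

```python
def future_cover(t, ell):
    """Identifiers of the right siblings along the root-to-leaf(t) path.
    Their subtrees partition exactly the leaves t+1, ..., 2^ell - 1, which
    is the minimal node set needed for all future key updates."""
    bits = format(t, "0{}b".format(ell))
    return [bits[:h] + "1" for h, b in enumerate(bits) if b == "0"]

def leaves_under(prefix, ell):
    """Range of leaf indices in the subtree rooted at `prefix`."""
    lo = int(prefix, 2) << (ell - len(prefix))
    return range(lo, lo + (1 << (ell - len(prefix))))

cover = future_cover(2, 3)                    # leaf 2 has path "010"
covered = sorted(x for p in cover for x in leaves_under(p, 3))
```

Since the cover holds at most one node per level, the stack height never exceeds ℓ, matching the bound stated above.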

Key Update
KUpd takes as input the secret key sk_t associated with a period t < T − 1, and generates the secret key for the next period t + 1. The algorithm works as follows:

The secret key for the new period is of the form sk_{t+1} = (scp, Stack, t + 1).

Signing
The signing algorithm Sign takes as input the secret key sk_t associated with a period t < T and a message msg ∈ M, and outputs a signature. As explained in the previous section, secret keys consist of three components: scp, Stack, and t. Each of them plays a different role in the signing process. Namely, Stack stores the minimal amount of data required to update the secret key to the next period, while the signing component scp is needed only for making signatures within the current period.
To sign a message msg ∈ M using sk_t, the signing algorithm Sign performs the following steps:
1) Determine the matrix A_t associated with period t; note that A_t = Leaf(t), and scp ∈ sk_t has the form scp = T_{A_t}.
2) Run k-SampleISIS, with its input being the tuple (A_t, scp, s_0, U), in order to get a secret ephemeral key E_t.
3) Set the Gaussian parameters s_1 and s_2 (see subsection 4.6).
4) Sample a ← D^{(ℓ+1)m}_{s_2}, b ← D^k_{s_1}, and r ← {0, 1}^n.
5) Compute x_1 ← A_t a + Ub (mod q), x_2 ← H(msg, r), and derive σ_1 ← h(x_1, x_2).
6) Compute σ'_1 ← σ_1 + b (mod q) and σ_2 ← E_t σ'_1 + a (mod q); accept this pair with the probability prescribed by Theorem 9, otherwise restart from step 4 (rejection sampling).
7) Output the signature σ = (σ_1, σ_2, r).
Remark 26. The explanation of the estimates for s_1 and s_2 set in point 3) is given in subsection 4.6.

Verification
The verification algorithm Vrfy takes as input t, pk, msg, and σ = (σ_1, σ_2, r), and outputs one of two values: accepted or rejected. Namely, it computes x_1 ← A_t σ_2 − Uσ_1 (mod q) and x_2 ← H(msg, r); if σ_1 = h(x_1, x_2) and ∥σ_2∥ ≤ s_2 √((ℓ+1)m), then it outputs accepted, otherwise it returns rejected.

Correctness
In order to evaluate the correctness of the scheme, suppose that σ = (σ_1, σ_2, r) is a signature of msg for a period t. Recall that σ_1 = h(x_1, x_2), where x_1 = A_t a + Ub (mod q) and x_2 = H(msg, r), and that σ_2 = E_t σ'_1 + a (mod q), where σ'_1 = σ_1 + b (mod q). Then we have:

A_t σ_2 ≡ A_t E_t σ'_1 + A_t a ≡ U(σ_1 + b) + A_t a ≡ Uσ_1 + (Ub + A_t a) ≡ Uσ_1 + x_1 (mod q).

This means that x_1 ≡ A_t σ_2 − Uσ_1 (mod q), so the verifier recomputes x_1 correctly. Although the vectors σ'_1 and σ_2 come from the distributions D^k_{s_1,σ_1} and D^{(ℓ+1)m}_{s_2,E_t σ'_1}, respectively, the target distributions for them are D^k_{s_1} and D^{(ℓ+1)m}_{s_2}. We therefore apply rejection sampling: Theorem 9 shows that, for appropriately chosen values of M and s (steps 6-7 of Sign), the signer outputs values with probability approximately 1/M, and the output distribution is statistically close to one in which σ'_1 and σ_2 are sampled from the target distributions. Due to this fact, and following Lemma 3, the estimate ∥σ_2∥ ≤ s_2 √((ℓ+1)m) holds with overwhelming probability. Moreover, by Lemma 8, the conditions required by the rejection step hold with probability at least 1 − 2^{−100}. Since Theorem 9 requires the parameter s to be proportional to the norm of the center (Lemma 8 uses s = α∥c∥), s_1 and s_2 are chosen proportional to ∥σ_1∥ and ∥E_t σ'_1∥, respectively, with the corresponding constant M = exp(12/α + 1/(2α²)). Finally, in order to use SampleD (Theorem 4) for sampling from D^k_{s_1} and D^{(ℓ+1)m}_{s_2}, the identity matrices 1 are used, since we consider the lattices Z^k and Z^{(ℓ+1)m} with their respective (short) canonical bases.
On the other hand, b and E_t are taken from the distributions D^k_{s_1} and D^{k,U}_{q,A_t,s}, respectively. Therefore, from Theorem 4, we get, with overwhelming probability, the estimates (3) and (4). The estimates (3) and (4), together with Lemma 8, give the condition (5); similarly, conditions (5)–(6) and Lemma 8 imply (7). In consequence, by combining (1) with (6) and (2) with (7), the postulated estimates are obtained.

Security Proof
Here, we show that the considered key-evolving digital signature scheme is secure in terms of the forward-security model defined in Section 3. To this end, we prove that the ability to obtain a valid forgery leads to the ability to construct a non-trivial solution to the SIS problem. The proof itself exploits consequences of the general forking lemma and, due to this fact, it is conducted in the random oracle model.

Theorem 27. Let n be a value of the security parameter, ℓ ∈ Z_{>0}, and let Π = (G, KGen, KUpd, Sign, Vrfy) be the scheme considered in Section 4 with the associated message space M = {0, 1}*. If A is a fu-cma-adversary attacking Π in the random oracle model, which makes at most q_h queries to the random oracle, then, for β = (2s_2 + 2s_0 √r) √((ℓ+1)m), there exists a PPT algorithm B attacking the SIS_{q,n,(1+2ℓ)m,β} problem with the advantage stated in inequality (8) and a running time of O(time(A)).
Proof. Assume that A is an adversary attacking Π, and suppose that the parameters params of Π have been generated by G(1^n, ℓ) as described in subsection 4.1. We will build an algorithm B which uses A as a subroutine and which aims to attack the SIS_{q,n,(1+2ℓ)m,β} problem, where β = (2s_2 + 2s_0 √r) √((ℓ+1)m). To this end, let S ∈ Z_q^{n×(1+2ℓ)m} be a random matrix given to B. According to the model and Definition 12, the challenge here is to find a non-zero x ∈ Z^{(1+2ℓ)m} such that S x ≡ 0 (mod q) and ∥x∥ ≤ β.

Setup. The simulator sets the previously generated params (see also subsection 4.1) as the parameters. After this step, the algorithm B chooses a time frame t* uniformly at random. A collision-resistant hash function h ∈ params is modeled as a random oracle, and A is able to send at most q := q_h queries to this oracle. In order to appropriately simulate the oracle's random behavior, vectors w_1, w_2, ..., w_{q_h} are chosen uniformly at random from {w ∈ R^k | w_i ∈ {−1, 0, 1}, ∥w∥ ≤ √r}. Next, an ordered set W_h = {w_1, w_2, ..., w_{q_h}} is designed, where the ordering relation "≼" is defined as follows: w_i ≼ w_j if i < j. Let t* = (t*_{ℓ−1}, t*_{ℓ−2}, ..., t*_0)_2 be the binary representation of t*. B sets the public key pk in the manner shown below. Having done this, B puts the matrices A_i accordingly.

Remark 28. Although the matrices A_i generated by TrapGen are not truly random, Theorem 11 shows that they are (m · q^{−n/6})-uniform over Z_q^{n×m}. According to the assumed form of m (subsection 4.1), we get m · q^{−n/6} → 0 as n → ∞, and this convergence is very fast. Moreover, note that for any c ∈ Z_{>0}, we have (n^c m) · q^{−n/6} → 0 as n → ∞. Therefore, there exists a positive n_0 such that (n^c m) · q^{−n/6} < 1, and equivalently m · q^{−n/6} < n^{−c}, for n > n_0. This means that m · q^{−n/6} = negl(n), i.e. the matrices A_i are chosen from a distribution whose statistical distance to the uniform one is negligible.
Further, B chooses ε ∈ R_{>0}; the best estimation is obtained when ε is a tiny number. It puts d = s_0 √((1+ℓ)m), where the Gaussian distribution parameter s_0 is chosen in such a way that the following two conditions hold: s_0 ≥ ∥T̃_max∥ · ω(√(lg((ℓ+1)m))), … Finally, B updates params ← (params, s_0) and sends params along with pk to the adversary A.

h-Query. In this phase, adversary A makes hash queries. B prepares a hash list L to record all queries and responses as follows: 1) At the beginning, list L is empty; 2) Let (x_1, x_2) ∈ Z_q^n × {0,1}^n be the k-th query to h: a) If A has already asked about (x_1, x_2), then list L contains the pair ((x_1, x_2), h(x_1, x_2)) and, in this case, B outputs h(x_1, x_2); b) Otherwise, the first element w ∈ W_h which has not yet been used is taken (i.e. if w' ∈ W_h is such that w' ≼ w, then w' has already been used), the pair ((x_1, x_2), w) is appended to the list L, and h(x_1, x_2) = w is given as the output.
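The h-Query bookkeeping can be sketched directly. In the toy Python below the class and helper names are hypothetical, and k = 8, r = 5, q_h = 16 are illustrative; the answers w_1 ≼ w_2 ≼ … are drawn in advance, the list L guarantees that a repeated query receives the previously recorded answer, and each fresh query consumes the next unused w:

```python
import random

def sample_w(k, r):
    # one vector w in {-1,0,1}^k with ||w||^2 <= r (simple rejection loop)
    while True:
        w = tuple(random.choice((-1, 0, 1)) for _ in range(k))
        if sum(x * x for x in w) <= r:
            return w

class SimulatedOracle:
    def __init__(self, k, r, q_h):
        self.W = [sample_w(k, r) for _ in range(q_h)]  # ordered set W_h
        self.used = 0                                  # first not-yet-used index
        self.L = {}                                    # hash list L

    def h(self, x1, x2):
        key = (x1, x2)
        if key not in self.L:          # case b): consume the next w w.r.t. the order
            self.L[key] = self.W[self.used]
            self.used += 1
        return self.L[key]             # case a): repeated query, same answer

random.seed(7)
oracle = SimulatedOracle(k=8, r=5, q_h=16)
a1 = oracle.h((1, 2), "m0")
a2 = oracle.h((1, 2), "m0")   # repeated query: same answer, no slot consumed
a3 = oracle.h((3, 4), "m1")   # fresh query: next w in order
```

Pre-drawing W_h is what later lets the reduction "rewind" A and replace only the suffix w_i, …, w_{q_h} in the forking step.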
Queries. According to the security model given by Exp^{fu-cma}_{A,Π} (subsection 3.2), adversary A has access to three oracles: the signing oracle, the key-update oracle, and the break-in oracle.

Key update KUpd. Let t ≠ t*; then there exists at least one index i_0 such that t_{i_0} ≠ t*_{i_0}.

Signing Sign. In order to simulate a signature on a given message msg, B acts depending on the value of the time frame.
• If t ≠ t*, then B runs k-SampleISIS, which takes as input the tuple (T_{A_t}, A_t, s_0, U) and outputs an ephemeral key E_t from the distribution D^{k,U}_{q,A_t,s}. Having done this, a signature is affixed to msg as described in subsection 4.4.

Remark 30. It must be emphasized that if t = t*, then E_{t*} ← E* is not generated by k-SampleISIS, which raises the question of whether B properly simulates the signing algorithm of the scheme. Fortunately, due to the use of rejection sampling, the distribution of σ_2 is statistically close to D^{(ℓ+1)m}_{s_2} and, in consequence, σ_2 is independent of E*. Therefore, the simulation conducted by B, as far as the choice of ephemeral keys E is concerned, is indistinguishable from any real instantiation of Sign.
Break in Break. If the adversary A queries the break-in oracle, the current time period t is saved and the adversary is given the proper secret key sk_t (associated with t). This key consists of two components, namely scp and Stack. In order to generate scp, B proceeds in the same way as in KUpd.
When it comes to the Stack component, B launches StackFillingID (Algorithm 3) which, based on an initially empty stack stackID and t, returns the filled-in stackID consisting of the identifiers of the nodes from Stack.
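One plausible reading of StackFillingID can be sketched as follows. This is a sketch under assumptions, since Algorithm 3 is not reproduced here: a node identifier is taken to be the binary prefix describing the path from the root, and Stack is assumed to hold the right sibling of every left edge on the path from the root to Leaf(t):

```python
def stack_filling_id(t, ell):
    # Assumed convention: a node is identified by its binary prefix; the stack
    # collects the right sibling of every left edge on the path root -> Leaf(t).
    bits = format(t, "0{}b".format(ell))
    stack, prefix = [], ""
    for b in bits:
        if b == "0":                 # path goes left: keep the right sibling
            stack.append(prefix + "1")
        prefix += b
    return stack

# For t = 0 the stack holds exactly ell identifiers: ["1", "01", "001"] for ell = 3,
# which is consistent with KGen's remark that Stack consists of ell elements at t = 0.
```

Under this convention the stacked subtrees cover exactly the periods after t, which is what forward security needs: nothing on the stack helps recompute a key for a period ≤ t.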

After that, B is able to derive the nodes corresponding to these identifiers. At first, it takes an identifier i_0; such an i_0 exists, since there is no node in Stack that lies on the branch linking the root with Leaf(t*). B knows the corresponding pair, hence it runs ExtBasis in order to obtain the required basis.

Forgery. By Remarks 28, 29, and 30, B is statistically indistinguishable from a real challenger in the experiment Exp^{fu-cma}_{A,Π} for the considered scheme, denoted herein by Π. So, for A, the interaction with B is the same as while conducting the real experiment. Therefore, the adversary A eventually outputs a forgery for some time frame; if it differs from t*, B aborts. Otherwise, B accepts the forgery, which is of the form (t*, msg*, σ*). Let w_i ∈ W_h be such that σ*_1 = w_i. Recall that the answers to the successive h-queries are taken successively from W_h, in accordance with the "≼" relation. This means that just before A sent the h-query for the value of σ*_1, all of the vectors w_j ≼ w_i had already been used. In order to take advantage of the general forking lemma (Lemma 1), B picks vectors w̃_i, w̃_{i+1}, ..., w̃_{q_h} independently and uniformly at random from {w ∈ R^k | w_i ∈ {−1, 0, 1}, ∥w∥ ≤ √r} and modifies W_h in such a way that the vectors w_1, w_2, ..., w_{i−1} are kept, whereas the q_h − i + 1 remaining vectors are replaced by the newly generated vectors w̃.
After this update, the set of answers to h-queries is of the following form: W̃_h = {w_1, ..., w_{i−1}, w̃_i, w̃_{i+1}, ..., w̃_{q_h}}.
Further, B runs the subroutine A again with the same parameters (params) and the same random tape ρ as in the first run, but it uses W̃_h instead of W_h to answer A's h-queries. By Lemma 1, A outputs a second forgery involving H(msg̃*, r̃*). In addition, the probability P_1 that the forking succeeds with w̃_i ≠ w_i is bounded as in (9), where τ = Adv^{fu-cma}_{Π,n}(A). Before asking the i-th h-query, B uses the same inputs, the random tape ρ, and the answers w_1, ..., w_{i−1} to generate A's inputs, random tape, and responses to h-queries. This implies that the two executions of A are identical up to the i-th h-query which, in turn, means that the arguments of both i-th h-queries must be the same. Therefore, by (10), if x_0 ≠ 0, then it is a solution of the SIS_{q,n,(1+ℓ)m,β} problem A_{t*} x_0 ≡ 0 (mod q) with ∥x_0∥ ≤ β.

This means that the probability that x_0 ≠ 0 still remains to be estimated. To this end, a matrix E'* may be created such that all its columns, except for the column j_0, are the same as those of E*, i.e. E'* = {e'_j}, where e'_j = E*[j] for j ≠ j_0, and e'_{j_0} = e'. Now, one may see that, with this definition of E'*, condition (12) holds. In other words, we have shown that for every matrix E* satisfying (11), with probability at least 1 − 2^{−101}, there exists another matrix E'* which differs from E* only in column j_0, such that (12) is satisfied and A_{t*} E* = A_{t*} E'*. This, in turn, means that the likelihood of choosing between E* and E'* is at least 1/2. Obviously, both these matrices are statistically indistinguishable to the adversary A.
Eventually, B outputs the vector x. Note that S · x = A_{t*} x_0 ≡ 0 (mod q), where ∥x_0∥ ≤ β = (2s_2 + 2s_0 √r) √((ℓ+1)m). This leads to the conclusion that the probability of getting a solution to SIS_{q,n,(1+2ℓ)m,β} is the same as the probability of the event that x ≠ 0. From inequalities (9) and (13), and the fact that the probability of B not aborting (in either run) is exactly 1/T², we obtain (8), where τ = Adv^{fu-cma}_{Π,n}(A). This finishes the proof.
Remark 31. It is easy to see that the left-hand side of inequality (8) can be approximated by Adv^{fu-cma}_{Π,n}(A)² / (2 q_h T²). This shows that Theorem 27 can be reformulated in such a way that if A is a fu-cma-adversary attacking Π in the random oracle model, which makes at most q_h queries to the random oracle, then there exists a PPT algorithm B attacking the SIS_{q,n,(1+2ℓ)m,β} problem with the following advantage:

Parameters
Table 1 summarizes the parameters of the proposed forward-secure digital signature scheme. The three independent values n, k, q need to be chosen in such a way as to guarantee that the SIS problem is computationally infeasible. The basic idea of solving the SIS problem is to define a random q-ary lattice L⊥_q(A) and use lattice reduction algorithms to find short vectors in this lattice. Paper [21] shows that the length of the vector obtained by running the best known algorithms on a random m-dimensional q-ary lattice L⊥_q(A) is close to min{q, (det L⊥_q(A))^{1/m} · δ^m} = min{q, q^{n/m} · δ^m}, where the equality holds with overwhelming probability. The parameter δ, called the Hermite factor, depends on the quality of the lattice-reduction algorithm being used. It is conjectured in [22] that δ = 1.007 may be outside our reach for the foreseeable future.
It is worth noting that although article [22] was published in 2011, the research still seems to be valid. Based on the results from [22], Micciancio and Regev observed in [23] that, since 2^{2√(n lg q lg δ)} is the minimum of the function m ↦ q^{n/m} · δ^m, attained at m = √(n lg q / lg δ), lattice reduction algorithms can output the shortest vectors of L⊥_q(A) when m ≈ √(n lg q / lg δ). For lower m, the lattice is too sparse and does not contain short enough vectors. For larger values of m, the high dimension prevents lattice reduction algorithms from finding short vectors. To sum up, the authors concluded that the length of the shortest vector one can find in L⊥_q(A) for a random A ∈ Z_q^{n×m} using lattice reduction algorithms is at least min{q, 2^{2√(n lg q lg δ)}}.
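This estimate is easy to reproduce numerically. The sketch below uses illustrative values (n = 256, q = 2^23, δ = 1.007, not the scheme's recommended parameters) and checks the closed form 2^{2√(n lg q lg δ)} against a direct evaluation of m ↦ q^{n/m}·δ^m at and around the optimal sub-dimension:

```python
import math

def shortest_vec_estimate(n, q, delta):
    # optimal sub-dimension sqrt(n lg q / lg delta) and the value
    # min(q, 2^(2*sqrt(n lg q lg delta))) of the Micciancio-Regev estimate
    lgq, lgd = math.log2(q), math.log2(delta)
    m_opt = math.sqrt(n * lgq / lgd)
    return m_opt, min(q, 2.0 ** (2.0 * math.sqrt(n * lgq * lgd)))

def f(m, n, q, delta):
    # shortest length reachable in an m-dimensional random q-ary lattice
    return (q ** (n / m)) * (delta ** m)

n, q, delta = 256, 2 ** 23, 1.007       # illustrative parameters
m_opt, est = shortest_vec_estimate(n, q, delta)
```

For these values m_opt ≈ 765 and the estimate is far below q, so the `min` is attained by the closed form; moving m away from m_opt in either direction only increases the reachable length, matching the "too sparse / too high-dimensional" discussion above.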
We wish to emphasize that s_0 depends on T_{A_ℓ} and is chosen after getting (A_ℓ, T_{A_ℓ}) ← TrapGen(q, n). As far as the parameters s_1 and s_2 are concerned, two options are available: they may either be set together with s_0, or they may play the role of one-time parameters and be renewed while creating each new signature. The same remark applies to the parameter d, which depends on s_0.
This can be a bit confusing since, due to the definition of m, the value of d ought to be known in advance, before setting the value of m. To be more precise, in addition to the primary assumption concerning m, according to which it must be at least ⌈6n lg q⌉, Lemma 17 enforces m > (1/(ℓ+1)) · (24 + (n lg q)/lg(2d+1)). However, the latter bound can exceed ⌈6n lg q⌉ only for very small n, q, and d, meaning that, in real instances of the scheme, we always have max{⌈6n lg q⌉, (1/(ℓ+1)) · (24 + (n lg q)/lg(2d+1))} = ⌈6n lg q⌉. Therefore, the dependence of m on d is only of theoretical importance and can be neglected while determining a set of parameters.
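For concreteness, the claim can be checked numerically. In the sketch below the values n = 256, q = 2^23, d = 10^4, ℓ = 16 are illustrative, and the second bound is the reconstructed expression (1/(ℓ+1))·(24 + n lg q / lg(2d+1)) from the paragraph above:

```python
import math

def m_lower_bounds(n, q, d, ell):
    # primary assumption on m versus the bound enforced by Lemma 17
    primary = math.ceil(6 * n * math.log2(q))                        # ceil(6 n lg q)
    lemma17 = (24 + n * math.log2(q) / math.log2(2 * d + 1)) / (ell + 1)
    return primary, lemma17

primary, lemma17 = m_lower_bounds(n=256, q=2 ** 23, d=10 ** 4, ell=16)
```

Here `primary` is 35328 while the Lemma 17 bound is below 30, so the maximum of the two is the primary bound, as stated.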

and it contains exactly one representative x of every coset x + L. For a lattice L having basis B, a commonly used fundamental domain is the origin-centered fundamental parallelepiped P(B) = B · [−1/2, 1/2)^m, where a coset x + L has the representative x − B · ⌊B^{−1} · x⌉. The measure of the fundamental parallelepiped can be easily computed as vol P(B) = (det(B · B^T))^{1/2}.
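The representative map can be demonstrated with exact rational arithmetic. A minimal 2-dimensional sketch (the basis B below is an arbitrary example, not taken from the scheme):

```python
from fractions import Fraction
import math

def coset_representative(B, x):
    # x - B*round(B^{-1} x): the representative of x + L in P(B) = B*[-1/2, 1/2)^2
    (a, b), (c, d) = B
    det = a * d - b * c
    y0 = Fraction(d * x[0] - b * x[1], det)    # B^{-1} x, computed via the adjugate
    y1 = Fraction(-c * x[0] + a * x[1], det)
    k0 = math.floor(y0 + Fraction(1, 2))       # nearest-integer rounding
    k1 = math.floor(y1 + Fraction(1, 2))
    return (x[0] - (a * k0 + b * k1), x[1] - (c * k0 + d * k1)), (k0, k1)

B = ((3, 1), (1, 4))                 # det B = 11, so vol P(B) = 11
rep, k = coset_representative(B, (10, -7))
```

The returned `rep` lies in P(B) (its coordinates in the basis B fall in [−1/2, 1/2)), and `rep` differs from the input by the lattice vector B·k, so it represents the same coset; note also that (det(B·Bᵀ))^{1/2} = |det B| here, matching the volume formula.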

In order to generate a representative function with a trapdoor, the TrapGen algorithm from Theorem 11 is used. It outputs A ∈ Z_q^{n×m} together with a short trapdoor basis T_A.

Fig. 1. Visualization of some basic ideas standing behind the scheme.

Fig. 2. Content of a secret key associated with a period t.

5) Create an empty stack Stack, and next invoke Stack.push((T_{A_ℓ}, ℓ)) in order to initialize the stack with (T_{A_ℓ}, ℓ); 6) Launch the StackFilling(params, Stack, t = 0) algorithm to obtain (scp, Stack) (after this step, Stack consists of ℓ elements); 7) KGen outputs the initial secret key sk_0 = (scp, Stack, t = 0) and the public key pk.

…then it is obvious that t_i = t*_i for every i > i_0. Since the adversary B knows the corresponding matrix, it can run ExtBasis as described above. In the second run, A outputs a new forgery (t̃*, msg̃*, σ̃* = (σ̃*_1, σ̃*_2, r̃*)) using the same h-queries. If t̃* ≠ t*, B aborts. Otherwise, B accepts the forgery, which means that ∥σ̃*_2∥ ≤ s_2 √((ℓ+1)m).