1 Introduction

In this paper, we consider a harmonizable operator scaling \(\alpha \)-stable random sheet as introduced in [11]. The main idea is to combine the properties of operator scaling \(\alpha \)-stable random fields and fractional Brownian sheets in order to obtain a more general class of random fields. Let us recall that a scalar-valued random field \(\{ X(x) : x \in {\mathbb {R}}^d \}\) is said to be operator scaling for some matrix \(E \in {\mathbb {R}}^{d \times d}\) and some \(H>0\) if

$$\begin{aligned} \{ X( c^Ex) : x \in {\mathbb {R}}^d \} {\mathop {=}\limits ^\mathrm{f.d.}} \{ c^H X(x) : x \in {\mathbb {R}}^d \} \quad \text {for all } c>0, \end{aligned}$$
(1.1)

where \({\mathop {=}\limits ^\mathrm{f.d.}}\) means equality of all finite-dimensional marginal distributions and, as usual, \(c^E = \sum _{k=0}^{\infty } \frac{(\log c)^k}{k!} E^k\) is the matrix exponential. These fields can be regarded as an anisotropic generalization of self-similar random fields (see, e.g., [8]), whereas the fractional Brownian sheet \(\{ B_{H_1, \ldots , H_d} (x) : x \in {\mathbb {R}}^d \}\) with Hurst indices \(0<H_1, \ldots , H_d <1\) can be seen as an anisotropic generalization of the well-known fractional Brownian field (see, e.g., [13]) and satisfies the scaling property

$$\begin{aligned}&\{ B_{H_1, \ldots , H_d} (c_1x_1, \ldots , c_dx_d) : x =(x_1, \ldots , x_d) \in {\mathbb {R}}^d \} \\&\quad {\mathop {=}\limits ^\mathrm{f.d.}} \{ c_1^{H_1} \ldots c_d^{H_d} B_{H_1, \ldots , H_d} (x) : x \in {\mathbb {R}}^d \} \end{aligned}$$

for all constants \(c_1, \ldots , c_d >0\). See [3, 10, 27] and the references therein for more information on the fractional Brownian sheet.

Throughout this paper, let \(d= \sum _{j=1}^m d_j\) for some \(m \in {\mathbb {N}}\) and let \({\tilde{E}}_j \in {\mathbb {R}}^{d_j \times d_j}\), \(j=1, \ldots , m\), be matrices whose eigenvalues have positive real parts. We define matrices \(E_1, \ldots , E_m \in {\mathbb {R}}^{d \times d}\) as

$$\begin{aligned} E_j = \left( \begin{array}{ccccc} 0 &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} \\ &{} &{} {\tilde{E}}_j &{} &{} \\ &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} 0 \end{array} \right) . \end{aligned}$$

Further, we define the block diagonal matrix \(E \in {\mathbb {R}}^{d \times d}\) as

$$\begin{aligned} E = \sum _{j=1}^m E_j = \left( \begin{array}{ccc} {\tilde{E}}_1 &{} &{} 0 \\ &{} \ddots &{} \\ 0 &{} &{} {\tilde{E}}_m \end{array} \right) . \end{aligned}$$

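In code, the matrices \(E_j\) and E can be assembled from block diagonals. The following minimal sketch (with hypothetical blocks \({\tilde{E}}_1, {\tilde{E}}_2\) whose eigenvalues have positive real parts) also verifies the factorization \(c^E = c^{E_1} c^{E_2}\), which holds since \(E_1 E_2 = E_2 E_1 = 0\).

```python
import numpy as np
from scipy.linalg import block_diag, expm

# hypothetical blocks with d_1 = 2, d_2 = 1
Et1 = np.array([[1.0, -0.5],
                [0.5,  1.0]])            # eigenvalues 1 +/- 0.5i
Et2 = np.array([[0.7]])

E1 = block_diag(Et1, np.zeros((1, 1)))   # \tilde{E}_1 in the first diagonal slot
E2 = block_diag(np.zeros((2, 2)), Et2)   # \tilde{E}_2 in the second diagonal slot
E = E1 + E2                              # the block diagonal matrix E

# E_1 and E_2 commute (their product is the zero matrix), so c^E factors
c = 3.0
assert np.allclose(expm(np.log(c) * E),
                   expm(np.log(c) * E1) @ expm(np.log(c) * E2))
```
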
In analogy to the terminology in [11, Definition 1.1.1], a random field \(\{ X(x) : x \in {\mathbb {R}}^d \}\) is called operator scaling stable random sheet if for some \(H_1, \ldots , H_m >0\) we have

$$\begin{aligned} \{ X( c^{E_j}x) : x \in {\mathbb {R}}^d \} {\mathop {=}\limits ^\mathrm{f.d.}} \{ c^{H_j} X(x) : x \in {\mathbb {R}}^d \} \end{aligned}$$
(1.2)

for all \(c>0\) and \(j =1, \ldots , m\). Note that, by applying (1.2) iteratively, any operator scaling stable random sheet is also operator scaling for the matrix E and the exponent \(H= \sum _{j=1}^m H_j\) in the sense of (1.1). Further, note that this definition is indeed a generalization of operator scaling random fields, since for \(m=1\), \(d =d_1\) and \(E=E_1 = {\tilde{E}}_1\), (1.2) coincides with the definition introduced in [4]. Another example of a random field satisfying (1.2) is given by the fractional Brownian sheet, with \(d_j=1\) and \({\tilde{E}}_j=1\) for \(j =1, \ldots , m\) in this case. Operator scaling stable random sheets have proven to be quite flexible in modeling physical phenomena and can be applied in order to extend the well-known Cahn–Hilliard phase-field model. See [1] and the references therein for more information.
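
To spell out the iteration: since \(E_i E_j = 0\) for \(i \ne j\), the matrices \(E_1, \ldots , E_m\) commute and \(c^E = c^{E_1} \cdots c^{E_m}\) for every \(c>0\), so that applying (1.2) once for each factor yields

$$\begin{aligned} \{ X( c^{E}x) : x \in {\mathbb {R}}^d \} {\mathop {=}\limits ^\mathrm{f.d.}} \{ c^{H_1} X( c^{E_2} \cdots c^{E_m} x) : x \in {\mathbb {R}}^d \} {\mathop {=}\limits ^\mathrm{f.d.}} \cdots {\mathop {=}\limits ^\mathrm{f.d.}} \{ c^{H_1 + \cdots + H_m} X(x) : x \in {\mathbb {R}}^d \} , \end{aligned}$$

which is (1.1) with \(H = \sum _{j=1}^m H_j\).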

Random fields satisfying a scaling property such as (1.1) or (1.2) are very popular in modeling; see [14, 22] and the references in [5] for some applications. Most of these fields are Gaussian. However, Gaussian fields are not always flexible enough, for example in modeling heavy-tailed phenomena. For this purpose, \(\alpha \)-stable random fields have been introduced. See [17] for a good introduction to \(\alpha \)-stable random fields.

Using a moving average and a harmonizable representation, the authors in [4] defined and analyzed two different classes of symmetric \(\alpha \)-stable random fields satisfying (1.1). Following the outline in [4, 5], these two classes were generalized to random fields satisfying (1.2) in [11]. The fields constructed in [4] have stationary increments, i.e., they satisfy

$$\begin{aligned} \{X( x+h) - X(h) : x \in {\mathbb {R}}^d \} {\mathop {=}\limits ^\mathrm{f.d.}} \{X( x) : x \in {\mathbb {R}}^d \} \quad \text {for all } h \in {\mathbb {R}}^d . \end{aligned}$$

This property has proven to be quite useful in studying sample path properties. However, stationary increments no longer hold for the fields constructed in [11]. The absence of this property is one of the main difficulties we face in establishing results about their sample paths.

Another main tool in studying sample paths of operator scaling stable random sheets is the use of polar coordinates with respect to the matrices \(E_j, j=1, \ldots , m\), introduced in [16] and used in [4, 5]. If \(\{X( x) : x \in {\mathbb {R}}^d \}\) is an operator scaling symmetric \(\alpha \)-stable random sheet with \(\alpha = 2\), using (1.2), one can write the variance of \(X( x), x \in {\mathbb {R}}^d\), as

$$\begin{aligned} {\mathbb {E}} [X^2 (x) ] = \tau _{E} (x) ^{2H} {\mathbb {E}} [X^2 \big ( l_{E} (x) \big ) ], \end{aligned}$$

where \(H= \sum _{j=1}^m H_j\), \(\tau _{E} (x)\) is the radial part of x with respect to E and \(l_{E} (x)\) is its polar part. Therefore, if the random field has stationary increments, in the Gaussian case the behavior of the polar coordinates \(\big ( \tau _{{\tilde{E}}_j} (x), l_{{\tilde{E}}_j} (x) \big )\) carries information about the sample path regularity. This remains true in the stable case \(\alpha \in (0,2)\). Moreover, it also remains true for operator scaling random sheets which do not have stationary increments but satisfy a slightly weaker property, see Corollary 3.3 below.

This paper is organized as follows. In Sect. 2, we introduce the main tools we need for the study in this paper. Section 2.1 is devoted to a spectral decomposition result from [16]. Section 2.2 deals with the change to polar coordinates with respect to scaling matrices, and in Lemma 2.2 below we establish a relation between the radius \(\tau _{E} (x)\) and the radii \(\tau _{{\tilde{E}}_j} (x)\), \(1 \le j \le m\). In Sect. 3, we present the results in [11] about the existence of harmonizable and moving average representations of operator scaling \(\alpha \)-stable random sheets. Here, we will only focus on the harmonizable representation. Moreover, we prove that these random sheets satisfy a generalized modulus of continuity, which is deduced by showing that the results in [5, 6] are applicable. Based on this, and generalizing a combination of methods used in [2, 4, 5, 24], in Sect. 4 we present our results on the Hausdorff dimension and box-counting dimension of the graph of harmonizable operator scaling stable random sheets.

2 Preliminaries

2.1 Spectral Decomposition

Let \(A \in {\mathbb {R}}^{d \times d}\) be a matrix whose eigenvalues have \(p\) distinct positive real parts \(0< a_1< \cdots < a_p\) for some \(p \le d\). Factor the minimal polynomial of A into \(f_1, \ldots , f_p\), where all roots of \(f_i\) have real part equal to \(a_i\), and define \(V_{i}={\text {Ker}}\big ( f_{i}(A) \big )\). Then, by [16, Theorem 2.1.14],

$$\begin{aligned} {{\mathbb {R}}^d}=V_{1}\oplus \cdots \oplus V_{p} \end{aligned}$$

is a direct sum decomposition, i.e. we can write any \(x \in {\mathbb {R}}^d\) uniquely as

$$\begin{aligned} x = x_1 + \cdots + x_p \end{aligned}$$

for \(x_i \in V_i\), \(1 \le i \le p\). Further, we can choose an inner product on \({{\mathbb {R}}^d}\) such that the subspaces \(V_1, \ldots , V_p\) are mutually orthogonal. Throughout this paper, for any \(x \in {{\mathbb {R}}^d}\) we will choose \( \Vert x \Vert = \langle x,x\rangle ^{1/2}\) as the corresponding Euclidean norm. In view of our methods this will entail no loss of generality, since all norms are equivalent.
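
Numerically, the subspaces \(V_i\) can be obtained by grouping eigenvectors of A according to the real parts of their eigenvalues. The following is a minimal sketch (assuming, for simplicity, that A is diagonalizable; the function name spectral_subspaces and the tolerance are our own illustrative choices and are not taken from [16]).

```python
import numpy as np

def spectral_subspaces(A, tol=1e-9):
    """Group the eigenvectors of A by the real parts of their eigenvalues.

    Returns the distinct real parts a_1 < ... < a_p and orthonormal bases of
    the corresponding subspaces V_1, ..., V_p (A is assumed diagonalizable).
    """
    vals, vecs = np.linalg.eig(A)
    parts = np.unique(np.round(vals.real, 8))
    bases = []
    for a in parts:
        cols = vecs[:, np.abs(vals.real - a) < tol]
        # real basis: stack real and imaginary parts, then orthonormalize
        B = np.hstack([cols.real, cols.imag])
        U, s, _ = np.linalg.svd(B, full_matrices=False)
        bases.append(U[:, s > tol])
    return parts, bases
```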

2.2 Polar Coordinates

We now recall the results about the change to polar coordinates used in [4, 5]. As before, let \(A \in {\mathbb {R}}^{d \times d}\) be a matrix whose eigenvalues have distinct positive real parts \(0< a_1< \cdots < a_p\) for some \(p \le d\). According to [4, Sect. 2], there exists a norm \(\Vert \cdot \Vert _A\) on \({{\mathbb {R}}^d}\) such that for the unit sphere \(S_A = \{ x \in {{\mathbb {R}}^d}: \Vert x \Vert _A = 1 \}\) the mapping \(\Psi _A : (0, \infty ) \times S_A \rightarrow {{\mathbb {R}}^d}\setminus \{0 \}\) defined by \(\Psi _A (r, \theta ) = r^A \theta \) is a homeomorphism. To be more precise, the norm \(\Vert \cdot \Vert _A\) is defined by

$$\begin{aligned} \Vert x \Vert _A = \int _0^1 \Vert t^A x\Vert \frac{\mathrm{d}t}{t}, \quad x \in {{\mathbb {R}}^d}. \end{aligned}$$
(2.1)

Thus, we can write any \(x \in {{\mathbb {R}}^d}\setminus \{0 \}\) uniquely as

$$\begin{aligned} x = \tau _A (x)^A l_A (x) , \end{aligned}$$
(2.2)

where \(\tau _A (x) >0\) is called the radial part of x with respect to A and \(l_A(x) \in \{ y \in {{\mathbb {R}}^d}: \tau _A (y) = 1 \}\) is called the direction. It is clear that \(\tau _A (x) \rightarrow \infty \) as \(\Vert x\Vert \rightarrow \infty \) and \(\tau _A (x) \rightarrow 0\) as \(\Vert x\Vert \rightarrow 0\). Further, one can extend \(\tau _A( \cdot )\) continuously to \({{\mathbb {R}}^d}\) by setting \(\tau _A(0) = 0\). Note that, by (2.2), it is straightforward to see that \(\tau _A( \cdot )\) satisfies

$$\begin{aligned} \tau _A( c^A x) = c \cdot \tau _A(x) \quad \text {for all }\, c>0. \end{aligned}$$

Such functions are called A-homogeneous.
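
For concreteness, the norm (2.1) and the radial part \(\tau _A\) can be evaluated numerically. The following is a minimal sketch (the helper names and the bracketing interval in brentq are our own choices; the quadrature is crude near \(t=0\), where the integrand behaves like \(t^{a_1 - 1}\)).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad
from scipy.optimize import brentq

def mat_pow(A, t):
    # t^A = exp((log t) A)
    return expm(np.log(t) * A)

def norm_A(x, A):
    # ||x||_A = int_0^1 ||t^A x|| dt / t, cf. (2.1)
    return quad(lambda t: np.linalg.norm(mat_pow(A, t) @ x) / t, 0.0, 1.0)[0]

def tau_A(x, A, lo=1e-6, hi=1e6):
    # radial part: the unique r > 0 with ||r^{-A} x||_A = 1, cf. (2.2)
    return brentq(lambda r: norm_A(mat_pow(A, 1.0 / r) @ x, A) - 1.0, lo, hi)

A = np.array([[0.7, 0.0],
              [0.0, 1.3]])
x = np.array([0.3, -1.2])
c = 2.0
# A-homogeneity: tau_A(c^A x) = c * tau_A(x)
print(tau_A(mat_pow(A, c) @ x, A), c * tau_A(x, A))
```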

Let us recall a result about bounds on the growth rate of \(\tau _A( \cdot )\) in terms of \(a_1, \ldots , a_p\) established in [4, Lemma 2.1].

Lemma 2.1

Let \(\varepsilon >0\) be small enough. Then, there exist constants \(K_1, \ldots , K_4 >0\) such that

$$\begin{aligned} K_1 \Vert x \Vert ^{\frac{1}{a_1}+\varepsilon } \le \tau _A(x) \le K_2 \Vert x \Vert ^{\frac{1}{a_p}-\varepsilon } \end{aligned}$$

for all x with \(\tau _A(x) \le 1\), and

$$\begin{aligned} K_3 \Vert x \Vert ^{\frac{1}{a_p}-\varepsilon } \le \tau _A(x) \le K_4 \Vert x \Vert ^{\frac{1}{a_1}+\varepsilon } \end{aligned}$$

for all x with \(\tau _A(x) \ge 1.\)

We remark that the bounds on the growth rate of \(\tau _A (\cdot )\) have been improved in [5, Proposition 3.3], but the bounds given in Lemma 2.1 suffice for our purposes.

The following lemma will be needed in the next section in order to give an upper bound on the modulus of continuity.

Lemma 2.2

Let \(E, {\tilde{E}}_1, \ldots , {\tilde{E}}_m\) be as above. Then, there exists a constant \(C \ge 1\) such that

$$\begin{aligned} C^{-1} \sum _{j=1}^m \tau _{{\tilde{E}}_j} (x_j) \le \tau _E(x) \le C \sum _{j=1}^m \tau _{{\tilde{E}}_j} (x_j) \end{aligned}$$

for any \(x = (x_1, \ldots , x_m ) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\).

Proof

Let \({\mathbb {R}}^{{\bar{d}}_j} := \{ 0\} \times \cdots \times \{0\} \times {\mathbb {R}}^{d_j} \times \{0\} \times \cdots \times \{0\} \subset {{\mathbb {R}}^d}\), \(1 \le j \le m\), be subspaces and note that

$$\begin{aligned} {{\mathbb {R}}^d}= {\mathbb {R}}^{{\bar{d}}_1} \oplus \cdots \oplus {\mathbb {R}}^{{\bar{d}}_m} \end{aligned}$$

is a direct sum decomposition with respect to E. Throughout, write \(x = (x_1, \ldots , x_m ) = {\bar{x}}_1 + \cdots + {\bar{x}}_m\) with respect to this decomposition. From [15, Lemma 2.2], we have for some \(c \ge 1\)

$$\begin{aligned} \frac{1}{c} \sum _{i=1}^m \tau _E ({\bar{x}}_i) \le \tau _E(x) \le c \sum _{i=1}^m \tau _E ({\bar{x}}_i). \end{aligned}$$

It remains to prove \(\tau _E ({\bar{x}}_i) = \tau _{{\tilde{E}}_i} (x_i)\) for \(1 \le i \le m\). Without loss of generality, assume \(i=1\), and for simplicity let us assume in this proof that \(m=2\). Thus, for any vector \(x\in {{\mathbb {R}}^d}\) we write \(x = (x_1, x_2) \in {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}.\) Note that by definition

$$\begin{aligned} (x_1,0)&= \tau _E (x_1,0)^E l_E(x_1,0) = \Big ( \tau _E (x_1,0)^{{\tilde{E}}_1} l_E(x_1,0)_1, \tau _E (x_1,0)^{{\tilde{E}}_2} l_E(x_1,0)_2 \Big ) \\&= \Big ( \tau _E (x_1,0)^{{\tilde{E}}_1} l_E(x_1,0)_1, 0 \Big ) , \end{aligned}$$

where we used the notation \(l_E(x) = \big ( l_E(x)_1, l_E(x)_2 \big ) \in {\mathbb {R}}^{d_1} \times {\mathbb {R}}^{d_2}.\) But on the other hand one can write

$$\begin{aligned} x_1 = \tau _{{\tilde{E}}_1} (x_1)^{{\tilde{E}}_1} l_{{\tilde{E}}_1} (x_1) \end{aligned}$$

yielding that

$$\begin{aligned} \tau _{{\tilde{E}}_1} (x_1)^{{\tilde{E}}_1} l_{{\tilde{E}}_1} (x_1) = \tau _E (x_1,0)^{{\tilde{E}}_1} l_E(x_1,0)_1. \end{aligned}$$

Further noting that

$$\begin{aligned} l_E(x_1,0) = \big ( l_E(x_1,0)_1, l_E(x_1,0)_2 \big ) = \big ( l_E(x_1,0)_1, 0 \big ) \end{aligned}$$

and taking into account the definition of the norm \(\Vert \cdot \Vert _{{\tilde{E}}_1}\) given in (2.1) we obtain

$$\begin{aligned} \Vert l_{{\tilde{E}}_1} (x_1) \Vert _{{\tilde{E}}_1}&= 1 = \Vert \Big ( l_{E} (x_1,0)_1 , 0 \Big ) \Vert _{E} = \int _0^1 \Vert t^E \Big ( l_{E} (x_1,0)_1 , 0 \Big ) \Vert \frac{\mathrm{d}t}{t} \\&= \int _0^1 \Vert t^{{\tilde{E}}_1} l_{E} (x_1,0)_1 \Vert \frac{\mathrm{d}t}{t} = \Vert l_{E} (x_1,0)_1 \Vert _{{\tilde{E}}_1} . \end{aligned}$$

Thus, by the uniqueness of the representation we have \(\tau _{{\tilde{E}}_1} (x_1) = \tau _{E} (x_1,0)\) and \(l_{{\tilde{E}}_1} (x_1) = l_{E} (x_1,0)_1\) as desired. This concludes the proof. \(\square \)
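
As a quick numerical sanity check of Lemma 2.2 (reusing tau_A from the sketch in Sect. 2.2, so this snippet is not self-contained; the blocks and test points are arbitrary), the ratio of the two sides stays within a fixed band \([C^{-1}, C]\):

```python
import numpy as np
from scipy.linalg import block_diag

Et1 = np.array([[0.9, 0.0],
                [0.0, 1.1]])
Et2 = np.array([[1.5]])
E = block_diag(Et1, Et2)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(3)
    # Lemma 2.2: tau_E(x) is comparable to tau_{E1}(x_1) + tau_{E2}(x_2)
    print(tau_A(x, E) / (tau_A(x[:2], Et1) + tau_A(x[2:], Et2)))
```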

Corollary 2.3

Let \(E, {\tilde{E}}_1, \ldots , {\tilde{E}}_m\) be as above. Then, there exists a constant \(C \ge 1\) such that

$$\begin{aligned} C^{-1} \sum _{j=1}^m \tau _{{\tilde{E}}_j} (x_j)^H \le \tau _E(x)^H \end{aligned}$$

for any \(H>0\) and \(x = (x_1, \ldots , x_m ) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\).
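
For the reader's convenience, let us note that Corollary 2.3 follows from the lower bound in Lemma 2.2 combined with the elementary inequality

$$\begin{aligned} \Big ( \sum _{j=1}^m u_j \Big )^H \ge \min (1, m^{H-1}) \sum _{j=1}^m u_j^H , \quad u_1, \ldots , u_m \ge 0 , \end{aligned}$$

which for \(H \ge 1\) is the superadditivity of \(t \mapsto t^H\) and for \(0<H<1\) follows from its concavity.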

3 Harmonizable Operator Scaling Random Sheets

We consider harmonizable operator scaling stable random sheets defined in [11] and present some related results established in [11]. Most of these will also follow from the results derived in [4, 5]. Throughout this paper, for \(j =1, \ldots , m\) assume that the real parts of the eigenvalues of \({\tilde{E}}_j\) are given by \(0< a_1^j< \cdots < a_{p_j}^j\) for some \(p_j \le d_j\). Let \(q_j = {{\,\mathrm{trace}\,}}({\tilde{E}}_j ) \). Suppose that \(\psi _j : {\mathbb {R}}^{d_j} \rightarrow [0, \infty ) \) are continuous \({\tilde{E}}_j^T\)-homogeneous functions, which means according to [4, Definition 2.6] that

$$\begin{aligned} \psi _j ( c^{{\tilde{E}}_j^T} x) = c \psi _j (x) \quad \text {for all } c>0. \end{aligned}$$

Moreover, we assume that \(\psi _j(x) \ne 0\) for \(x \ne 0\). See [4, 5] for various examples of such functions.
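
For instance, in the spirit of the examples in [4], if \({\tilde{E}}_j\) is the diagonal matrix with entries \(a_1, \ldots , a_{d_j} > 0\), then

$$\begin{aligned} \psi _j (\xi ) = \sum _{i=1}^{d_j} | \xi _i |^{1/a_i} , \quad \xi = (\xi _1, \ldots , \xi _{d_j}) \in {\mathbb {R}}^{d_j} , \end{aligned}$$

is continuous, \({\tilde{E}}_j^T\)-homogeneous and vanishes only at \(\xi = 0\), since \(c^{{\tilde{E}}_j^T} \xi = (c^{a_1} \xi _1, \ldots , c^{a_{d_j}} \xi _{d_j})\).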

Let \(0 < \alpha \le 2\) and let \(W_{\alpha } (d \xi )\) be a complex isotropic symmetric \(\alpha \)-stable random measure on \({{\mathbb {R}}^d}\) with Lebesgue control measure (see [17, Chapter 6.3]).

Theorem 3.1

For any vector \(x \in {{\mathbb {R}}^d}\) let \(x= (x_1, \ldots , x_m) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\). The random field

$$\begin{aligned} X_{\alpha } (x) = {\text {Re}} \int _{{{\mathbb {R}}^d}} \prod _{j=1}^m ( e^{i \langle x_j, \xi _j \rangle } -1 ) \psi _j (\xi _j) ^{-H_j - \frac{q_j}{\alpha }} W_{\alpha } (d \xi ) , \quad x \in {{\mathbb {R}}^d}\end{aligned}$$
(3.1)

exists and is stochastically continuous if and only if \(H_j \in (0, a_1^j)\) for all \(j=1, \ldots , m\).

Proof

This result has been proven in detail in [11], but it also follows as an easy consequence of [4, Theorem 4.1]. By the definition of stable integrals (see [17]), \(X_\alpha (x)\) exists if and only if

$$\begin{aligned} \Gamma _\alpha (x) = \int _{{{\mathbb {R}}^d}} \prod _{j=1}^m | e^{i \langle x_j, \xi _j \rangle } -1 | ^\alpha \psi _j (\xi _j) ^{-\alpha H_j - q_j} d \xi < \infty , \end{aligned}$$

but this is equivalent to

$$\begin{aligned} \Gamma ^j_\alpha (x) = \int _{{\mathbb {R}}^{d_j}} | e^{i \langle x_j, \xi _j \rangle } -1 | ^\alpha \psi _j (\xi _j) ^{-\alpha H_j - q_j} d \xi _j < \infty , \end{aligned}$$

for all \(j=1, \ldots , m\). Since it is shown in [4, Theorem 4.1] that \(\Gamma _\alpha ^j (x)\) is finite if and only if \(H_j \in (0, a_1^j)\), the statement follows; see [11] for details. The stochastic continuity can be deduced similarly as a consequence of [4, Theorem 4.1]. \(\square \)

Note that from (3.1) it follows that \(X_\alpha (x) = 0\) for all \(x= (x_1, \ldots , x_m) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\) such that \(x_j = 0\) for at least one \(j \in \{ 1, \ldots , m \}\), since the factor \(e^{i \langle x_j, \xi _j \rangle } -1\) in the integrand then vanishes identically.

The following result has been established in [11, Corollary 4.2.1]. The proof is carried out as the proof of [4, Corollary 4.2 (a)] via characteristic functions of stable integrals and by noting that \(c^{E_j} x = (x_1, \ldots , x_{j-1}, c^{{\tilde{E}}_j} x_j, x_{j+1}, \ldots , x_m )\) for all \(c>0\) and \(x = (x_1, \ldots , x_m ) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\).

Corollary 3.2

Under the conditions of Theorem 3.1, the random field \(\{ X_\alpha (x) : x \in {{\mathbb {R}}^d}\}\) is operator scaling in the sense of (1.2), that is, for any \(c>0\)

$$\begin{aligned} \{ X (c^{E_j} x) : x \in {{\mathbb {R}}^d}\} {\mathop {=}\limits ^\mathrm{f.d.}} \{ c^{H_j} X(x) : x \in {{\mathbb {R}}^d}\}. \end{aligned}$$
(3.2)

As we shall see below, fractional Brownian sheets fall into the class of random fields given by (3.1). It is known that a fractional Brownian sheet does not have stationary increments. Thus, in general, a random field given by (3.1) does not possess stationary increments. But it satisfies a slightly weaker property, as the following statement shows.

Corollary 3.3

Under the conditions of Theorem 3.1, for any \(h \in {\mathbb {R}}^{d_j}\), \(j =1, \ldots , m\)

$$\begin{aligned}&\{ X_\alpha (x_1, \ldots , x_{j-1}, x_j +h, x_{j+1}, \ldots , x_m ) - X_\alpha (x_1, \ldots , x_{j-1}, h, x_{j+1}, \ldots , x_m ) : x \in {{\mathbb {R}}^d}\} \\&\qquad {\mathop {=}\limits ^\mathrm{f.d.}} \{ X_\alpha (x ) : x \in {{\mathbb {R}}^d}\}, \end{aligned}$$

where we used the notation \(x = (x_1, \ldots , x_m ) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\).

Proof

This result has been established in [11, Corollary 4.2.2] and is proven similarly to [4, Corollary 4.2 (b)]. \(\square \)

As an easy consequence of the results in this paper, we will derive global Hölder critical exponents of the random fields defined in (3.1). Following [7, Definition 5], \(\beta \in (0,1)\) is said to be the Hölder critical exponent of the random field \(\{ X(x) : x \in {{\mathbb {R}}^d}\}\), if there exists a modification \(X^*\) of X such that for any \(s \in (0, \beta )\) the sample paths of \(X^*\) satisfy almost surely a uniform Hölder condition of order s on any compact set \(I \subset {{\mathbb {R}}^d}\), i.e., there exists a positive and finite random variable Z such that almost surely

$$\begin{aligned} | X^*(x) - X^*(y) | \le Z \Vert x - y \Vert ^s \quad \text {for all } x,y \in I, \end{aligned}$$
(3.3)

whereas, for any \(s \in ( \beta ,1)\), (3.3) almost surely fails.
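
A classical example: fractional Brownian motion with Hurst index \(H \in (0,1)\) admits \(H\) as its Hölder critical exponent.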

Let us now state the main result of this section. Note that, under the assumption \(H_j < a_1^j\), up to considering the matrices \({\bar{E}}_j = \frac{E_j}{H_j}\) instead of \(E_j\) in (1.2), \(1 \le j \le m\), and with the observation that

$$\begin{aligned} c_1^j \tau _{{\tilde{E}}_j} (x_j)^{H_j} \le \tau _{\frac{{\tilde{E}}_j}{H_j}} (x_j) \le c_2^j \tau _{{\tilde{E}}_j} (x_j)^{H_j}, \quad \forall x_j \in {\mathbb {R}}^{d_j} \end{aligned}$$

for some positive and finite constants \(c_1^j, c_2^j\), as noted in [6, Remark 5.1], we may and will assume without loss of generality that \(H_j=1<a_1^j\) in the proof of the following statement. We make this assumption purely for notational convenience.

Proposition 3.4

Under the above assumptions, and assuming that \(H_j=1\) or, equivalently, \(a_1^j>1\) for \(j=1, \ldots , m\), there exists a modification \(X_\alpha ^*\) of the random field in (3.1) such that for any \(\varepsilon >0\) and any non-empty compact set \(G_d \subset {{\mathbb {R}}^d}\)

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_\alpha ^*(x)-X_\alpha ^*(y)|}{\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{H_j} \big [ \log \big ( 1 +\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{-1} \big ) \big ]^{\frac{1}{2}} } < \infty \quad a.s. \end{aligned}$$

if \(\alpha =2\) and

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_\alpha ^*(x)-X_\alpha ^*(y)|}{\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{H_j} \big [ \log \big ( 1 +\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{-1} \big ) \big ]^{\varepsilon +\frac{1}{2} + \frac{1}{\alpha }} } < \infty \quad a.s. \end{aligned}$$

if \(\alpha \in (0,2)\), where we used the notation \(x=(x_1, \ldots , x_m) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m} = {{\mathbb {R}}^d}\). In particular, for any exponents \(0<\gamma _j <H_j\), \(j=1, \ldots , m\), and all \(x=(x_1, \ldots , x_m), y=(y_1, \ldots , y_m) \in G_d\) one can find a positive and finite constant C such that

$$\begin{aligned} |X_\alpha ^*(x)-X_\alpha ^*(y)| \le C \sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{\gamma _j} \end{aligned}$$
(3.4)

holds almost surely.

Proof

Let us first assume that \(\alpha =2\). In the following let \(\Vert \cdot \Vert _p\) denote the p-norm for \(p \ge 1\), c an unspecified positive constant, \(G_d \subset {{\mathbb {R}}^d}\) an arbitrary compact set, \(r>0\) and \(B_E (r) = \{ x \in {{\mathbb {R}}^d}: \tau _E (x) \le r\}\). Moreover, by

$$\begin{aligned} d_X(x,y) = {\mathbb {E}} [| X_2 (x) - X_2 (y) |^2]^{\frac{1}{2}}, \quad x,y \in {{\mathbb {R}}^d}, \end{aligned}$$

we denote the canonical metric associated to \(X_2\). We first show for \(x,y \in G_d\) that

$$\begin{aligned} d_X(x,y) \le c \tau _E(x-y) . \end{aligned}$$
(3.5)

By the equivalence of norms one can find a constant c such that

$$\begin{aligned} \frac{1}{c} \sum _{i=1}^m |u_i|^2 \le \Big ( \sum _{i=1}^m |u_i| \Big ) ^2 \le c \sum _{i=1}^m |u_i|^2 \end{aligned}$$

for any \(u \in {\mathbb {R}}^m\). Further let us remark that by definition the variance of the centered Gaussian random variable \(X_2(x)\) in (3.1) is given by

$$\begin{aligned} \Gamma ^2 (x) = {\mathbb {E}} [ X_2(x)^2] = c \int _{{\mathbb {R}}^d}\prod _{j=1}^m |e^{i \langle x_j, \xi _j \rangle }-1|^2 \psi _j(\xi _j)^{-2-q_j} d\xi . \end{aligned}$$

Note that for all \(1\le j \le m\) and \(x=(x_1, \ldots , x_m) \in G_d\) one can find a constant \(0<M (x)< \infty \) such that

$$\begin{aligned} \Gamma ^2 (x_1, \ldots , x_{j-1}, \theta , x_{j+1}, \ldots , x_m) \le M (x) \le \max _{x \in G_d} M(x) =: M \in (0, \infty ), \end{aligned}$$

where \(\theta \in {{\mathbb {R}}}^{d_j}\) with \(\tau _{{\tilde{E}}_j} (\theta ) = 1\). Using all this and the elementary inequality

$$\begin{aligned}&| X_2 (x) - X_2 (y) | \\&\quad \le \sum _{i=1}^m | X_2 (x_1, \ldots , x_{i-1}, x_i, y_{i+1}, \ldots , y_m) - X_2 (x_1, \ldots , x_{i-1}, y_i, y_{i+1}, \ldots , y_m) | \end{aligned}$$

with the convention that

$$\begin{aligned} X_2 (x_1, \ldots , x_{i-1}, y_i, y_{i+1}, \ldots , y_m) = X_2(y) \end{aligned}$$

for \(i=1\) and

$$\begin{aligned} X_2 (x_1, \ldots , x_{i-1}, x_i, y_{i+1}, \ldots , y_m) = X_2(x) \end{aligned}$$

for \(i=m\) we get for all \(x= (x_1, \ldots , x_m)\) and \(y= (y_1, \ldots , y_m) \in G_d\)

$$\begin{aligned}&{\mathbb {E}} [| X_2 (x) - X_2 (y) |^2] \\&\quad \le {\mathbb {E}} \Big [ \Big ( \sum _{i=1}^m | X_2 (x_1, \ldots , x_{i-1}, x_i, y_{i+1}, \ldots , y_m) - X_2 (x_1, \ldots , x_{i-1}, y_i, y_{i+1}, \ldots , y_m) | \Big ) ^2 \Big ] \\&\quad \le c {\mathbb {E}} \Big [ \sum _{i=1}^m | X_2 (x_1, \ldots , x_{i-1}, x_i, y_{i+1}, \ldots , y_m) - X_2 (x_1, \ldots , x_{i-1}, y_i, y_{i+1}, \ldots , y_m) | ^2 \Big ] \\&\quad = c {\mathbb {E}} \Big [ \Big ( \sum _{i=1}^m | X_2 (x_1, \ldots , x_{i-1}, x_i-y_i, y_{i+1}, \ldots , y_m) | \Big ) ^2 \Big ], \end{aligned}$$

where we used Corollary 3.3 in the equality and the equivalence of norms in the last inequality. Using the operator scaling property and the generalized polar coordinates for \(x_i-y_i\), we can further bound the last expression from above by

$$\begin{aligned}&c \sum _{i=1}^m \tau _{{\tilde{E}}_i} (x_i-y_i)^2 {\mathbb {E}} \Big [ | X_2 (x_1, \ldots , x_{i-1}, l_{{\tilde{E}}_i} (x_i-y_i), y_{i+1}, \ldots , y_m) | ^2 \Big ] \\&\quad \le cM \sum _{i=1}^m \tau _{{\tilde{E}}_i} (x_i-y_i)^2 \le cM \tau _E(x-y)^2, \end{aligned}$$

where we used Corollary 2.3 with \(H=2\) in the last inequality, which proves (3.5). Now define an auxiliary Gaussian random field \( Y = \{ Y(t,s) : t \in G_d , s \in B_E(r)\}\) by

$$\begin{aligned} Y(t,s) = X_2 (t+s) - X_2 (t), \quad t \in G_d , s \in B_E(r), \end{aligned}$$

where \(r>0\) is such that \( B_E(r) \subset G_d\). Denote by D the diameter of \(G_d \times B_E(r)\) in the metric \(d_Y\) associated with Y. Then, using (3.5), it is easy to see that \(D \le c r \) for some positive constant c. Using the latter inequality and arguing as in the proof of [15, Theorem 4.2], if \(N(\varepsilon )\) denotes the smallest number of open \(d_Y\)-balls of radius \(\varepsilon >0\) needed to cover \(G_d \times B_E(r)\), we obtain that

$$\begin{aligned} \int _0^D \sqrt{ \log N(\varepsilon ) } d\varepsilon \le cr \sqrt{ \log (1+r^{-1}) } . \end{aligned}$$

Then, it follows from [21, Lemma 2.1] that for all \(u \ge 2cr \sqrt{ \log (1+r^{-1}) }\)

$$\begin{aligned} P \Big ( \sup _{(t,s) \in G_d \times B_E(r)} | X_2 (t+s) - X_2(t) | \ge u \Big ) \le \exp \Big ( - \frac{u^2}{4D^2} \Big ). \end{aligned}$$

Therefore, by a standard Borel-Cantelli argument we conclude

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_2^*(x)-X_2^*(y)|}{\tau _E(x-y) \sqrt{ \log \big ( 1+ \tau _E(x-y) ^{-1} \big ) } }< \infty \quad a.s. \end{aligned}$$
(3.6)

for a continuous modification \(X_2^*\) of \(X_2\), which by Lemma 2.2 is equivalent to

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_2^*(x)-X_2^*(y)|}{\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{H_j} \big [ \log \big ( 1 +\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{-1} \big ) \big ]^{\frac{1}{2}} } < \infty \quad a.s. \end{aligned}$$

Let us now assume that \(\alpha \in (0,2)\). In this case, the proof is a slight modification and extension of the proof of [6, Proposition 5.1] and the idea is to check the assumptions (i), (ii) and (iii) of Proposition 4.3 of the latter reference. Throughout this proof, we let c be a universal unspecified positive and finite constant and in the following let

$$\begin{aligned} f_\alpha (u, \xi ) = \prod _{j=1}^m ( e^{i \langle u_j, \xi _j \rangle } -1 ) \psi _\alpha (\xi ) \end{aligned}$$

with

$$\begin{aligned} \psi _\alpha (\xi )= \prod _{j=1}^m \psi _j (\xi _j)^{-1-\frac{q_j}{\alpha }}. \end{aligned}$$

As in [6, Example 5.1], one checks that for all \(\xi \in {{\mathbb {R}}^d}\) with \(\xi _j \ne 0\), \(1 \le j \le m\),

$$\begin{aligned} \psi _\alpha (\xi ) \le c\prod _{j=1}^m \tau _{{\tilde{E}}^T_j} (\xi _j)^{-1-\frac{q_j}{\alpha }} \end{aligned}$$
(3.7)

and, in particular, there exist constants \(A_j >0\), \(1\le j \le m\), such that (3.7) holds for all \(\Vert \xi _j\Vert > A_j\). For \(\zeta >0\) chosen arbitrarily small, we consider the function \({\tilde{\mu }}\) on \({{\mathbb {R}}^d}\) given by \({\tilde{\mu }} (\xi ) = \prod _{j=1}^m {\tilde{\mu }}_j (\xi _j)\) with

$$\begin{aligned} {\tilde{\mu }}_j (\xi _j) = \Big ( \Vert \xi _j\Vert + 1\Big )^\alpha \mathbb {1}_{ \big \{ \Vert \xi _j\Vert \le A_j \big \} } + \tau _{{\tilde{E}}^T_j} (\xi _j)^{-q_j} |\log \tau _{{\tilde{E}}^T_j} (\xi _j)|^{-1-\zeta } \mathbb {1}_{ \big \{ \Vert \xi _j\Vert > A_j \big \} }. \end{aligned}$$

We observe that \({\tilde{\mu }}\) is positive on \({{\mathbb {R}}^d}\setminus \{ 0\}\) and, similarly to the calculations made in the proof of [6, Proposition 5.1], we obtain that

$$\begin{aligned} \int _{{\mathbb {R}}^{d_j}} {\tilde{\mu }}_j (\xi _j) d\xi _j = c \in (0, \infty ), \quad \forall 1\le j \le m. \end{aligned}$$

Define \(\mu _j = \frac{{\tilde{\mu }}_j}{c}\). Moreover, note that

$$\begin{aligned} \int _{{\mathbb {R}}^{d}} {\tilde{\mu }} (\xi ) d\xi = c \in (0, \infty ). \end{aligned}$$

Hence, \(\mu = \frac{{\tilde{\mu }}}{c}\) is a well-defined probability density and now, as in the proof of [6, Proposition 5.1], we are going to check the assumptions (i), (ii) and (iii) of [6, Proposition 4.3] for

$$\begin{aligned} V_1 (u) = f_\alpha (u, \Xi ) \mu ( \Xi ) ^{-\frac{1}{\alpha }}, \end{aligned}$$

where \(u \in {{\mathbb {R}}^d}\) and \(\Xi \) is assumed to be a random vector on \({{\mathbb {R}}^d}\) with density \(\mu \).

We choose a constant \(c \in (0, \infty )\) such that

$$\begin{aligned} \Big | \prod _{j=1}^m (e^{i \langle x_j, \xi _j \rangle } -1) - \prod _{j=1}^m (e^{i \langle y_j, \xi _j \rangle } -1) \Big | \le \prod _{j=1}^m c \Vert \tau _E(x-y)^{{\tilde{E}}_j^T} \xi _j \Vert _{{\tilde{E}}_j^T}. \end{aligned}$$

Note that such a choice is possible. Then, it follows that

$$\begin{aligned} |V_1 (x) - V_1(y) | \le |\psi _\alpha (\Xi )| \prod _{j=1}^m \min \Big ( c \Vert \rho (x,y)^{{\tilde{E}}^T_j} \Xi _j \Vert _{{\tilde{E}}^T_j} ,1 \Big ) \end{aligned}$$

for the quasi-metric \(\rho \) on \({\mathbb {R}}^{d}\) defined by

$$\begin{aligned} \rho (x,y)= \tau _{E} (x-y), \quad \forall x , y \in {\mathbb {R}}^{d}. \end{aligned}$$

Hence, we have

$$\begin{aligned} |V_1 (x) - V_1(y) | \le g \Big ( \rho (x,y), \Xi \Big ) \end{aligned}$$

with g defined by

$$\begin{aligned} g(h, \xi ) = |\psi _\alpha (\xi )| \prod _{j=1}^m \min \Big ( c \Vert h^{{\tilde{E}}^T_j} \xi _j \Vert _{{\tilde{E}}^T_j} ,1 \Big ) , \end{aligned}$$

so that we precisely recover assumption (i) in [6, Proposition 4.3] for the random field \({\mathcal {G}} = \big ( g(h, \xi ) \big ) _{h \in [0, \infty )}\). Moreover, assumption (ii) immediately follows as in the proof of [6, Proposition 5.1] from the definition of the norms \(\Vert \cdot \Vert _{{\tilde{E}}^T_j}\) and by noting that the product of monotonic functions again is monotonic. It remains to prove assumption (iii) in [6, Proposition 4.3]. To this end we write

$$\begin{aligned} I(h)&:= {\mathbb {E}} [ {\mathcal {G}} ^2(h)] = \int _{{\mathbb {R}}^d}g(h, \xi )^2 \mu (\xi )^{1-\frac{2}{\alpha }} d\xi \\&= \prod _{j=1}^m \int _{{\mathbb {R}}^{d_j}} \min \Big ( c \Vert h^{{\tilde{E}}^T_j} \xi _j \Vert _{{\tilde{E}}^T_j} , 1 \Big ) ^2 \psi _j (\xi _j)^{-2-2\frac{q_j}{\alpha }} \mu _j(\xi _j)^{1-\frac{2}{\alpha }} d\xi _j. \end{aligned}$$

Using inequality (3.7) and arguing as in the calculations made in the proof of [6, Proposition 5.1], we obtain that

$$\begin{aligned} \int _{{\mathbb {R}}^{d_j}} \min \Big ( c \Vert h^{{\tilde{E}}^T_j} \xi _j \Vert _{{\tilde{E}}^T_j} , 1 \Big ) ^2 \psi _j (\xi _j)^{-2-2\frac{q_j}{\alpha }} \mu _j(\xi _j)^{1-\frac{2}{\alpha }} d\xi _j \le c h^{2} | \log (h)| ^{2 (1+\zeta ) (\frac{1}{\alpha } - \frac{1}{2})} \end{aligned}$$

which yields that

$$\begin{aligned} I(h)&= \prod _{j=1}^m \int _{{\mathbb {R}}^{d_j}} \min \Big ( c \Vert h^{{\tilde{E}}^T_j} \xi _j \Vert _{{\tilde{E}}^T_j} , 1 \Big ) ^2 \psi _j (\xi _j)^{-2-2\frac{q_j}{\alpha }} \mu _j(\xi _j)^{1-\frac{2}{\alpha }} d\xi _j\\&\le c h^{2 m} | \log (h)| ^{2m (1+\zeta ) (\frac{1}{\alpha } - \frac{1}{2})}, \end{aligned}$$

so that assumption (iii) in [6, Proposition 4.3] with pm instead of p is fulfilled. Following the lines of the proof of [6, Proposition 5.1], we obtain that there exists a modification \(X_\alpha ^*\) of \(X_\alpha \) such that

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_\alpha ^*(x)-X_\alpha ^*(y)|}{\tau _{E}(x-y) \big [ \log \big ( 1 +\tau _{E}(x-y)^{-1} \big ) \big ]^{\varepsilon +\frac{1}{2} + \frac{1}{\alpha }} } < \infty \quad a.s. \end{aligned}$$

for any \(\varepsilon >0\) and any non-empty compact set \(G_d \subset {{\mathbb {R}}^d}\), which by Lemma 2.2 is equivalent to

$$\begin{aligned} \sup _{\begin{array}{c} x,y \in G_d\\ x \ne y \end{array}} \frac{|X_\alpha ^*(x)-X_\alpha ^*(y)|}{\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j) \big [ \log \big ( 1 +\sum _{j=1}^m\tau _{{\tilde{E}}_j}(x_j-y_j)^{-1} \big ) \big ]^{\varepsilon +\frac{1}{2} + \frac{1}{\alpha }} } < \infty \quad a.s. \end{aligned}$$

This completes the proof. \(\square \)

Corollary 3.5

Under the assumptions of Theorem 3.1, there exist a positive and finite random variable Z and a continuous modification of \(X_\alpha \) such that for any \(s \in (0, \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j} )\) the uniform Hölder condition (3.3) holds almost surely.
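
Let us briefly indicate how Corollary 3.5 is obtained from Proposition 3.4: given \(s < \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\), choose \(\gamma _j \in (0, H_j)\) and \(\varepsilon >0\) with \(\gamma _j \big ( \frac{1}{a_{p_j}^j} - \varepsilon \big ) \ge s\) for every j. Then (3.4) and the upper bound of Lemma 2.1 yield, for \(x,y \in I\) with \(\Vert x-y \Vert \) sufficiently small,

$$\begin{aligned} |X_\alpha ^*(x)-X_\alpha ^*(y)| \le C \sum _{j=1}^m \tau _{{\tilde{E}}_j}(x_j-y_j)^{\gamma _j} \le C' \sum _{j=1}^m \Vert x_j-y_j \Vert ^{\gamma _j ( \frac{1}{a_{p_j}^j} - \varepsilon )} \le C'' \Vert x-y \Vert ^{s} , \end{aligned}$$

while pairs with \(\Vert x-y \Vert \) bounded away from zero are covered by the boundedness of the continuous modification on the compact set I.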

We remark that Corollary 3.5 is not a statement about critical Hölder exponents. However, as a consequence of Theorem 4.1 below, we will see that any continuous version of \(X_\alpha \) admits \(\min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\) as its Hölder critical exponent.

4 Hausdorff Dimension

We now state our main result on the Hausdorff and box-counting dimension of the graph of \(X_\alpha \) defined in (3.1). In the following, for a set \(B \subset {{\mathbb {R}}^d}\) we denote by \(\underline{\dim _{{\mathcal {B}}}} B\), \(\overline{\dim _{{\mathcal {B}}}} B\) and \(\dim _{{\mathcal {H}}} B\) its lower, upper box-counting and Hausdorff dimension, respectively. We refer to [9] for a definition of these objects.

Theorem 4.1

Suppose that the conditions of Theorem 3.1 hold. Then, for any continuous version of \(X_\alpha \), almost surely

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } \big ( [0,1]^d \big ) = \dim _{{\mathcal {B}}} G_{X_\alpha } \big ( [0,1]^d \big ) = d+1 - \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}, \end{aligned}$$
(4.1)

where

$$\begin{aligned} G_{X_\alpha } \big ( [0,1]^d \big ) = \Big \{ \big ( x, X_\alpha (x) \big ) : x \in [0,1]^d \Big \} \end{aligned}$$

is the graph of \(X_\alpha \) over \([0,1]^d\).
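
For instance, in the fractional Brownian sheet setting of Remark 4.4 below with \(d=2\), Hurst indices \(H_1 = \frac{1}{4}\), \(H_2 = \frac{3}{4}\) and \(a_{p_j}^j = 1\), formula (4.1) gives \(\dim _{{\mathcal {H}}} G_{X_2} \big ( [0,1]^2 \big ) = 3 - \frac{1}{4} = \frac{11}{4}\) almost surely.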

Proof

By Corollary 3.5, we may choose a continuous version of \(X_\alpha \) whose sample paths, for any \(0< s < \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\), almost surely satisfy a uniform Hölder condition of order s on \([0,1]^d\). Thus, by a d-dimensional version of [9, Corollary 11.2], we have

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } \big ( [0,1]^d \big ) \le \overline{\dim _{{\mathcal {B}}}} G_{X_\alpha } \big ( [0,1]^d \big ) \le d+1 - s, \quad a.s. \end{aligned}$$

Letting \(s \uparrow \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\) along rational numbers yields the upper bound in (4.1).

It remains to prove the lower bound in (4.1). Since the inequality

$$\begin{aligned} \underline{\dim _{{\mathcal {B}}}} B\ge \dim _{{\mathcal {H}}} B \end{aligned}$$

holds for every \(B \subset {{\mathbb {R}}^d}\) (see [9, Chapter 3.1]), it suffices to show

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } \big ( [0,1]^d \big ) \ge d+1 - \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}, \quad a.s. \end{aligned}$$

Further, note that, since \(Q = [ \frac{1}{2} , 1]^d \subset [0,1]^d\), we have

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } \big ( [0,1]^d \big ) \ge \dim _{{\mathcal {H}}} G_{X_\alpha } (Q) \end{aligned}$$

by monotonicity of the Hausdorff dimension. Thus, it is even enough to show that

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } (Q) \ge d+1 - \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}, \quad a.s. \end{aligned}$$
(4.2)

We will show this by combining the methods used in [2, 4, 5]. From now on, without loss of generality, we will assume that

$$\begin{aligned} \min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j} = \frac{H_1}{a_{p_1}^1} . \end{aligned}$$

Let \(\gamma > 1\). Following the argument in [4, Theorem 5.6], in view of the Frostman criterion [9, Theorem 4.13 (a)], it suffices to show that

$$\begin{aligned} I_\gamma := \int _{ Q \times Q } {\mathbb {E}} \Big [ \big ( \Vert x-y \Vert ^2 + |X_\alpha (x) - X_\alpha (y)|^2 \big ) ^{-\frac{\gamma }{2}} \Big ] dx dy < \infty \end{aligned}$$

in order to obtain \(\dim _{{\mathcal {H}}} G_{X_\alpha } (Q) \ge \gamma \) almost surely.

Using the characteristic function of the symmetric \(\alpha \)-stable random field \(X_\alpha \), as in the proof of [5, Proposition 5.7], it can be shown that there is a constant \(C_1 >0\) such that

$$\begin{aligned} I_\gamma \le C_1 \int _{Q \times Q} \Vert x-y \Vert ^{1-\gamma } \sigma ^{-1} (x,y) dx dy, \end{aligned}$$

where

$$\begin{aligned} \sigma (x,y) = \Vert X_\alpha (x) - X_\alpha (y) \Vert _\alpha . \end{aligned}$$

Combining this with Theorem 4.2 below we get

$$\begin{aligned} I_\gamma \le {\tilde{C}}_1 \int _{Q_1 \times Q_1} \Vert x_1-y_1 \Vert ^{1-\gamma } \tau _{{\tilde{E}}_1} (x_1 - y_1)^{-H_1} dx_1 dy_1, \end{aligned}$$

for some \({\tilde{C}}_1>0\) and \(Q_1 = [ \frac{1}{2} , 1]^{d_1}\). With this inequality the assertion readily follows from the proof of [4, Theorem 5.6]. \(\square \)

The following theorem is crucial for proving Theorem 4.1, and its proof is based on [23, Theorem 1]. See also [24,25,26]. Let us remark that a similar method has been applied in [24, Theorem 3.4] to certain \(\alpha \)-stable random fields in the case \(1\le \alpha \le 2\). In the following, we extend this method to \(0<\alpha <1\); in particular, this shows that the statement of [24, Theorem 3.5] can be formulated for \(0<\alpha <1\) as well.

Theorem 4.2

There exists a constant \(C_4 >0\), depending on \(H_1, \ldots , H_m, q_1, \ldots , q_m\) and d only, such that for all \(x = (x_1, \ldots , x_m), y =(y_1, \ldots , y_m) \in [\frac{1}{2} , 1)^{d_1} \times \cdots \times [\frac{1}{2} , 1)^{d_m}\) we have

$$\begin{aligned} \sigma (x,y) \ge C_4 \cdot \tau _{{\tilde{E}}_1} (x_1 - y_1)^{H_1} , \end{aligned}$$

where \(\tau _{{\tilde{E}}_1} ( \cdot )\) is the radial part with respect to \({\tilde{E}}_1\).

Proof

Throughout this proof, we fix \(x = (x_1, \ldots , x_m)\), \(y =(y_1, \ldots , y_m) \in [\frac{1}{2} , 1)^{d_1} \times \cdots \times [\frac{1}{2} , 1)^{d_m}\). We will show that

$$\begin{aligned} \sigma (x,y) \ge C r^{H_1} \end{aligned}$$
(4.3)

for some constant \(C>0\) independent of x and y, where \(r = \tau _{{\tilde{E}}_1} (x_1 - y_1)\). Without loss of generality, we assume that \(r>0\), since (4.3) trivially holds for \(r=0\). By definition, we have

$$\begin{aligned} \sigma ^\alpha (x,y) = \int _{{\mathbb {R}}^d}| \prod _{j=1}^m ( e^{i \langle x_j, \xi _j \rangle } - 1) - \prod _{j=1}^m ( e^{i \langle y_j, \xi _j \rangle } - 1) |^\alpha \prod _{j=1}^m | \psi _j (\xi _j) | ^{-\alpha H_j - q_j} d\xi . \end{aligned}$$
(4.4)

Now, for every \(j=1, \ldots , m\) we consider a so-called bump function \(\delta _j \in C^{\infty } ({\mathbb {R}}^{d_j} )\) with values in [0, 1] such that \(\delta _j (0) = 1\) and \(\delta _j\) vanishes outside the open ball

$$\begin{aligned} B (K_j, 0) = \{ z \in {\mathbb {R}}^{d_j} : \tau _{{\tilde{E}}_j} (z) < K_j \} \end{aligned}$$

for

$$\begin{aligned} K_j&= \min \Big \{ 1, \frac{K_1^j}{K_2^j} \Big ( \sqrt{d_1} \frac{1}{2}\Big ) ^{\frac{1}{a_1^j} - \frac{1}{a_{p_j}^j} + 2 \varepsilon } , \frac{K_3^j}{K_4^j} \Big ( \sqrt{d_1} \frac{1}{2}\Big ) ^{\frac{1}{a_{p_j}^j} - \frac{1}{a_{1}^j} - 2 \varepsilon } , \frac{K_1^j}{K_4^j}, \frac{K_3^j}{K_2^j}, \\&\quad K_1^j \Big ( \sqrt{d_1} \frac{1}{2}\Big ) ^{\frac{1}{a_1^j} + \varepsilon } , K_3^j \Big ( \sqrt{d_1} \frac{1}{2}\Big ) ^{\frac{1}{a_{p_j}^j} - \varepsilon } \Big \} , \end{aligned}$$

where \(\varepsilon >0\) is some (sufficiently) small number and \(K_1^j , \ldots , K_4^j\) are the suitable constants derived from Lemma 2.1 corresponding to the matrix \({\tilde{E}}_j\). The choice of the constant \(K_j >0\) will be clear later in this proof. Let \({\hat{\delta }}_j\) be the Fourier transform of \(\delta _j\). It can be verified that \({\hat{\delta }}_j \in C^\infty ( {\mathbb {R}}^{d_j} )\) as well and that \({\hat{\delta }}_j (\lambda _j)\) decays rapidly as \(\Vert \lambda _j \Vert \rightarrow \infty \). By the Fourier inversion formula, we have

$$\begin{aligned} \delta _j (s_j) = \frac{1}{(2 \pi ) ^{d_j}} \int _{{\mathbb {R}}^{d_j} } e^{-i \langle s_j, \lambda _j \rangle } {\hat{\delta }}_j (\lambda _j) d\lambda _j \end{aligned}$$
(4.5)

for all \(s_j \in {\mathbb {R}}^{d_j}\). Let \(\delta _1^r (s_1) = \frac{1}{r^{q_1}} \delta _1 \big ( (\frac{1}{r} )^{{\tilde{E}}_1} s_1 \big ) .\) Then, by a change of variables in (4.5), for all \(s_1 \in {\mathbb {R}}^{d_1}\) we obtain

$$\begin{aligned} \delta _1^r (s_1) = \frac{1}{(2 \pi ) ^{d_1}} \int _{{\mathbb {R}}^{d_1} } e^{-i \langle s_1, \lambda _1 \rangle } {\hat{\delta }}_1 (r^{{\tilde{E}}_1^T}\lambda _1) d\lambda _1 . \end{aligned}$$
(4.6)

Using Lemma 2.1 and the fact that \(\tau _{{\tilde{E}}_1} ( \cdot )\) is \({\tilde{E}}_1\)-homogeneous, it is straightforward to see that \(\tau _{{\tilde{E}}_j} (x_j) \ge K_j\), \(\tau _{{\tilde{E}}_1} \big ( (\frac{1}{r})^{{\tilde{E}}_1}(x_1 - y_1) \big ) \ge K_1\) and \(\tau _{{\tilde{E}}_1} \big ( (\frac{1}{r})^{{\tilde{E}}_1}x_1 \big ) \ge K_1\) for \(r=\tau _{{\tilde{E}}_1} (x_1-y_1)>0\). Therefore, we have \(\delta _1^r (x_1) = 0\), \(\delta _1^r (x_1 - y_1) = 0\) and \(\delta _j (x_j) = 0\) for all \(j =2, \ldots , m\). Hence, combining this with (4.5) and (4.6) it follows that

$$\begin{aligned} I&: = \int _{{\mathbb {R}}^d}\Big ( \prod _{j=1}^m ( e^{i \langle x_j, \lambda _j \rangle } - 1) - \prod _{j=1}^m ( e^{i \langle y_j, \lambda _j \rangle } - 1) \Big ) \nonumber \\&\quad \cdot e^{- i \langle x_1, \lambda _1 \rangle } {\hat{\delta }}_1 (r^{{\tilde{E}}_1^T} \lambda _1 ) \prod _{j=2}^m {\hat{\delta }}_j (\lambda _j) d \lambda \nonumber \\&= (2 \pi )^d \Big ( \delta _1^r (0) - \delta _1^r (x_1) \Big ) \prod _{j=2}^m \Big ( \delta _j(0) - \delta _j (x_j) \Big ) \nonumber \\&\quad - (2 \pi )^d \Big ( \delta _1^r (x_1 - y_1) - \delta _1^r (x_1) \Big ) \prod _{j=2}^m \Big ( \delta _j(x_j-y_j) - \delta _j (x_j) \Big ) \nonumber \\&= (2 \pi )^d \frac{1}{r^{q_1}} . \end{aligned}$$
(4.7)

For every \(\alpha \in (0,2)\) we can choose \(k \in {\mathbb {N}}\) such that \(k \alpha > 1\) and let \(\beta ' >1\) be the constant such that \(\frac{1}{k\alpha } + \frac{1}{\beta '} = 1\). We first show that

$$\begin{aligned}&\Big ( \int _{{\mathbb {R}}^d}| \prod _{j=1}^m ( e^{i \langle x_j, \lambda _j \rangle } - 1) - \prod _{j=1}^m ( e^{i \langle y_j, \lambda _j \rangle } - 1) |^{k\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} \nonumber \\&\quad \le 2^{k \alpha (m+1)} \sigma (x,y)^{\frac{1}{k}} . \end{aligned}$$
(4.8)

For \(\lambda = (\lambda _1, \ldots , \lambda _m) \in {\mathbb {R}}^{d_1} \times \cdots \times {\mathbb {R}}^{d_m}\), let

$$\begin{aligned} z (\lambda ) = \prod _{j=1}^m ( e^{i \langle x_j, \lambda _j \rangle } - 1) - \prod _{j=1}^m ( e^{i \langle y_j, \lambda _j \rangle } - 1) \end{aligned}$$

and note that, since \(| e^{it} -1 |^2 = 2 - 2 \cos t \le 4\) for all \(t \in {\mathbb {R}}\), it follows that

$$\begin{aligned} |z (\lambda )| \le \prod _{j=1}^m | e^{i \langle x_j, \lambda _j \rangle } - 1| + \prod _{j=1}^m | e^{i \langle y_j, \lambda _j \rangle } - 1| \le 2^{m+1}. \end{aligned}$$

From this, we obtain

$$\begin{aligned}&\Big ( \int _{{\mathbb {R}}^d}| z (\lambda ) |^{k\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} \\&\quad = \Big ( \int _{\{\lambda \in {{\mathbb {R}}^d}: | z (\lambda ) | \le 1 \} } | z (\lambda ) |^{k\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \\&\qquad + \int _{\{\lambda \in {{\mathbb {R}}^d}: | z (\lambda ) |> 1 \} } | z (\lambda ) |^{k\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} \\&\quad \le \Big ( \int _{\{\lambda \in {{\mathbb {R}}^d}: | z (\lambda ) | \le 1 \} } | z (\lambda ) |^{\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \\&\qquad + \int _{\{\lambda \in {{\mathbb {R}}^d}: | z (\lambda ) | > 1 \} } | z (\lambda ) |^{k\alpha + \alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} \\&\quad \le (2^{m+1})^{k \alpha } \Big ( \int _{{\mathbb {R}}^d}| z (\lambda ) |^{\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} = (2^{m+1})^{k \alpha } \sigma (x,y)^{\frac{1}{k}}. \end{aligned}$$

Now, using (4.8), by Hölder’s inequality and (4.4) we have

$$\begin{aligned} I&\le \Big ( \int _{{\mathbb {R}}^d}| \prod _{j=1}^m ( e^{i \langle x_j, \lambda _j \rangle } - 1) - \prod _{j=1}^m ( e^{i \langle y_j, \lambda _j \rangle } - 1) |^{k\alpha } \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} d\lambda \Big ) ^{\frac{1}{k\alpha }} \nonumber \\&\quad \cdot \Big ( \int _{{\mathbb {R}}^d}\frac{1}{\big ( \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-\alpha H_j - q_j} \big )^{\frac{\beta '}{k\alpha }} } | {\hat{\delta }}_1 (r^{{\tilde{E}}_1^T} \lambda _1 ) \prod _{j=2}^m {\hat{\delta }}_j (\lambda _j) |^{\beta '} d\lambda \Big ) ^{\frac{1}{\beta '}} \nonumber \\&\le 2^{k \alpha (m+1)} \sigma (x,y)^{\frac{1}{k}} \cdot r^{-\frac{H_1}{k} - \frac{q_1}{k\alpha }- \frac{q_1}{\beta '}} \nonumber \\&\quad \cdot \Big ( \int _{{\mathbb {R}}^d}\frac{1}{\big ( \prod _{j=1}^m | \psi _j (\lambda _j) | ^{-H_j - \frac{q_j}{\alpha }} \big )^{\beta '} } \prod _{j=1}^m | {\hat{\delta }}_j (\lambda _j) |^{\beta '} d\lambda \Big ) ^{\frac{1}{\beta '}} \nonumber \\&= \tilde{{\tilde{C}}} \cdot \big ( \sigma (x,y) \cdot r^{-H_1 - kq_1} \big ) ^{\frac{1}{k}}, \end{aligned}$$
(4.9)

where \(\tilde{{\tilde{C}}} >0\) is a constant which only depends on \(H_1, \ldots , H_m, q_1, \ldots , q_m, k, \alpha , d\) and the bump functions \(\delta _j\). Combining (4.7) and (4.9) yields \((2 \pi )^d r^{-q_1} = I \le \tilde{{\tilde{C}}} \, \sigma (x,y)^{\frac{1}{k}} r^{-\frac{H_1}{k} - q_1}\), i.e., \(\sigma (x,y) \ge \big ( (2 \pi )^d / \tilde{{\tilde{C}}} \big )^k r^{H_1}\), which is (4.3). This finishes the proof of the theorem. \(\square \)

As an immediate consequence of Theorem 4.1, we obtain the following.

Corollary 4.3

Let the assumptions of Theorem 3.1 hold. Then any continuous version of \(X_\alpha \) admits \(\min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\) as the Hölder critical exponent.

Remark 4.4

Let \(\alpha = 2\), \(m=d\) and \(d_j = {\tilde{E}}_j = 1\) for all \(j=1, \ldots , d\), and consider the functions \(\psi _j (\xi _j) = | \xi _j |\) for \(\xi _j \in {\mathbb {R}}\). Clearly, each \(\psi _j\) is a continuous homogeneous function and satisfies \(\psi _j (\xi _j ) \ne 0\) for all \(\xi _j \ne 0\). Thus, by Theorem 3.1, we can define

$$\begin{aligned} X_2 (x) = {\text {Re}} \int _{{\mathbb {R}}^d}\prod _{j=1}^d ( e^{i x_j \xi _j} -1 ) |\xi _j|^{-H_j -\frac{1}{2}} W_2 (d \xi ), \quad x= (x_1, \ldots , x_d) \in {{\mathbb {R}}^d}, \end{aligned}$$

for all \(0<H_j<1, j=1, \ldots , d\) and the statement of Theorem 4.1 becomes

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_2} \big ( [0,1]^d \big ) = \dim _{{\mathcal {B}}} G_{X_2} \big ( [0,1]^d \big ) = d+1 - \min _{1 \le j \le d} H_j, \quad a.s. \end{aligned}$$

Further, up to a multiplicative constant, the random field \(X_2\) is a fractional Brownian sheet with Hurst indices \(H_1, \ldots , H_d\) (see [10]). Thus, Theorem 4.1 can be seen as a generalization of [2, Theorem 1.3]. Further, as noted above, for \(m=1, d=d_1\) and \(E=E_1 = {\tilde{E}}_1\) the random field \(X_\alpha \) given by (3.1) coincides with the random field in [4, Theorem 4.1] and the statement of Theorem 4.1 becomes

$$\begin{aligned} \dim _{{\mathcal {H}}} G_{X_\alpha } \big ( [0,1]^d \big ) = \dim _{{\mathcal {B}}} G_{X_\alpha } \big ( [0,1]^d \big ) = d+1 - \frac{H_1}{a^1_{p_1}}, \quad a.s. \end{aligned}$$

which is the statement of [4, Theorem 5.6] for \(\alpha =2\) and [5, Proposition 5.7] for \(\alpha \in (0,2)\). We finally remark that Theorem 4.1 can be proven similarly, if we replace \([0,1]^d\) in (4.1) by any other compact cube of \({{\mathbb {R}}^d}\).
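
Purely as an illustration of the representation (3.1) in this special case, the following sketch approximates \(X_2\) for \(d=2\) by truncating the frequency domain and replacing \(W_2(d\xi )\) by independent complex Gaussian weights on a grid; the Hurst indices, the cutoff L and the grid size n are arbitrary choices, and no claim about the quality of this crude Riemann-sum approximation is made.

```python
import numpy as np

rng = np.random.default_rng(0)

H1, H2 = 0.4, 0.7          # illustrative Hurst indices in (0, 1)
L, n = 60.0, 300           # frequency cutoff and grid size (arbitrary)
xi = np.linspace(-L, L, n + 1) + L / n   # shifted grid, avoids xi = 0
dxi = xi[1] - xi[0]
XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")

# the kernel |xi_1|^{-H_1-1/2} |xi_2|^{-H_2-1/2} of (3.1) (q_j = 1, alpha = 2)
amp = np.abs(XI1) ** (-H1 - 0.5) * np.abs(XI2) ** (-H2 - 0.5)

# independent complex Gaussian cell weights standing in for W_2(d xi)
Z = rng.standard_normal(XI1.shape) + 1j * rng.standard_normal(XI1.shape)
Z *= np.sqrt(dxi**2 / 2.0)

def X2(x1, x2):
    """Riemann-sum approximation of the harmonizable integral (3.1)."""
    f = (np.exp(1j * x1 * XI1) - 1.0) * (np.exp(1j * x2 * XI2) - 1.0)
    return float(np.real(np.sum(f * amp * Z)))

print(X2(0.5, 0.5), X2(0.0, 0.7))  # the second value is 0, cf. the note after (3.1)
```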

Let us conclude by noting that the Hausdorff dimension in (4.1) depends on the single index \(\min _{1 \le j \le m} \frac{H_j}{a_{p_j}^j}\) only, whereas in higher dimensions all indices are relevant; the corresponding results in more general dimensions can be found in [18]. See also [19, 20] for related results.