1 Introduction

Fractional calculus, i.e., the calculus of noninteger-order integration and differentiation, can be regarded as a generalization of the traditional calculus. Because fractional operators can faithfully capture anomalous phenomena in real-world problems, fractional calculus has become a hot topic [9]. To date, various definitions of fractional integrals and derivatives exist, such as the Riemann-Liouville, Caputo, Riesz, and Hadamard integrals and/or derivatives [9, 10, 27, 31]. Among these, the Riemann-Liouville, Caputo, and Riesz integrals and/or derivatives are the most frequently used. On the other hand, the Hadamard derivative seems worth studying in depth, because it is well suited to characterizing the Lomnitz creep law in elasticity [24,25,26]. Very recently, this kind of fractional derivative has been used to successfully predict the COVID-19 dynamics of the Omicron variant [1].

In the following, we introduce several definitions which will be utilized later on. The Hadamard fractional derivative of order \(\alpha >0\) is defined as [5, 9]

$$\begin{aligned} _{\text{H}}\mathrm{{D}}_{a,t}^{\alpha }y(t)&=\delta ^{n} \left[ \,_{\text{H}}\mathrm{{D}}_{a,t}^{-(n-\alpha )}y(t)\right] \nonumber \\&=\frac{1}{\Gamma (n-\alpha )}\delta ^{n}\int _{a}^{t} \left( \log {t}-\log {s}\right) ^{n-\alpha -1}y(s)\frac{\mathrm{{d}}s}{s}, \end{aligned}$$
(1)

where \(\delta ^n g(t)=\left( t\frac{\textrm{d}}{\textrm{d}t}\right) ^n g(t)=\delta (\delta ^{n-1}g(t))~ \textrm{with}~\delta ^0\,g(t)=g(t)\) for a given \(g(t)\), \(t>a>0\), \(n-1<\alpha <n\in \mathbb {Z}^+,\) \(y(t)\in AC_\delta ^n[a,b]=\{y{:}\,[a,b]\rightarrow \mathbb {R}{:}\, \delta ^{n-1}y(t)\in AC[a,b]\},\) and \(AC[a,b]\) is the usual space of functions which are absolutely continuous on \([a,b]\). Compared with the Riemann-Liouville derivative, the Hadamard derivative has two main features: one is that the Hadamard derivative contains the logarithmic kernel \((\log t-\log s)\) instead of \((t-s)\), and the other is that the Hadamard derivative can be viewed as a generalization of the operator \(\left(t\frac{\textrm{d}}{\textrm{d}t}\right)^n\), while the Riemann-Liouville derivative is seen as an extension of the classical differential operator \(\frac{\textrm{d}^n}{\textrm{d}t^n}\).

The Hadamard derivative has been studied extensively in pure mathematics, but it has so far played a lesser role in applications [5, 14]. A somewhat different definition, obtained by modifying the Hadamard derivative, was therefore proposed: the Caputo-Hadamard derivative, whose expression first appeared in [9]. It is obtained by interchanging the order of differentiation and integration in (1) and reads as follows:

$$\begin{aligned} _{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}{y(t)}&= \,_{\text{H}}\mathrm{{D}}_{a,t}^{-(n-\alpha )}[\delta ^{n}y(t)]\nonumber \\&=\frac{1}{\Gamma (n-\alpha )} \int _{a}^{t} \left( \log {t}-\log {s}\right) ^{n-\alpha -1}\delta ^{n}y(s) \frac{\textrm{d}s}{s}, \end{aligned}$$
(2)

where \(t>a>0\), \(n-1<\alpha <n\in \mathbb {Z}^+\), \(y(t)\in AC_\delta ^n[a,b]\).
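Before proceeding, a quick numerical sanity check of (2) may be helpful. The following Python sketch (our illustration, not part of the cited references) exploits the fact that the change of variable \(s=a{\text{e}}^{u}\) turns (2) into a classical Caputo derivative in \(u=\log (s/a)\), and compares the result on the test function \(y(t)=(\log (t/a))^{\gamma }\) with the closed form \(\frac{\Gamma (\gamma +1)}{\Gamma (\gamma +1-\alpha )}(\log (t/a))^{\gamma -\alpha }\) derived later in (25). SciPy's quad with an algebraic weight absorbs the endpoint singularity.

```python
import math
from scipy.integrate import quad

a, alpha, gamma_, t = 1.0, 0.6, 2.0, 2.5
x = math.log(t / a)

# After s = a*exp(u), definition (2) with n = 1 becomes
#   1/Gamma(1-alpha) * int_0^x (x-u)^(-alpha) d/du[u^gamma] du;
# the 'alg' weight handles the (x-u)^(-alpha) singularity at u = x.
integrand = lambda u: gamma_ * u ** (gamma_ - 1.0)
val, _ = quad(integrand, 0.0, x, weight="alg", wvar=(0.0, -alpha))
numeric = val / math.gamma(1.0 - alpha)

exact = math.gamma(gamma_ + 1.0) / math.gamma(gamma_ + 1.0 - alpha) * x ** (gamma_ - alpha)
print(numeric, exact)   # the two values agree to quadrature accuracy
```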

With the development of fractional calculus, an increasing number of generalizations of the discrete Gronwall inequality have been derived to address difficulties encountered in fractional differential equations [8, 22, 23]. However, all these discrete Gronwall inequalities apply to fractional differential equations with Caputo derivatives. Until now, few discrete Gronwall inequalities have been derived for Caputo-Hadamard fractional differential equations, with [6] being a notable exception.

The first purpose of this paper is to extend the existing fractional Gronwall inequalities so that they can be used in the numerical analysis of Caputo-Hadamard fractional differential equations. Another objective is to study the L1/LDG method for the following Caputo-Hadamard fractional sub-diffusion equation [13, 14, 30]:

$$\begin{aligned} \left\{ \begin{aligned}&_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x,t)-\Delta u(x,t)=f(x,t),\, (x,t)\in \varOmega \times (a,T], \\&u(x,a)=u_{a}(x),\, x\in \varOmega , \\&u(x,t)=0,\,(x,t)\in \partial \varOmega \times (a,T], \end{aligned}\right. \end{aligned}$$
(3)

where \(0<\alpha <1\), \(\varOmega =(0, L)\) is a bounded domain, \(T>a\) is a fixed final time, and the source term \(f(x,t)\) and the initial value \(u_{a}(x)\) are given functions. The nonuniform L1 formula coupled with the LDG spatial discretization has been used in the literature [14] to solve (3). However, the obtained error bound blows up as \(\alpha \rightarrow 1^-\). Such an error bound is called \(\alpha \)-nonrobust [3]. This paper aims to analyze the stability and error estimates of the uniform and nonuniform L1/LDG methods by using the newly derived discrete Gronwall inequalities and to demonstrate the \(\alpha \)-robustness of these methods, that is, the error bounds do not blow up as \(\alpha \rightarrow 1^-\); this is the second contribution of this paper.

The present article is organized as follows. In Sect. 2, two discrete forms of the fractional Gronwall inequalities are presented that can be used in the numerical analysis of the Caputo-Hadamard fractional partial differential equation. In Sect. 3, the uniform and nonuniform L1/LDG methods for solving the Caputo-Hadamard fractional sub-diffusion equation (3) are displayed. By using the established Gronwall inequalities, the stability and the convergence of the proposed methods are analyzed. Numerical examples are given in Sect. 4 to confirm the theoretical error analysis. The last section contains brief conclusions and remarks.

2 Discrete Gronwall Inequality

This section is devoted to deducing the discrete Gronwall inequalities that can be used in the numerical analysis of the time-fractional partial differential equation with the Caputo-Hadamard derivative.

2.1 Uniform L1 Approximation in the Sense of Logarithm

We first introduce the uniform L1 temporal discretization in the sense of logarithm for the Caputo-Hadamard derivative in the time interval \([a,T]\). For an arbitrarily given \(T>a\), let \(M\) be a positive integer and define \(t_n=a(T/a)^{n/M}\) for \(n=0,1,\cdots ,M\). It can be seen that the interval \([\log a,\log T]\) is divided into \(\log a=\log t_0<\log t_1<\cdots<\log t_{n-1}<\log t_{n}<\cdots <\log t_{M}=\log T\) with \(\tau =\log t_{n+1}-\log t_n=\frac{1}{M}\log \frac{T}{a}\), \(n=0,1,\cdots ,M-1\). Denote \(u^n=u(\cdot ,t_n)\). The uniform L1 formula in the sense of logarithm for the Caputo-Hadamard time-fractional derivative of order \(0<\alpha <1\) at the time \(t_n\) \((1\leqslant n\leqslant M)\) is given by [6]

$$\begin{aligned} _{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x, t)\Big |_{t=t_{n}}=\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )} \sum _{j=1}^{n}a_{n-j}\left( u(x,t_{j})-u(x,t_{j-1})\right) +R^{n} \!\!{:}{=} \,\delta _{\log }^{\alpha }u^{n}+R^{n}, \end{aligned}$$
(4)

where

$$\begin{aligned} a_j=(j+1)^{1-\alpha }-j^{1-\alpha },\,j\geqslant 0. \end{aligned}$$
(5)
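As an illustration (ours, not taken from [6]), the sketch below assembles the mesh \(t_n=a(T/a)^{n/M}\) and the coefficients (5), applies \(\delta _{\log }^{\alpha }\) to \(u(t)=(\log (t/a))^2\), and observes the \(O(\tau ^{2-\alpha })\) truncation error stated in Lemma 1 below.

```python
import math
import numpy as np

# Sketch (ours): the uniform-in-log L1 operator (4)-(5) applied to
# u(t) = (log(t/a))^2, whose exact Caputo-Hadamard derivative at t = T
# is 2/Gamma(3-alpha)*(log(T/a))^(2-alpha).
a, T, alpha = 1.0, 2.0, 0.4

def l1_log_uniform_error(M):
    tau = math.log(T / a) / M
    t = a * (T / a) ** (np.arange(M + 1) / M)          # t_n = a (T/a)^(n/M)
    u = np.log(t / a) ** 2
    j = np.arange(M)
    coef = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # a_j in (5)
    du = np.diff(u)                                    # u^j - u^{j-1}
    approx = tau ** (-alpha) / math.gamma(2 - alpha) * np.dot(coef[::-1], du)
    exact = 2 / math.gamma(3 - alpha) * math.log(T / a) ** (2 - alpha)
    return abs(approx - exact)

for M in (32, 64, 128):
    e1, e2 = l1_log_uniform_error(M), l1_log_uniform_error(2 * M)
    print(M, e1, math.log2(e1 / e2))    # observed rate tends to 2 - alpha
```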

Lemma 1

[6] Suppose that \(0<\alpha <1\) and \(u(\cdot ,t)\in C^{2}[a,T]\). Then, the coefficients \(a_j\) \((j=0,1,\cdots ,M)\) satisfy

$$\begin{aligned} 1=a_0>a_1>\cdots>a_M>0, \end{aligned}$$

and the local truncation error \(R^{n}\ (1\leqslant n\leqslant M)\) in (4) has the following estimate:

$$\begin{aligned} |R^{n}|\leqslant C\tau ^{2-\alpha }. \end{aligned}$$

For a sequence of functions \(\{w^n\}_{n=0}^M\), we define

$$\begin{aligned} \delta _{\log }^{\alpha }w^{n}=\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )} \sum _{j=1}^{n}a_{n-j}\left( w^j-w^{j-1}\right) =\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )}\sum _{j=0}^nb_{n-j}w^j, \end{aligned}$$
(6)

where \(n=1,\cdots ,M\), and

$$\begin{aligned} b_0=a_0,\,b_n=-a_{n-1},\,b_{n-j}=a_{n-j}-a_{n-j-1},\,j=1,\cdots ,n-1. \end{aligned}$$

Lemma 2

[21] Let \(p_n\) be a sequence defined by

$$\begin{aligned} p_0=1,\,p_n=\sum _{j=1}^n(a_{j-1}-a_j)p_{n-j},\,n\geqslant 1. \end{aligned}$$

Then, it holds that

\(\mathrm{{(i)}}\) \(\displaystyle 0<p_n<1,\,\sum _{j=k}^n p_{n-j}a_{j-k}=1\; {for}\;1\leqslant k\leqslant n;\)

\(\mathrm{{(ii)}}\) \(\displaystyle \Gamma (2-\alpha )\sum _{j=1}^np_{n-j}\leqslant \frac{n^\alpha }{\Gamma (1+\alpha )};\)

\(\mathrm{{(iii)}}\) \(\displaystyle \frac{\Gamma (2-\alpha )}{\Gamma (1+(m-1)\alpha )} \sum _{j=1}^{n-1}p_{n-j}j^{(m-1)\alpha }\leqslant \frac{n^{m\alpha }}{\Gamma (1+m\alpha )}\;{for}\;m\in \mathbb {Z}^+.\)
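The sequence \(p_n\) is straightforward to generate from its recursion; the following sketch (ours) does so and spot-checks properties (i) and (ii) numerically.

```python
import math
import numpy as np

# Sketch (ours): the complementary sequence p_n of Lemma 2, built from
# a_j = (j+1)^(1-alpha) - j^(1-alpha).
alpha, n = 0.5, 50
j = np.arange(n + 1)
a = (j + 1) ** (1 - alpha) - j ** (1 - alpha)

p = np.empty(n + 1)
p[0] = 1.0
for m in range(1, n + 1):
    # p_m = sum_{j=1}^m (a_{j-1} - a_j) p_{m-j}
    p[m] = np.dot(a[:m] - a[1 : m + 1], p[m - 1 :: -1])

assert np.all((0 < p) & (p <= 1))          # (i): 0 < p_n < 1 for n >= 1, p_0 = 1
for k in (1, 10, 25):                      # (i): the summation identity
    s = sum(p[n - jj] * a[jj - k] for jj in range(k, n + 1))
    assert abs(s - 1.0) < 1e-12
# (ii): Gamma(2-alpha) * (p_{n-1} + ... + p_0) <= n^alpha / Gamma(1+alpha)
lhs = math.gamma(2 - alpha) * p[:n].sum()
print(lhs, n ** alpha / math.gamma(1 + alpha))
```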

Definition 1

Let \({\textbf{x}}=(x_1,x_2,\cdots ,x_n)^\top \) and \({\textbf{y}}=(y_1,y_2,\cdots ,y_n)^\top \) be two vectors. For \(i=1,2,\cdots ,n\), if \(x_i\leqslant y_i\), \({\textbf{x}}\) is said to be less than or equal to \({\textbf{y}}\), denoted as \({\textbf{x}}\leqslant {\textbf{y}}\).

Lemma 3

Let \(\lambda \geqslant 0\) be a constant, \(\textbf{e}=(1,1,\cdots ,1)^\top \in \mathbb {R}^n\), and

$$\begin{aligned} J=2\Gamma (2-\alpha )\lambda \tau ^\alpha \begin{pmatrix} 0&{}p_1&{}\cdots &{}p_{n-2}&{}p_{n-1}\\ 0&{}0&{}\cdots &{}p_{n-3}&{}p_{n-2}\\ \vdots &{}\vdots &{} &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}0&{}p_1\\ 0&{}0&{}\cdots &{}0&{}0 \end{pmatrix}_{n\times n}. \end{aligned}$$
(7)

Then, it holds that

\(\mathrm{{(i)}}\) \(\displaystyle J^i=0,\,i\geqslant n;\)

\(\mathrm{{(ii)}}\) \(\displaystyle J^m\textbf{e}\leqslant \frac{1}{\Gamma (1+m\alpha )} \Big (\big (2\lambda (\log t_n-\log a)^\alpha \big )^m, \big (2\lambda (\log t_{n-1}-\log a)^\alpha \big )^m,\cdots , \big (2\lambda (\log t_1-\log a)^\alpha \big )^m\Big )^\top ,\,m=0,1,2,\cdots ;\)

\(\mathrm{{(iii)}}\) \(\displaystyle \sum _{j=0}^iJ^j\textbf{e}=\sum _{j=0}^{n-1}J^j\textbf{e}\leqslant \Big (E_{\alpha }\big (2\lambda (\log t_n-\log a)^\alpha \big ), E_{\alpha }\big (2\lambda (\log t_{n-1}-\log a)^\alpha \big ),\cdots , E_{\alpha }\big (2\lambda (\log t_1-\log a)^\alpha \big )\Big )^\top ,\,i\geqslant n.\)

Here, \(\displaystyle E_{\alpha }(z)=\sum _{k=0}^{\infty }\frac{z^k}{\Gamma (k\alpha +1)}\) is the Mittag-Leffler function.

Proof

Since J is an upper triangular matrix, it is clear that (i) holds.

We prove (ii) by mathematical induction. If \(m=0\), then \(J^m\textbf{e}=\textbf{e}\), which is identical to (ii). Assume that (ii) is valid for \(m=k\). We now prove it for \(m=k+1\). From (7) and the inductive hypothesis, we have

$$\begin{aligned} \begin{aligned} J^{k+1}\textbf{e}&=JJ^k\textbf{e}\\&\leqslant \frac{J}{\Gamma (1+k\alpha )} \Big (\big (2\lambda (\log t_n-\log a)^\alpha \big )^k, \big (2\lambda (\log t_{n-1}-\log a)^\alpha \big )^k,\\&\quad\cdots, \big (2\lambda (\log t_1-\log a)^\alpha \big )^k\Big )^\top \\&=\frac{\Gamma (2-\alpha )(2\lambda \tau ^\alpha )^{k+1}}{\Gamma (1+k\alpha )}\left( \sum _{j=1}^{n-1}p_{n-j}j^{k\alpha },\sum _{j=1}^{n-2}p_{n-1-j}(j-1)^{k\alpha },\cdots ,p_11^{k\alpha },0\right) ^\top . \end{aligned} \end{aligned}$$
(8)

Noting \(\log t_n-\log a=n\tau \) and applying Lemma 2, we obtain

$$\begin{aligned} \begin{aligned} J^{k+1}\textbf{e}&\leqslant \frac{(2\lambda \tau ^\alpha )^{k+1}}{\Gamma (1+(k+1)\alpha )}\left( n^{(k+1)\alpha },(n-1)^{(k+1)\alpha },\cdots ,2^{(k+1)\alpha },1^{(k+1)\alpha }\right) ^\top \\&=\frac{1}{\Gamma (1+(k+1)\alpha )} \Big (\big (2\lambda (\log t_n-\log a)^\alpha \big )^{k+1}, \big (2\lambda (\log t_{n-1}-\log a)^\alpha \big )^{k+1},\\&\quad \cdots ,\big (2\lambda (\log t_1-\log a)^\alpha \big )^{k+1}\Big )^\top . \end{aligned} \end{aligned}$$
(9)

Thus, (ii) holds for \(m=k+1\).

By using (i), one has

$$\begin{aligned} \sum _{j=0}^i{J}^j\textbf{e}=\sum _{j=0}^{n-1}{J}^j\textbf{e},\,i\geqslant n. \end{aligned}$$

Then, (ii) leads to

$$\begin{aligned} \begin{aligned} \sum _{j=0}^{n-1}J^j\textbf{e}\leqslant &\sum _{j=0}^{n-1}\frac{1}{\Gamma (1+j\alpha )} \Big (\big (2\lambda (\log t_n-\log a)^\alpha \big )^{j}, \big (2\lambda (\log t_{n-1}-\log a)^\alpha \big )^{j},\\& \cdots ,\big (2\lambda (\log t_1-\log a)^\alpha \big )^{j}\Big )^\top \\ \leqslant &\Big (E_{\alpha }\big (2\lambda (\log t_n-\log a)^\alpha \big ), E_{\alpha }\big (2\lambda (\log t_{n-1}-\log a)^\alpha \big ),\\& \cdots, E_{\alpha }\big (2\lambda (\log t_1-\log a)^\alpha \big )\Big )^\top , \end{aligned} \end{aligned}$$

which means that (iii) is true.

The proof is completed.

Lemma 4

(Gronwall Inequality I) Let \(\lambda _1\) and \(\lambda _2\) be nonnegative constants, and let \(\{w^n\}_{n=0}^\infty \) and \(\{g^n\}_{n=0}^\infty \) be sequences of nonnegative numbers satisfying

$$\begin{aligned} \delta _{\log }^{\alpha }w^{n}\leqslant \lambda _1w^n+\lambda _2w^{n-1}+g^n,\;n\geqslant 1. \end{aligned}$$
(10)

Then, when \(\tau \leqslant \root \alpha \of {\frac{1}{2\Gamma (2-\alpha )\lambda _1}}\), one has

$$\begin{aligned} w^n\leqslant 2E_\alpha \left( 2\lambda (\log t_n-\log a)^\alpha \right) \left( w^0+\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{0\leqslant j\leqslant n}g^j\right) ,\,n=1,2,\cdots ,M, \end{aligned}$$

where \(\lambda =\lambda _1+\frac{\lambda _2}{2-2^{1-\alpha }}\).

Proof

Using the definition of the operator \(\delta _{\log }^{\alpha }\) and (10), and writing \(\delta _tw^k=w^k-w^{k-1}\), we have

$$\begin{aligned} \sum _{k=1}^ja_{j-k}\delta _tw^k\leqslant \Gamma (2-\alpha )\tau ^\alpha (\lambda _1w^j+\lambda _2w^{j-1})+\Gamma (2-\alpha )\tau ^\alpha g^j. \end{aligned}$$
(11)

Multiplying inequality (11) by \(p_{n-j}\) (see Lemma 2) and summing j from 1 to n, we get

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^np_{n-j}\sum _{k=1}^ja_{j-k}\delta _tw^k\\&\leqslant \Gamma (2-\alpha )\tau ^\alpha \sum _{j=1}^np_{n-j}(\lambda _1w^j+\lambda _2 w^{j-1})+\Gamma (2-\alpha )\tau ^\alpha \sum _{j=1}^np_{n-j}g^j. \end{aligned} \end{aligned}$$
(12)

Invoking (i) and (ii) in Lemma 2, we obtain

$$\begin{aligned} \sum _{j=1}^np_{n-j}\sum _{k=1}^ja_{j-k}\delta _tw^k=\sum _{k=1}^n\delta _tw^k\sum _{j=k}^np_{n-j}a_{j-k}=\sum _{k=1}^n\delta _tw^k=w^n-w^0,\,n\geqslant 1, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \Gamma (2-\alpha )\tau ^\alpha \sum _{j=1}^np_{n-j}g^j&\leqslant \Gamma (2-\alpha )\tau ^\alpha \max _{1\leqslant j\leqslant n}g^j \sum _{j=1}^np_{n-j}\\&\leqslant \frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{1\leqslant j\leqslant n}g^j,\,n\geqslant 1. \end{aligned} \end{aligned}$$

Therefore, the following inequality:

$$\begin{aligned} w^n\leqslant \Psi _n+\Gamma (2-\alpha )\tau ^\alpha \sum _{j=1}^np_{n-j}(\lambda _1w^j+\lambda _2w^{j-1}),\,n\geqslant 1 \end{aligned}$$

holds, where

$$\begin{aligned} \Psi _n{:}{=}\,w^0+\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{1\leqslant j\leqslant n}g^j. \end{aligned}$$

Note that \(\Psi _n\geqslant \Psi _k\) for \(n\geqslant k\geqslant 1\). Choosing \(\tau \) such that \(1-\Gamma (2-\alpha )\tau ^\alpha \lambda _1\geqslant \frac{1}{2}\), i.e., \(\tau \leqslant \root \alpha \of {\frac{1}{2\Gamma (2-\alpha )\lambda _1}}\), we have

$$\begin{aligned} w^n\leqslant 2\Psi _n+2\Gamma (2-\alpha )\left( \lambda _1\tau ^\alpha \sum _{j=1}^{n-1}p_{n-j}w^j+ \lambda _2\tau ^\alpha \sum _{j=1}^np_{n-j}w^{j-1}\right) ,\,n\geqslant 1. \end{aligned}$$
(13)

Define the vector \(W=(w^n,w^{n-1},\cdots ,w^1)^\top \). Then, (13) can be written in a matrix form as

$$\begin{aligned} W\leqslant (\lambda _1J_1+\lambda _2J_2)W+2\Psi _n\textbf{e}, \end{aligned}$$
(14)

where

$$\begin{aligned} J_1=2\Gamma (2-\alpha )\tau ^\alpha \begin{pmatrix} 0&{}p_1&{}\cdots &{}p_{n-2}&{}p_{n-1}\\ 0&{}0&{}\cdots &{}p_{n-3}&{}p_{n-2}\\ \vdots &{}\vdots &{}&{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}0&{}p_1\\ 0&{}0&{}\cdots &{}0&{}0 \end{pmatrix}_{n\times n}, \end{aligned}$$

and

$$\begin{aligned} J_2=2\Gamma (2-\alpha )\tau ^\alpha \begin{pmatrix} 0&{}p_0&{}\cdots &{}p_{n-3}&{}p_{n-2}\\ 0&{}0&{}\cdots &{}p_{n-4}&{}p_{n-3}\\ \vdots &{}\vdots &{} &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}0&{}p_0\\ 0&{}0&{}\cdots &{}0&{}0 \end{pmatrix}_{n\times n}. \end{aligned}$$

It follows from Lemma 2 that

$$\begin{aligned} p_i\leqslant \frac{1}{a_0-a_1}p_{i+1}=\frac{1}{2-2^{1-\alpha }}p_{i+1},\,i\geqslant 0. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} J_2W\leqslant \frac{1}{2-2^{1-\alpha }}J_1W. \end{aligned}$$
(15)

Let \(\lambda =\lambda _1+\frac{\lambda _2}{2-2^{1-\alpha }}\) in (7). Then, combine (14) and (15) to obtain

$$\begin{aligned} \begin{aligned} W&\leqslant JW+2\Psi _n\textbf{e}\\&\leqslant J(JW+2\Psi _n\textbf{e})+2\Psi _n\textbf{e}\\&= J^2W+2\Psi _n\sum _{j=0}^1J^j\textbf{e}\\&\leqslant \cdots \\&\leqslant J^n W+2\Psi _n\sum _{j=0}^{n-1}J^j\textbf{e}, \end{aligned} \end{aligned}$$
(16)

which, together with (i) in Lemma 3, yields

$$\begin{aligned} W\leqslant 2\Psi _n\sum _{j=0}^{n-1}J^j\textbf{e}. \end{aligned}$$

The conclusion readily follows from Lemma 3 (iii).

The above Gronwall Inequality I (see Lemma 4) is useful for numerical analysis if the uniform L1 formula in the logarithm sense is used.
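To see the inequality at work, the sketch below (ours) generates a sequence satisfying (10) with equality and checks that it stays below the bound of Lemma 4; the Mittag-Leffler function is evaluated by truncating its series, which suffices for the moderate arguments arising here.

```python
import math
import numpy as np

# Sketch (ours): solve (10) with equality forward in n and compare against
# the bound of Gronwall Inequality I (Lemma 4).
aa, T, alpha, lam1, lam2, M = 1.0, 2.0, 0.6, 1.0, 0.5, 200
tau = math.log(T / aa) / M          # satisfies the step restriction of Lemma 4
c = tau ** (-alpha) / math.gamma(2 - alpha)
jj = np.arange(M + 1)
acoef = (jj + 1) ** (1 - alpha) - jj ** (1 - alpha)

def ml(alpha, z, terms=80):         # truncated Mittag-Leffler series E_alpha(z)
    return sum(z ** k / math.gamma(k * alpha + 1) for k in range(terms))

w = np.empty(M + 1); w[0] = 1.0; g = 1.0
for n in range(1, M + 1):
    hist = acoef[n - 1] * w[0]
    hist += sum((acoef[n - j - 1] - acoef[n - j]) * w[j] for j in range(1, n))
    w[n] = (c * hist + lam2 * w[n - 1] + g) / (c - lam1)    # a_0 = 1

lam = lam1 + lam2 / (2 - 2 ** (1 - alpha))
x = (np.arange(1, M + 1) * tau) ** alpha        # (log t_n - log a)^alpha
bound = [2 * ml(alpha, 2 * lam * xi) * (w[0] + xi / math.gamma(1 + alpha) * g)
         for xi in x]
print(np.all(w[1:] <= np.array(bound)))         # expected: True
```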

2.2 Nonuniform L1 Approximation in the Sense of Logarithm

The nonuniform L1 temporal discretization of the Caputo-Hadamard derivative in the time interval \([a,T]\) is first introduced. For the given \(T>a\), let \(M\) be a positive integer and define

$$\begin{aligned} t_{n}=a\left( \frac{T}{a}\right) ^{(n/M)^{r}},\, r\geqslant 1,\, n=0,1,\cdots ,M. \end{aligned}$$

The temporal interval \([\log a,\log T]\) is divided into \(\log a=\log t_{0}<\log t_{1}<\cdots<\log t_{n-1}<\log t_{n}<\cdots <\log t_{M}=\log T\) with

$$\begin{aligned} \log t_{n}=\log a +\left( \log \frac{T}{a}\right) \left( \frac{n}{M}\right) ^{r}, \end{aligned}$$

which can be regarded as grading meshes in the sense of logarithm. Let \(\tau _n=\log t_n-\log t_{n-1}\), \(n=1,2,\cdots ,M\). Then, according to the above definition, it is easy to see that

$$\begin{aligned} \log t_{n+1}-\log t_{n}=\left( \log \frac{T}{a}\right) M^{-r}[(n+1)^{r}-n^{r}] \leqslant C\left( \log \frac{T}{a}\right) M^{-r}n^{r-1}. \end{aligned}$$
(17)
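A short sketch (ours) of this graded-in-log mesh, together with a numerical check of (17):

```python
import math
import numpy as np

# Sketch (ours): the graded-in-log mesh of this subsection and the step
# bound (17).
a, T, M, r = 1.0, 2.0, 64, 2.0
n = np.arange(M + 1)
logt = math.log(a) + math.log(T / a) * (n / M) ** r   # log t_n
tau = np.diff(logt)                                   # tau_n = log t_n - log t_{n-1}
print(tau[:4])   # the steps cluster near t = a, where the solution may be nonsmooth

# (17): log t_{n+1} - log t_n <= C log(T/a) M^{-r} n^{r-1} for n >= 1
ratio = tau[1:] / (math.log(T / a) * M ** (-r) * np.arange(1, M) ** (r - 1))
print(ratio.max())   # bounded by a modest constant C (here <= 3 for r = 2)
```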

On the above mesh, the nonuniform L1 approximation for the Caputo-Hadamard time-fractional derivative of order \(\alpha \in (0, 1)\) at the time \(t_n\, (1\leqslant n\leqslant M)\) is defined as [14]

$$\begin{aligned} _{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x, t)\Big |_{t=t_{n}}&=\frac{1}{\Gamma (1-\alpha )} \int _{a}^{t_{n}} \left( \log \frac{t_{n}}{w}\right) ^{-\alpha }\delta u(x, w) \frac{\textrm{d}w}{w} \nonumber \\&=\frac{1}{\Gamma (1-\alpha )}\sum _{i=0}^{n-1}\int _{t_{i}}^{t_{i+1}} \left( \log \frac{t_{n}}{w}\right) ^{-\alpha }\delta u(x, w) \frac{\textrm{d}w}{w} \nonumber \\&= \frac{1}{\Gamma (1-\alpha )}\sum _{i=0}^{n-1}\frac{u(x, t_{i+1}) -u(x, t_{i})}{\log \frac{t_{i+1}}{t_{i}}}\int _{t_{i}}^{t_{i+1}} \left( \log \frac{t_{n}}{w}\right) ^{-\alpha } \frac{\textrm{d}w}{w}+\Upsilon ^{n} \nonumber \\&= \frac{1}{\Gamma (2-\alpha )}\sum _{i=0}^{n-1} \frac{u(x, t_{i+1})-u(x, t_{i})}{\log \frac{t_{i+1}}{t_{i}}} \left( \left( \log \frac{t_{n}}{t_{i}}\right) ^{1-\alpha } -\left( \log \frac{t_{n}}{t_{i+1}}\right) ^{1-\alpha }\right) +\Upsilon ^{n} \nonumber \\&= \frac{1}{\Gamma (2-\alpha )}\left( b_{n,1}u(x, t_{n})-b_{n,n}u(x, t_{0}) -\sum _{i=1}^{n-1}(b_{n,i}-b_{n,i+1})u(x, t_{n-i})\right) +\Upsilon ^{n} \nonumber \\&{:}{=}\, \Lambda _{\log }^{\alpha }u(x, t_{n})+\Upsilon ^{n}, \end{aligned}$$
(18)

where the discrete coefficients are defined by

$$\begin{aligned} b_{n,i}=\frac{\left( \log \frac{t_{n}}{t_{n-i}}\right) ^{1-\alpha } -\left( \log \frac{t_{n}}{t_{n-i+1}}\right) ^{1-\alpha }}{\log \frac{t_{n-i+1}}{t_{n-i}}},\, i=1,2,\cdots ,n, \end{aligned}$$
(19)

and the local truncation error is defined by

$$\begin{aligned} \Upsilon ^{n}=\frac{1}{\Gamma (1-\alpha )}\sum _{i=0}^{n-1}\int _{t_{i}}^{t_{i+1}} \left( \log \frac{t_{n}}{w}\right) ^{-\alpha } \left( \frac{u(x, t_{i+1})-u(x, t_{i})}{\log \frac{t_{i+1}}{t_{i}}}-\delta u(x, w)\right) \frac{\textrm{d}w}{w}. \end{aligned}$$
(20)
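The following sketch (ours) assembles the coefficients (19) and evaluates \(\Lambda _{\log }^{\alpha }\) on the test function \(u(t)=(\log (t/a))^2\), whose exact Caputo-Hadamard derivative is \(\frac{2}{\Gamma (3-\alpha )}(\log (t/a))^{2-\alpha }\):

```python
import math
import numpy as np

# Sketch (ours): the nonuniform L1 operator Lambda_log^alpha in (18) with
# coefficients b_{n,i} from (19), on a graded-in-log mesh.
a, T, alpha, r, M = 1.0, 2.0, 0.4, 2.0, 256
logt = math.log(a) + math.log(T / a) * (np.arange(M + 1) / M) ** r
u = (logt - math.log(a)) ** 2

def lam_log(n):
    i = np.arange(1, n + 1)                     # b[i-1] stores b_{n,i}
    b = ((logt[n] - logt[n - i]) ** (1 - alpha)
         - (logt[n] - logt[n - i + 1]) ** (1 - alpha)) / (logt[n - i + 1] - logt[n - i])
    s = b[0] * u[n] - b[n - 1] * u[0]           # b_{n,1} u^n - b_{n,n} u^0
    if n > 1:                                   # - sum_i (b_{n,i} - b_{n,i+1}) u^{n-i}
        s -= np.dot(b[:-1] - b[1:], u[n - i[:-1]])
    return s / math.gamma(2 - alpha)

exact = 2 / math.gamma(3 - alpha) * (logt[M] - math.log(a)) ** (2 - alpha)
print(lam_log(M), exact)    # close; the gap is the truncation error (20)
```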

We now list some lemmas which have been proved in [7, 14].

Lemma 5

[14] For \(0<\alpha <1\), the coefficients \(b_{n,i}\ (1\leqslant i\leqslant n, 1\leqslant n\leqslant M)\) in (19) satisfy

$$\begin{aligned} b_{n,1}>b_{n,2}>\cdots>b_{n,i}>b_{n,i+1}>\cdots>b_{n,n}>0, \end{aligned}$$

and

$$\begin{aligned} (1-\alpha )\left( \log \frac{t_{n}}{t_{n-i}}\right) ^{-\alpha }\leqslant b_{n,i} \leqslant (1-\alpha )\left( \log \frac{t_{n}}{t_{n-i+1}}\right) ^{-\alpha }. \end{aligned}$$

Lemma 6

[14] Let \(0<\alpha <1\). If

$$\begin{aligned} \left| \delta ^{l} u(x,t)\right| \leqslant C\left( 1+\Big (\log \frac{t}{a}\Big )^{\alpha -l}\right) \end{aligned}$$
(21)

holds for \(l=0,1,2,\;\textrm{and}\;t\in (a,T]\), then one has

$$\begin{aligned} |\Upsilon ^{n}|\leqslant Cn^{-\min \{2-\alpha ,\, r\alpha \}}. \end{aligned}$$
(22)

Lemma 7

[14] For \(\beta \leqslant r\alpha \), it holds that

$$\begin{aligned} \left( \log \frac{t_{n}}{t_{n-1}}\right) ^{\alpha } \sum _{l=1}^{n}l^{-\beta }\theta _{n,l} \leqslant \frac{M^{-\beta }}{1-\alpha } \left( \log \frac{T}{a}\right) ^{\alpha },\, n=1,2,\cdots ,M, \end{aligned}$$

where \(\theta _{n,n}=1\), \(\theta _{n,l}=\sum \limits _{\rho =1}^{n-l} \left( \log \frac{t_{n-\rho }}{t_{n-\rho -1}}\right) ^{\alpha } (b_{n,\rho }-b_{n,\rho +1})\theta _{n-\rho ,l}\), \(l=1,2,\cdots ,n-1\).

Lemma 8

[7] Let f and g be integrable over the interval \([a,b]\) with f monotonically decreasing and g monotonically increasing. Then,

$$\begin{aligned} \frac{1}{b-a}\int _a^bf(x)g(x)\textrm{d}x\leqslant \left( \frac{1}{b-a}\int _a^bf(x)\textrm{d}x\right) \left( \frac{1}{b-a}\int _a^bg(x)\textrm{d}x\right) . \end{aligned}$$

In the following, we derive several important lemmas which are useful in the stability analysis and error estimates later on.

Lemma 9

For \(n=1,2,\cdots ,M\) and \(1\leqslant k\leqslant n\), one has

$$\begin{aligned} \sum _{j=k}^nb_{j,j+1-k}\theta _{n,j}=\tau _n^{-\alpha }. \end{aligned}$$
(23)

Proof

If \(n=1\), then \(k=1\) and \(b_{1,1}\theta _{1,1}=b_{1,1}=\tau _1^{-\alpha }\), which is true.

Assuming that (23) holds for \(n=1,2,\cdots ,i-1\), \(i\geqslant 2\), and \(1\leqslant k\leqslant n\), we only need to prove (23) for \(n=i\) and \(1\leqslant k\leqslant i\).

If \(n=k=i\), then \(b_{i,1}\theta _{i,i}=b_{i,1}=\tau _i^{-\alpha }\), which implies the desired equality. If \(n=i\) and \(1\leqslant k \leqslant i-1\), we deduce from the inductive hypothesis that

$$\begin{aligned} \begin{aligned} \sum _{j=k}^ib_{j,j+1-k}\theta _{i,j}&= \sum _{j=k}^{i-1}b_{j,j+1-k}\left( \sum _{l=1}^{i-j}\tau _{i-l}^\alpha (b_{i,l}-b_{i,l+1})\theta _{i-l,j}\right) +b_{i,i+1-k}\\&=\sum _{l=1}^{i-k}\tau _{i-l}^\alpha (b_{i,l}-b_{i,l+1})\left( \sum _{j=k}^{i-l}b_{j,j+1-k}\theta _{i-l,j}\right) +b_{i,i+1-k}\\&=\sum _{l=1}^{i-k}\tau _{i-l}^{-\alpha }\tau _{i-l}^\alpha (b_{i,l}-b_{i,l+1})+b_{i,i+1-k}\\&=b_{i,1}=\tau _i^{-\alpha }. \end{aligned} \end{aligned}$$
(24)

The proof is complete.
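Lemma 9 is also easy to confirm numerically; the sketch below (ours) builds \(\theta _{n,l}\) from the recursion in Lemma 7 and evaluates both sides of (23).

```python
import math
import numpy as np

# Sketch (ours): numerical check of the identity (23) in Lemma 9, with
# theta_{n,l} generated by the recursion stated in Lemma 7.
a, T, alpha, r, M = 1.0, 2.0, 0.7, 1.5, 24
logt = math.log(a) + math.log(T / a) * (np.arange(M + 1) / M) ** r
tau = np.diff(logt)                              # tau[n-1] = tau_n

def b(n, i):                                     # b_{n,i} from (19)
    return (((logt[n] - logt[n - i]) ** (1 - alpha)
             - (logt[n] - logt[n - i + 1]) ** (1 - alpha))
            / (logt[n - i + 1] - logt[n - i]))

theta = np.zeros((M + 1, M + 1))
for n in range(1, M + 1):
    theta[n, n] = 1.0
    for l in range(1, n):                        # Lemma 7 recursion
        theta[n, l] = sum(tau[n - rho - 1] ** alpha * (b(n, rho) - b(n, rho + 1))
                          * theta[n - rho, l] for rho in range(1, n - l + 1))

n, k = M, 3
lhs = sum(b(j, j + 1 - k) * theta[n, j] for j in range(k, n + 1))
print(lhs, tau[n - 1] ** (-alpha))               # both sides of (23) coincide
```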

Lemma 10

Let \(\gamma \in (0,1)\) be a constant. Then, for \(n=1,2,\cdots ,M\), it holds that

$$\begin{aligned} \tau _n^\alpha \sum _{j=1}^nj^{r(\gamma -\alpha )}\theta _{n,j} \leqslant \frac{\Gamma (1+\gamma -\alpha )}{\Gamma (1+\gamma )\Gamma (2-\alpha )}\left( \log \frac{T}{a}\right) ^\alpha \left( \frac{\log t_n-\log a}{\log T-\log a}\right) ^\gamma M^{r(\gamma -\alpha )}. \end{aligned}$$

Proof

According to the definition of the Caputo-Hadamard derivative, one has

$$\begin{aligned} \begin{aligned} _{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}\left( \log \frac{t}{a}\right) ^\gamma&=\frac{1}{\Gamma (1-\alpha )}\int _a^t\left( \log \frac{t}{s}\right) ^{-\alpha }\delta \left( \log \frac{s}{a}\right) ^\gamma \frac{\textrm{d}s}{s}\\&=\frac{\Gamma (\gamma +1)(\log t-\log a)^{\gamma -\alpha }}{\Gamma (\gamma +1-\alpha )}. \end{aligned} \end{aligned}$$
(25)

Then, applying Lemma 8 and (25), we have

$$\begin{aligned} \begin{aligned}&\quad\quad\frac{\Gamma (\gamma +1)}{\Gamma (\gamma +1-\alpha )}(\log t_j-\log a)^{\gamma -\alpha }\\&\quad =\frac{1}{\Gamma (1-\alpha )}\int _a^{t_j}(\log t_j-\log s)^{-\alpha }\delta (\log s-\log a)^\gamma \frac{\textrm{d}s}{s}\\&\quad =\frac{\gamma }{\Gamma (1-\alpha )}\sum _{k=1}^j\int _{\log t_{k-1}}^{\log t_k}(\log t_j-s)^{-\alpha }(s-\log a)^{\gamma -1}\textrm{d}s\\&\quad \leqslant \frac{1}{\Gamma (2-\alpha )}\sum _{k=1}^j\left( (\log t_j-\log t_{k-1})^{1-\alpha }-(\log t_j-\log t_{k})^{1-\alpha }\right) \\&\qquad \times \frac{(\log t_k-\log a)^\gamma -(\log t_{k-1}-\log a)^\gamma }{\tau _k}\\&\quad =\frac{1}{\Gamma (2-\alpha )}\sum _{k=1}^jb_{j,j+1-k}\left( (\log t_k-\log a)^\gamma -(\log t_{k-1}-\log a)^\gamma \right) . \end{aligned} \end{aligned}$$
(26)

Utilizing Lemma 9, we can get from (26) that

$$\begin{aligned} \begin{aligned}&\quad\quad\frac{\Gamma (\gamma +1)}{\Gamma (\gamma +1-\alpha )}\sum _{j=1}^n(\log t_j-\log a)^{\gamma -\alpha }\theta _{n,j}\\&\quad \leqslant \frac{1}{\Gamma (2-\alpha )}\sum _{j=1}^n\theta _{n,j}\sum _{k=1}^jb_{j,j+1-k}\left( (\log t_k-\log a)^\gamma -(\log t_{k-1}-\log a)^\gamma \right) \\&\quad =\frac{1}{\Gamma (2-\alpha )}\sum _{k=1}^n\left( (\log t_k-\log a)^\gamma -(\log t_{k-1}-\log a)^\gamma \right) \left( \sum _{j=k}^n\theta _{n,j}b_{j,j+1-k}\right) \\&\quad =\frac{1}{\tau _n^\alpha \Gamma (2-\alpha )}(\log t_n-\log a)^\gamma , \end{aligned} \end{aligned}$$
(27)

which implies

$$\begin{aligned} \tau _n^\alpha \sum _{j=1}^nj^{r(\gamma -\alpha )}\theta _{n,j} \leqslant \frac{\Gamma (\gamma +1-\alpha )}{\Gamma (\gamma +1)\Gamma (2-\alpha )}\left( \log \frac{T}{a}\right) ^\alpha M^{r(\gamma -\alpha )} \left( \frac{\log t_n-\log a}{\log T-\log a}\right) ^\gamma . \end{aligned}$$

This completes the proof.

Corollary 1

The following relation holds:

$$\begin{aligned} \tau _n^\alpha \sum _{j=1}^n\theta _{n,j}\leqslant \frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )\Gamma (2-\alpha )}. \end{aligned}$$
(28)

Proof

Taking \(\gamma =\alpha \) in Lemma 10 gives (28).

Corollary 2

Let \(l_M=\frac{1}{\log M}\) and \(m^\star =\min \{2-\alpha ,\,r\alpha \}\). Then, for \(r\leqslant 2(2-\alpha )\) and \(M\geqslant 8\), one has

$$\begin{aligned} \begin{aligned} \tau _n^\alpha \sum _{j=1}^nj^{-m^\star }\theta _{n,j}&\leqslant \frac{{\text{e}}^r\Gamma (1+l_M-m^\star /r)}{\Gamma (1+l_M+\alpha -m^\star /r)\Gamma (2-\alpha )}\left( \log \frac{T}{a}\right) \\&\quad \times \left( \frac{\log t_n-\log a}{\log T-\log a}\right) ^{l_M+\alpha -m^\star /r}M^{-m^\star }. \end{aligned} \end{aligned}$$
(29)

Proof

Denote

$$\begin{aligned} \sigma =l_M+\alpha -\frac{m^\star }{r}. \end{aligned}$$

From [3, Corollary 5.2], it follows that \(\sigma \in (0,1)\) for \(r\leqslant 2(2-\alpha )\) and \(M\geqslant 8\). Therefore, applying Lemma 10 with \(\gamma =\sigma \), we have the estimate

$$\begin{aligned} \begin{aligned} \tau _n^\alpha \sum _{j=1}^nj^{rl_M-m^\star }\theta _{n,j}\leqslant\, &\frac{\Gamma (1+l_M-m^\star /r)}{\Gamma (1+l_M+\alpha -m^\star /r)\Gamma (2-\alpha )} \left( \log \frac{T}{a}\right) \\&\times \left( \frac{\log t_n-\log a}{\log T-\log a}\right) ^\sigma M^{-m^\star +rl_M}. \end{aligned} \end{aligned}$$
(30)

Noting that \(1\leqslant j^{rl_M}\leqslant M^{rl_M}={\text{e}}^r\) and using (30), we complete the proof.

Lemma 11

(Gronwall Inequality II) Assume that the sequences \(\{\xi ^n\}_{n=1}^\infty \) and \(\{\eta ^n\}_{n=1}^\infty \) are nonnegative and the grid function \(\{w^n\!{:}\,n=0,1,\cdots ,M\}\) satisfies \(w^0\geqslant 0\) and

$$\begin{aligned} (\Lambda _{\log }^{\alpha }w^n)w^n\leqslant \xi ^nw^n+(\eta ^n)^2,\,n=1,2,\cdots ,M. \end{aligned}$$
(31)

Then,

$$\begin{aligned} w^n\leqslant w^0+\tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j}(\xi ^j+\eta ^j)+\max _{1\leqslant j\leqslant n}\{\eta ^j\},\,n=0,1,\cdots , M. \end{aligned}$$
(32)

Proof

We prove this lemma by mathematical induction. When \(n=0\), (32) obviously holds.

Suppose that the following inequalities:

$$\begin{aligned} w^n\leqslant w^0+\tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j}(\xi ^j+\eta ^j)+\max _{1\leqslant j\leqslant n}\{\eta ^j\},\,n=0,1,\cdots , k-1 \end{aligned}$$
(33)

hold. Then, we need to prove

$$\begin{aligned} w^k\leqslant w^0+\tau _k^\alpha \Gamma (2-\alpha )\sum _{j=1}^k\theta _{k,j}(\xi ^j+\eta ^j)+\max _{1\leqslant j\leqslant k}\{\eta ^j\}. \end{aligned}$$
(34)

If \(\displaystyle w^k\leqslant \max _{1\leqslant j\leqslant k}\{\eta ^j\}\), from the nonnegativity of \(w^0\), \(\tau _k\), \(\theta _{k,j}\), \(\xi ^k\), and \(\eta ^k\), we can directly obtain (34). If \(\displaystyle w^k> \max _{1\leqslant j \leqslant k}\{\eta ^j\}\), we can deduce from (31) that

$$\begin{aligned} (\Lambda _{\log }^{\alpha }w^k)w^k\leqslant \xi ^kw^k+\eta ^kw^k, \end{aligned}$$

which implies that

$$\begin{aligned} \Lambda _{\log }^{\alpha }w^k\leqslant \xi ^k+\eta ^k. \end{aligned}$$

Using the definition of the operator \(\Lambda _{\log }^{\alpha }\), we get

$$\begin{aligned} \frac{b_{k,1}}{\Gamma (2-\alpha )}w^k-\frac{b_{k,k}}{\Gamma (2-\alpha )}w^0 +\frac{1}{\Gamma (2-\alpha )}\sum _{i=1}^{k-1}(b_{k,i+1}-b_{k,i})w^{k-i}\leqslant \xi ^k+\eta ^k. \end{aligned}$$

Noting that \(b_{k,1}=\tau _k^{-\alpha }\), we obtain

$$\begin{aligned} w^k \leqslant \tau _k^\alpha \left( (\xi ^k+\eta ^k)\Gamma (2-\alpha )+b_{k,k}w^0+\sum _{i=1}^{k-1}(b_{k,i}-b_{k,i+1})w^{k-i}\right) . \end{aligned}$$
(35)

By the mathematical induction, we have

$$\begin{aligned} \begin{aligned} w^k&\leqslant \tau _k^\alpha \Bigg[(\xi ^k+\eta ^k)\Gamma (2-\alpha )+b_{k,k}w^0+\sum _{i=1}^{k-1}(b_{k,i}-b_{k,i+1})\\&\quad \times \Big (w^0 +\tau _{k-i}^\alpha \Gamma (2-\alpha )\sum _{j=1}^{k-i}\theta _{k-i,j}(\xi ^j+\eta ^j)+\max _{1\leqslant j\leqslant k-i}\{\eta ^j\}\Big )\Bigg]\\&\leqslant \tau _k^\alpha \Bigg[b_{k,1}w^0+(\xi ^k+\eta ^k)\Gamma (2-\alpha )+\Gamma (2-\alpha )\sum _{j=1}^{k-1}(\xi ^j+\eta ^j)\\&\quad \times \Big (\sum _{i=1}^{k-j}\tau _{k-i}^\alpha (b_{k,i}-b_{k,i+1})\theta _{k-i,j}\Big )\Bigg] +\tau _k^\alpha (b_{k,1}-b_{k,k})\max _{1\leqslant j\leqslant k-1}\{\eta ^j\}\\&\leqslant w^0+\tau _k^\alpha \Gamma (2-\alpha )\sum _{j=1}^k\theta _{k,j}(\xi ^j+\eta ^j)+\max _{1\leqslant j\leqslant k}\{\eta ^j\}. \end{aligned} \end{aligned}$$
(36)

The proof is complete.

The above Gronwall Inequality II (see Lemma 11) is useful for numerical analysis if the nonuniform L1 formula in the logarithm sense is utilized.

3 Application to the Caputo-Hadamard Time Fractional Diffusion Equation

In this section, we reconsider the LDG method coupled with the L1 time discretization, in both the uniform and nonuniform settings (for the nonuniform case, see [14]), for solving the Caputo-Hadamard time fractional diffusion equation (3). Let us first introduce some notations commonly used in the LDG method.

3.1 Notations and Projections

Let \(\mathcal {T}_h=\{I_j\}_{j=1}^N\) be a quasi-uniform partition of the space domain \(\varOmega =(0,L)\), where each cell \(I_j=(x_{j-1/2},x_{j+1/2})\) has length \(h_j=x_{j+1/2}-x_{j-1/2}\). The center of an element is \(x_j=(x_{j-1/2}+x_{j+1/2})/2\) and the maximum cell length is \(h=\max _{1\leqslant j\leqslant N}h_j\). Let \(\mathcal {P}^k(I_j)\) be the space of polynomials of degree at most \(k\geqslant 0\) on \(I_j\). The finite element space associated with the mesh is of the form

$$\begin{aligned} V_{h}=\left\{ v\in L^{2}(\varOmega){:}\, v|_{I_{j}}\in \mathcal {P}^{k}(I_{j}),\,j=1,2,\cdots ,N\right\} . \end{aligned}$$

We denote by \(v^+_{j+1/2}\) and \(v^-_{j+1/2}\) the values of v at \(x_{j+1/2}\), from the right cell \(I_{j+1}\) and from the left cell \(I_j\), respectively.

In what follows, we will use two Gauss-Radau projections \(\mathscr {P}_{h}^{\pm }\) from the Sobolev space \(H^1(\varOmega )\) onto the finite element space \(V_h\), defined as in [2, 11, 15,16,17,18,19,20]. Given a function \(q\in H^1(\varOmega )\) and an arbitrary subinterval \(I_j=(x_{j-1/2},x_{j+1/2})\), the restrictions of \(\mathscr {P}_{h}^{\pm } q\) to \(I_j\) are defined as the elements of \(\mathcal {P}^k(I_j)\) that satisfy

$$\begin{aligned} \int _{I_{j}}(\mathscr {P}_{h}^{+}q-q)v_{h}\textrm{d}x=0,\, \forall \, v_{h}\in \mathcal {P}^{k-1}(I_{j}),\ \ \ \ (\mathscr {P}_{h}^{+}q)_{j-\frac{1}{2}}^{+}=q(x_{j-\frac{1}{2}}^{+}), \end{aligned}$$
(37)
$$\begin{aligned} \int _{I_{j}}(\mathscr {P}_{h}^{-}q-q)v_{h}\textrm{d}x=0,\, \forall \, v_{h}\in \mathcal {P}^{k-1}(I_{j}),\ \ \ \ (\mathscr {P}_{h}^{-}q)_{j+\frac{1}{2}}^{-}=q(x_{j+\frac{1}{2}}^{-}). \end{aligned}$$
(38)

For the projections mentioned above, it can be obtained from the literature [4, 28, 29] that

$$\begin{aligned} \Vert \mathscr {P}_{h}^{\pm }q-q\Vert \leqslant Ch^{k+1}, \end{aligned}$$
(39)

where C is a positive constant dependent on q and independent of the spatial step h. Here, we use \((\cdot ,\cdot )_\varOmega \) to represent the standard \(L^2\)-inner product on \(\varOmega \) and \(\Vert \cdot \Vert \) to represent the \(L^2\)-norm.
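For concreteness, the following sketch (ours) realizes \(\mathscr {P}_{h}^{-}\) for \(k=1\): on each cell the projection matches the cell average (the \(\mathcal {P}^{0}\) moment in (38)) and the right endpoint value, which determines the linear polynomial uniquely; the observed \(L^2\) error then decreases at the rate \(h^{k+1}=h^{2}\), in agreement with (39).

```python
import math
import numpy as np
from scipy.integrate import quad

# Sketch (ours): the Gauss-Radau projection P_h^- of (38) for k = 1.
q = np.sin

def proj_minus_error(N):
    xs = np.linspace(0.0, 1.0, N + 1)
    err2 = 0.0
    for xl, xr in zip(xs[:-1], xs[1:]):
        h = xr - xl
        xc = 0.5 * (xl + xr)
        c0 = quad(q, xl, xr)[0] / h          # cell average of q (P^0 moment)
        c1 = 2.0 * (q(xr) - c0) / h          # enforce p(xr^-) = q(xr)
        err2 += quad(lambda s: (c0 + c1 * (s - xc) - q(s)) ** 2, xl, xr)[0]
    return math.sqrt(err2)

errs = [proj_minus_error(N) for N in (8, 16, 32, 64)]
rates = [math.log2(e1 / e2) for e1, e2 in zip(errs, errs[1:])]
print(errs)
print(rates)   # approach k + 1 = 2, in agreement with (39)
```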

3.2 Uniform L1/LDG Scheme

In the current subsection, we assume that the solution \(u(x,t)\) of equation (3) has the following smoothness property:

$$\begin{aligned} u(x,t)\in C^2\left( [0,T];\,H^{k+1}(\varOmega )\right) , \end{aligned}$$
(40)

which is reasonable since such smooth solutions can indeed be constructed.

We will apply the L1 method on uniform meshes introduced in Sect. 2.1 to the temporal discretization and the LDG method to the spatial discretization. To define the LDG scheme, we rewrite (3) as a system containing only first-order derivatives:

$$\begin{aligned} \left\{ \begin{aligned}&_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u-p_{x}=f, \\&p=u_{x}. \end{aligned}\right. \end{aligned}$$
(41)

Then, the weak form of (41) at \(t_n\) is formulated as

$$\begin{aligned} \left\{ \begin{aligned}&(_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u^{n}, v)+(p^{n}, v_{x}) -\sum _{j=1}^{N}\left( (p^{n}v^{-})_{j+\frac{1}{2}} -(p^{n}v^{+})_{j-\frac{1}{2}}\right) =(f^{n}, v), \\&(p^{n}, w)+(u^{n}, w_{x})-\sum _{j=1}^{N} \left( (u^{n}w^{-})_{j+\frac{1}{2}} -(u^{n}w^{+})_{j-\frac{1}{2}}\right) =0, \end{aligned}\right. \end{aligned}$$
(42)

where \(v,w\in H^1(\varOmega )\) are test functions.

The uniform L1/LDG scheme to solve (41) is as follows: find \(u_{h}^{n}, p_{h}^{n} \in V_{h}\) such that for all test functions \( v_{h}, w_{h} \in V_{h},\)

$$\begin{aligned} \left\{ \begin{aligned}&(\delta _{\log }^{\alpha }u_{h}^{n}, v_{h})+\big (p_{h}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( (\widehat{p}_{h}^{n}v_{h}^{-})_{j+\frac{1}{2}} -(\widehat{p}_{h}^{n}v_{h}^{+})_{j-\frac{1}{2}}\right) =(f^{n}, v_{h}), \\&(p_{h}^{n}, w_{h})+\big (u_{h}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( (\widehat{u}_{h}^{n}w_{h}^{-})_{j+\frac{1}{2}} -(\widehat{u}_{h}^{n}w_{h}^{+})_{j-\frac{1}{2}}\right) =0. \end{aligned}\right. \end{aligned}$$
(43)

The “hat” terms in (43), resulting from the integration by parts, are the so-called “numerical fluxes”: single-valued functions defined at the cell boundaries, which should be designed to ensure stability. Here, we take the alternating numerical fluxes,

$$\begin{aligned} \widehat{u}_{h}^{n}=(u_{h}^{n})^{-},\ \ \ \widehat{p}_{h}^{n}=(p_{h}^{n})^{+}. \end{aligned}$$
(44)

Theorem 1

(Numerical stability) The solution \(u_h^n\) of the uniform L1/LDG scheme (43) satisfies

$$\begin{aligned} \Vert u_h^n\Vert \leqslant 2\left( \Vert u_h^0\Vert +\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{0\leqslant j\leqslant n}\Vert f^j\Vert \right) . \end{aligned}$$
(45)

Proof

Taking the test functions in (43) as \(v_h=u_h^n\), \(w_h=p_h^n\), we obtain

$$\begin{aligned} \left\{ \begin{aligned}&(\delta _{\log }^{\alpha }u_{h}^{n}, u_h^n)+\big (p_{h}^{n}, (u_h^n)_{x}\big ) -\sum _{j=1}^{N}\left( (\widehat{p}_{h}^{n}(u_h^n)^{-})_{j+\frac{1}{2}} -(\widehat{p}_{h}^{n}(u_h^n)^{+})_{j-\frac{1}{2}}\right) =(f^{n}, u_h^n), \\&(p_{h}^{n}, p_h^n)+\big (u_{h}^{n}, (p_h^n)_{x}\big )-\sum _{j=1}^{N} \left( (\widehat{u}_{h}^{n}(p_h^n)^{-})_{j+\frac{1}{2}} -(\widehat{u}_{h}^{n}(p_h^n)^{+})_{j-\frac{1}{2}}\right) =0. \end{aligned}\right. \end{aligned}$$
(46)

Integrating by parts, we can get

$$\begin{aligned} (\delta _{\log }^{\alpha }u_{h}^{n}, u_{h}^{n}) +\Vert p_{h}^{n}\Vert ^{2}=(f^{n}, u_{h}^{n}). \end{aligned}$$
(47)

By the definition of \(\delta _{\log }^{\alpha }\) in (6), we have

$$\begin{aligned} \begin{aligned} (\delta _{\log }^{\alpha }u_{h}^{n}, u_h^n)&=\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )} \left[ (u_h^n,u_h^n)-\sum _{j=1}^{n-1}(a_{n-j-1}-a_{n-j})(u_h^j,u_h^n)-a_{n-1}(u_h^0,u_h^n)\right] \\&\geqslant (\delta _{\log }^{\alpha }\Vert u_{h}^{n}\Vert )\Vert u_h^n\Vert . \end{aligned} \end{aligned}$$
(48)

Substituting (48) into (47) and using the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} \delta _{\log }^{\alpha }\Vert u_{h}^{n}\Vert \leqslant \Vert f^n\Vert . \end{aligned}$$
(49)

Now, employing the Gronwall Inequality I (see Lemma 4) with \(\lambda _1=\lambda _2=0\), \(w^n=\Vert u_h^n\Vert \), and \(g^n=\Vert f^n\Vert \), there holds

$$\begin{aligned} \Vert u_h^n\Vert \leqslant 2\left( \Vert u_h^0\Vert +\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{0\leqslant j\leqslant n}\Vert f^j\Vert \right) . \end{aligned}$$

This finishes the proof of the theorem.

Theorem 2

(Error estimate) Let \(u(x,t_n)\) be the exact solution of (3) and \(u_h^n\) the numerical solution of the uniform L1/LDG scheme (43). Assume that \(u(x,t)\) satisfies the regularity assumption (40). Then, the following error estimate holds:

$$\begin{aligned} \Vert u^{n}-u_{h}^{n}\Vert \leqslant C(h^{k+1}+\tau ^{2-\alpha }), \ \ 1\leqslant n\leqslant M, \end{aligned}$$
(50)

where the bounding constant C is independent of \(\tau \) and h.

Proof

For simplicity, we first split the error into projection error and difference error, i.e.,

$$\begin{aligned} e_{u}^{n}=u^{n}-u_{h}^{n}=(u^{n}-\mathscr {P}_{h}^{-}u^{n}) +(\mathscr {P}_{h}^{-}u^{n}-u_{h}^{n}){:}{=}\,\eta _{u}^{n}+\xi _{u}^{n}, \end{aligned}$$
(51)
$$\begin{aligned} e_{p}^{n}=p^{n}-p_{h}^{n}=(p^{n}-\mathscr {P}_{h}^{+}p^{n}) +(\mathscr {P}_{h}^{+}p^{n}-p_{h}^{n}){:}{=}\,\eta _{p}^{n}+\xi _{p}^{n}. \end{aligned}$$
(52)

The weak form (42) and the fully discrete scheme (43) lead to the following error equation:

$$\begin{aligned} \left\{ \begin{aligned}&(_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u^{n}-\delta _{\log }^{\alpha }u_{h}^{n}, v_{h}) +\big (e_{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((e_{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((e_{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) =0, \\&(e_{p}^{n}, w_{h})+\big (e_{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((e_{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((e_{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) =0. \end{aligned}\right. \end{aligned}$$
(53)

Using the error relations (51) and (52), we can rewrite (53) as

$$\begin{aligned} \left\{ \begin{aligned}&(\delta _{\log }^{\alpha }\xi _{u}^{n}, v_{h}) +\big (\xi _{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((\xi _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =(R^{n}, v_{h})-(\delta _{\log }^{\alpha }\eta _{u}^{n}, v_{h}) -\big (\eta _{p}^{n}, (v_{h})_{x}\big ) +\sum _{j=1}^{N}\left( \big ((\eta _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\eta _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) , \\&(\xi _{p}^{n}, w_{h})+\big (\xi _{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((\xi _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =-(\eta _{p}^{n}, w_{h})-\big (\eta _{u}^{n}, (w_{h})_{x}\big )+\sum _{j=1}^{N} \left( \big ((\eta _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\eta _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) , \end{aligned}\right. \end{aligned}$$
(54)

where \(R^n\) is the truncation error defined in (4). Then, according to the definition of the projection operator in (37) and (38), we can get

$$\begin{aligned} \left\{ \begin{aligned}&(\delta _{\log }^{\alpha }\xi _{u}^{n}, v_{h}) +\big (\xi _{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((\xi _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =(R^{n}, v_{h})-(\delta _{\log }^{\alpha }\eta _{u}^{n}, v_{h}), \\&(\xi _{p}^{n}, w_{h})+\big (\xi _{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((\xi _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =-(\eta _{p}^{n}, w_{h}). \end{aligned}\right. \end{aligned}$$
(55)

Taking the test functions in (55) as \(v_h=\xi _{u}^{n}\), \(w_h=\xi _p^n\), and integrating by parts, we have

$$\begin{aligned} (\delta _{\log }^{\alpha }\xi _{u}^{n}, \xi _{u}^{n})+\Vert \xi _{p}^{n}\Vert ^{2} =(R^{n}, \xi _{u}^{n})-(\delta _{\log }^{\alpha }\eta _{u}^{n}, \xi _{u}^{n}) -(\eta _{p}^{n}, \xi _{p}^{n}). \end{aligned}$$
(56)

Note that

$$\begin{aligned} \begin{aligned} (\delta _{\log }^{\alpha }\xi _{u}^{n}, \xi _{u}^{n})&=\frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )} \left( \xi _u^n-\sum _{j=1}^{n-1}(a_{n-j-1}-a_{n-j})\xi _{u}^j-a_{n-1}\xi _{u}^0,\xi _{u}^n\right) \\&\geqslant \frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )}\Bigg (\Vert \xi _u^n\Vert ^2-\sum _{j=1}^{n-1}(a_{n-j-1}-a_{n-j})\frac{\Vert \xi _u^j\Vert ^2+\Vert \xi _u^n\Vert ^2}{2}\\&\quad -a_{n-1}\frac{\Vert \xi _u^0\Vert ^2+\Vert \xi _u^n\Vert ^2}{2}\Bigg )\\&=\frac{1}{2}\delta _{\log }^{\alpha }\Vert \xi _{u}^{n}\Vert ^2. \end{aligned} \end{aligned}$$
(57)

Substituting (57) into (56) and using the interpolation property (39) and Lemma 1, we get

$$\begin{aligned} \delta _{\log }^{\alpha }\Vert \xi _u^n\Vert ^2\leqslant C_1\Vert \xi _u^n\Vert ^2+C_2(h^{2k+2}+\tau ^{4-2\alpha }). \end{aligned}$$

Then, applying the Gronwall Inequality I (see Lemma 4) with \(\lambda _1=C_1(>0)\), \(\lambda _2=0\), \(w^n=\Vert \xi _u^n\Vert ^2\), and \(g^n=C_2(h^{2k+2}+\tau ^{4-2\alpha })\), there exists a positive constant \(\tau ^*=\root \alpha \of {\frac{1}{2\Gamma (2-\alpha )C_1}}\) such that, when \(\tau \leqslant \tau ^*\),

$$\begin{aligned} \Vert \xi _u^n\Vert ^2\leqslant 2E_\alpha \left( 2C_1(\log t_n-\log a)^\alpha \right) \left( \Vert \xi _u^0\Vert ^2+C_2\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}(h^{2k+2}+\tau ^{4-2\alpha })\right) . \end{aligned}$$
(58)

Consequently, (50) follows by combining (58), the interpolation property (39), and the triangle inequality. This completes the proof.

Remark 1

It is worth mentioning that the bounds in Theorems 1 and 2 are \(\alpha \)-robust, that is, when \(\alpha \rightarrow 1^-\) (i.e., \(\lim _{\alpha \rightarrow 1^-} \,_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x,t)=t\frac{\partial u(x, t)}{\partial t}\)), the coefficients on the right-hand sides of (45) and (50) do not blow up, which means that Theorems 1 and 2 remain valid as \(\alpha \rightarrow 1^-\). This fact essentially shows the consistency with the case \(\alpha =1\).

According to [14, Theorem 3.1], if the initial value \(u_a(x)\) and the source term \(f(x,t)\) satisfy some regularity conditions, then the solution \(u(x,t)\) of problem (3) has the following regularity estimate:

$$\begin{aligned} |\delta ^ku(x,t)|\leqslant C\left( 1+\left( \log \frac{t}{a}\right) ^{\alpha -k}\right) ,\,k=0,1,2. \end{aligned}$$
(59)

The bound (59) implies that the solution \(u(x,t)\) may exhibit low regularity at \(t=a\), i.e., \(\left|\frac{\partial u(x,t)}{\partial t}\right|\) and/or \(\left|\frac{\partial ^2 u(x,t)}{\partial t^2}\right|\) blow up as \(t\rightarrow a^+\), although \(u(\cdot ,t)\) is continuous on \([a,T]\). To ensure the numerical accuracy of the algorithm in the time direction, Li et al. [14] employed the L1 method on nonuniform meshes to construct a nonuniform L1/LDG scheme for the Caputo-Hadamard fractional sub-diffusion equation (3). However, the bounds obtained there in both the stability analysis and the error estimate are \(\alpha \)-nonrobust [14]. In the following subsection, we give a new proof, using the established Gronwall Inequality II (see Lemma 11), which shows that the nonuniform L1/LDG method is \(\alpha \)-robust.

3.3 Nonuniform L1/LDG Scheme

As stated in [14], the nonuniform L1/LDG scheme to solve (41) can be written in the following form: find \(u_{h}^{n}, p_{h}^{n} \in V_{h}\) such that for all test functions \( v_{h}, w_{h} \in V_{h},\)

$$\begin{aligned} \left\{ \begin{aligned}&(\Lambda _{\log }^{\alpha }u_{h}^{n}, v_{h})+\big (p_{h}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( (\widehat{p}_{h}^{n}v_{h}^{-})_{j+\frac{1}{2}} -(\widehat{p}_{h}^{n}v_{h}^{+})_{j-\frac{1}{2}}\right) =(f^{n}, v_{h}), \\&(p_{h}^{n}, w_{h})+\big (u_{h}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( (\widehat{u}_{h}^{n}w_{h}^{-})_{j+\frac{1}{2}} -(\widehat{u}_{h}^{n}w_{h}^{+})_{j-\frac{1}{2}}\right) =0. \end{aligned}\right. \end{aligned}$$
(60)

Here, \(\Lambda _{\log }^{\alpha }u_{h}^{n}\) denotes the nonuniform L1 approximation of the Caputo-Hadamard time-fractional derivative \(_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u^{n}\) defined in (18). The numerical fluxes are still chosen as in (44).
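As in Sect. 3.2, a sketch (ours) of the time marching induced by (60) may help; the LDG space discretization is again replaced by a finite difference Laplacian for brevity, and the data are those of Example 2 below.

```python
import math
import numpy as np

# Sketch (ours): time marching with the nonuniform L1 operator (18).
# The solution of Example 2 has weak regularity at t = a, and
# r = (2 - alpha)/alpha is the grading used in Table 4 below.
a, T, alpha, M, N = 1.0, 2.0, 0.5, 200, 200
r = (2 - alpha) / alpha
logt = math.log(a) + math.log(T / a) * (np.arange(M + 1) / M) ** r
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
       + np.diag(np.ones(N - 2), -1)) / h ** 2
g2 = math.gamma(2 - alpha)

def f(lt):   # source term of Example 2 at log(t/a) = lt
    return ((math.gamma(alpha + 1) + 2 / math.gamma(3 - alpha) * lt ** (2 - alpha)
             + 4 * math.pi ** 2 * (lt ** alpha + lt ** 2))
            * np.sin(2 * math.pi * x[1:-1]))

U = [np.zeros(N - 1)]
for n in range(1, M + 1):
    i = np.arange(1, n + 1)                 # b[i-1] stores b_{n,i} from (19)
    b = (((logt[n] - logt[n - i]) ** (1 - alpha)
          - (logt[n] - logt[n - i + 1]) ** (1 - alpha))
         / (logt[n - i + 1] - logt[n - i]))
    rhs = f(logt[n] - math.log(a)) + b[n - 1] / g2 * U[0]
    rhs += sum((b[k - 1] - b[k]) / g2 * U[n - k] for k in range(1, n))
    A = b[0] / g2 * np.eye(N - 1) - lap     # b_{n,1}/Gamma(2-alpha) I - Laplacian
    U.append(np.linalg.solve(A, rhs))

lt = math.log(T / a)
exact = (lt ** alpha + lt ** 2) * np.sin(2 * math.pi * x[1:-1])
print(np.max(np.abs(U[-1] - exact)))
```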

Theorem 3

(Numerical stability) The solution \(u_h^n\) of the nonuniform L1/LDG scheme (60) satisfies

$$\begin{aligned} \Vert u_h^n\Vert \leqslant 2\left( \Vert u_h^0\Vert +\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{0\leqslant j\leqslant n}\Vert f^j\Vert \right) . \end{aligned}$$
(61)

Proof

Taking the test functions in (60) as \(v_h=u_h^{n}\), \(w_h=p_h^n\), and integrating by parts, we have

$$\begin{aligned} (\Lambda _{\log }^{\alpha }u_{h}^{n}, u_{h}^{n}) +\Vert p_{h}^{n}\Vert ^{2}=(f^{n}, u_{h}^{n}). \end{aligned}$$
(62)

Employing the definition of \(\Lambda _{\log }^{\alpha }\) and Lemma 5, it is easy to see that

$$\begin{aligned} \begin{aligned} (\Lambda _{\log }^{\alpha }u_{h}^{n}, u_{h}^{n})&=\frac{b_{n,1}}{\Gamma (2-\alpha )}(u_h^n,u_h^n)+\frac{1}{\Gamma (2-\alpha )}\sum _{i=1}^{n-1}(b_{n,i+1}-b_{n,i})(u_h^{n-i},u_h^n)\\&\quad -\frac{b_{n,n}}{\Gamma (2-\alpha )}(u_h^0,u_h^n)\\&\geqslant \frac{b_{n,1}}{\Gamma (2-\alpha )}\Vert u_h^n\Vert \Vert u_h^n\Vert +\frac{1}{\Gamma (2-\alpha )}\sum _{i=1}^{n-1}(b_{n,i+1}-b_{n,i})\Vert u_h^{n-i}\Vert \Vert u_h^n\Vert \\&\quad -\frac{b_{n,n}}{\Gamma (2-\alpha )}\Vert u_h^0\Vert \Vert u_h^n\Vert \\&=(\Lambda _{\log }^{\alpha }\Vert u_h^n\Vert )\Vert u_h^n\Vert . \end{aligned} \end{aligned}$$
(63)

Substituting the above conclusions into (62) and using the Cauchy-Schwarz inequality yield

$$\begin{aligned} (\Lambda _{\log }^{\alpha }\Vert u_h^n\Vert )\Vert u_h^n\Vert \leqslant \Vert f^n\Vert \Vert u_h^n\Vert . \end{aligned}$$
(64)

Combine (64) and the Gronwall Inequality II (see Lemma 11) to obtain

$$\begin{aligned} \Vert u_h^n\Vert \leqslant \Vert u_h^0\Vert +\tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j}\Vert f^j\Vert . \end{aligned}$$
(65)

Then, by Corollary 1, one has

$$\begin{aligned} \Vert u_h^n\Vert \leqslant \Vert u_h^0\Vert +\frac{(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}\max _{1\leqslant j\leqslant n}\Vert f^j\Vert . \end{aligned}$$

This ends the proof.

Theorem 4

(Error estimate) Let \(u(x,t_n)\) be the exact solution of (3) and \(u_h^n\) the numerical solution of the nonuniform L1/LDG scheme (60). Assume that \(u(x,t)\) satisfies the regularity assumption (59). Then, for \(r\leqslant 2(2-\alpha )/\alpha \) and \(M\geqslant 8\), the following error estimate holds:

$$\begin{aligned} \Vert u^{n}-u_{h}^{n}\Vert \leqslant C(h^{k+1}+M^{-\min \{r\alpha , 2-\alpha \}}),\, 1\leqslant n\leqslant M, \end{aligned}$$
(66)

where the bounding constant C is independent of M and h.

Proof

Similar to the uniform case, we denote

$$\begin{aligned} e_{u}^{n}=u^{n}-u_{h}^{n}=(u^{n}-\mathscr {P}_{h}^{-}u^{n}) +(\mathscr {P}_{h}^{-}u^{n}-u_{h}^{n}){:}{=}\,\eta _{u}^{n}+\xi _{u}^{n}, \end{aligned}$$
(67)
$$\begin{aligned} e_{p}^{n}=p^{n}-p_{h}^{n}=(p^{n}-\mathscr {P}_{h}^{+}p^{n}) +(\mathscr {P}_{h}^{+}p^{n}-p_{h}^{n}){:}{=}\,\eta _{p}^{n}+\xi _{p}^{n}. \end{aligned}$$
(68)

Subtracting (60) from (42) gives us the error equation

$$\begin{aligned} \left\{ \begin{aligned}&(_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u^{n}-\Lambda _{\log }^{\alpha }u_{h}^{n}, v_{h}) +\big (e_{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((e_{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((e_{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) =0, \\&(e_{p}^{n}, w_{h})+\big (e_{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((e_{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((e_{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) =0. \end{aligned}\right. \end{aligned}$$
(69)

Noticing the error decomposition (67) and (68), we have

$$\begin{aligned} \left\{ \begin{aligned}&(\Lambda _{\log }^{\alpha }\xi _{u}^{n}, v_{h}) +\big (\xi _{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((\xi _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& = \ (\Upsilon ^{n}, v_{h})-(\Lambda _{\log }^{\alpha }\eta _{u}^{n}, v_{h}) -\big (\eta _{p}^{n}, (v_{h})_{x}\big ) +\sum _{j=1}^{N}\left( \big ((\eta _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\eta _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) , \\&(\xi _{p}^{n}, w_{h})+\big (\xi _{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((\xi _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\&= \ -(\eta _{p}^{n}, w_{h})-\big (\eta _{u}^{n}, (w_{h})_{x}\big )+\sum _{j=1}^{N} \left( \big ((\eta _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\eta _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) , \end{aligned}\right. \end{aligned}$$
(70)

where \(\Upsilon ^n\) is the truncation error defined in (20). Then, using the projection properties (37) and (38), we can get from (70) that

$$\begin{aligned} \left\{ \begin{aligned}&(\Lambda _{\log }^{\alpha }\xi _{u}^{n}, v_{h}) +\big (\xi _{p}^{n}, (v_{h})_{x}\big ) -\sum _{j=1}^{N}\left( \big ((\xi _{p}^{n})^{+}v_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{p}^{n})^{+}v_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =(\Upsilon ^{n}, v_{h})-(\Lambda _{\log }^{\alpha }\eta _{u}^{n}, v_{h}), \\&(\xi _{p}^{n}, w_{h})+\big (\xi _{u}^{n}, (w_{h})_{x}\big )-\sum _{j=1}^{N} \left( \big ((\xi _{u}^{n})^{-}w_{h}^{-}\big )_{j+\frac{1}{2}} -\big ((\xi _{u}^{n})^{-}w_{h}^{+}\big )_{j-\frac{1}{2}}\right) \\& =-(\eta _{p}^{n}, w_{h}). \end{aligned}\right. \end{aligned}$$
(71)

Now, taking the test functions in (71) as \(v_h=\xi _{u}^{n}\), \(w_h=\xi _p^n\), and integrating by parts, we obtain

$$\begin{aligned} (\Lambda _{\log }^{\alpha }\xi _{u}^{n}, \xi _{u}^{n})+\Vert \xi _{p}^{n}\Vert ^{2} =(\Upsilon ^{n}, \xi _{u}^{n})-(\Lambda _{\log }^{\alpha }\eta _{u}^{n}, \xi _{u}^{n}) -(\eta _{p}^{n}, \xi _{p}^{n}). \end{aligned}$$
(72)

From (63), we know that \((\Lambda _{\log }^{\alpha }\xi _{u}^{n}, \xi _{u}^{n})\) satisfies

$$\begin{aligned} (\Lambda _{\log }^{\alpha }\xi _{u}^{n}, \xi _{u}^{n})\geqslant (\Lambda _{\log }^\alpha \Vert \xi _u^n\Vert )\Vert \xi _u^n\Vert . \end{aligned}$$
(73)

Substituting (73) into (72) and using the interpolation property (39), we have

$$\begin{aligned} (\Lambda _{\log }^\alpha \Vert \xi _u^n\Vert )\Vert \xi _u^n\Vert \leqslant \left( \Vert \Upsilon ^n\Vert +\Vert \Lambda _{\log }^{\alpha }\eta _{u}^{n}\Vert \right) \Vert \xi _u^n\Vert +Ch^{2k+2}. \end{aligned}$$
(74)

Then, applying the Gronwall Inequality II (see Lemma 11), we get

$$\begin{aligned} \Vert \xi _u^n\Vert \leqslant \Vert \xi _u^0\Vert +\tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j} \left( \Vert \Upsilon ^j\Vert +\Vert \Lambda _{\log }^{\alpha }\eta _{u}^{j}\Vert +Ch^{k+1}\right) +Ch^{k+1}. \end{aligned}$$
(75)

It follows from Corollary 1 that

$$\begin{aligned} \tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j}(Ch^{k+1})\leqslant \frac{Ch^{k+1}(\log t_n-\log a)^\alpha }{\Gamma (1+\alpha )}. \end{aligned}$$
(76)

From Lemma 6 and Corollary 2, when \(r\leqslant 2(2-\alpha )\) and \(M\geqslant 8\), there holds

$$\begin{aligned} \begin{aligned} \tau _n^\alpha \Gamma (2-\alpha )\sum _{j=1}^n\theta _{n,j}\Vert \Upsilon ^j\Vert&\leqslant \frac{{\text{e}}^r\Gamma (1+l_M-\min \{2-\alpha ,r\alpha \}/r)}{\Gamma (1+l_M+\alpha -\min \{2-\alpha ,r\alpha \}/r)\Gamma (2-\alpha )}\left( \log \frac{T}{a}\right) \\&\quad \times \left( \frac{\log t_n-\log a}{\log T-\log a}\right) ^{l_M+\alpha -\min \{2-\alpha ,r\alpha \}/r}M^{-\min \{2-\alpha ,r\alpha \}}. \end{aligned} \end{aligned}$$
(77)

Collecting the above estimates into (75), bounding \(\Vert \Lambda _{\log }^{\alpha }\eta _{u}^{j}\Vert \) by \(Ch^{k+1}\) (since \(\Lambda _{\log }^{\alpha }\) acts only in time and thus commutes with the projection \(\mathscr {P}_{h}^{-}\)), and noting that \(\xi _u^0=0\), we can obtain

$$\begin{aligned} \Vert \xi _u^n\Vert \leqslant C\left( h^{k+1}+M^{-\min \{2-\alpha ,r\alpha \}}\right) . \end{aligned}$$
(78)

Finally, by using the triangle inequality and the interpolation property (39) again, we can finish the proof of Theorem 4.

Remark 2

When \(\alpha \rightarrow 1^-\) (i.e., \(\lim _{\alpha \rightarrow 1^-} \,_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x,t)=t\frac{\partial u(x, t)}{\partial t}\)), the coefficients on the right-hand sides of (61) and (66) remain finite, which means that Theorems 3 and 4 remain valid as \(\alpha \rightarrow 1^-\). Such a fact shows the consistency with the case \(\alpha =1\).

4 Numerical Experiments

In this section, several numerical experiments are given to verify the error estimates obtained in Theorems 2 and 4. Denoting by \(E(M)\) and \(E(N)\) the \(L^2\)-norm errors computed with \(M\) time steps and \(N\) spatial cells, respectively (with \(M_1<M_2\) and \(N_1<N_2\) below), the temporal and spatial convergence orders are given by

$$\begin{aligned} \left\{ \begin{aligned}&\textrm{Order}_M=\frac{\log (E(M_1)/E(M_2))}{\log (M_2/M_1)} \;\;\;\mathrm{in\;time}, \\&\textrm{Order}_N=\frac{\log (E(N_1)/E(N_2))}{\log (N_2/N_1)}\;\;\;\mathrm{in\; space}. \end{aligned}\right. \end{aligned}$$
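In code, the observed orders are computed by a trivial helper (ours):

```python
import math

# Sketch (ours): the observed-order formulas above.
def observed_order(e_coarse, e_fine, m_coarse, m_fine):
    return math.log(e_coarse / e_fine) / math.log(m_fine / m_coarse)

# e.g., a (2 - alpha)-order-in-time method with alpha = 0.5:
print(observed_order(1.0e-2, 1.0e-2 / 2 ** 1.5, 64, 128))   # 1.5
```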

Example 1

Consider the following Caputo-Hadamard fractional sub-diffusion equation:

$$\begin{aligned} \left\{ \begin{aligned}&_{\text{CH}}\mathrm{{D}}^{\alpha }_{1,t}u(x,t)-\Delta u(x,t)=f(x,t), \,(x,t)\in (0,1)\times (1,2], \\&u(x,1)=0,\, x\in (0,1), \\&u(0,t)=u(1,t)=0,\, t\in (1,2], \end{aligned}\right. \end{aligned}$$
(79)

where the source term is given by

$$\begin{aligned} f(x,t)=\frac{2}{\Gamma (3-\alpha )}(\log t)^{2-\alpha }\sin (2\uppi x) +4\uppi ^{2}(\log t)^{2}\sin (2\uppi x),\,0<\alpha <1. \end{aligned}$$

The exact solution of the above equation is \(u(x,t)=(\log t)^{2}\!\sin (2\uppi x)\).

In this experiment, we study the convergence of the uniform L1/LDG scheme (43). In Tables 1 and 2, we list the \(L^2\)-norm errors and convergence orders of the scheme (43) for problem (79) at the final time \(T=2\) with different values of \(\alpha \) (\(\alpha =0.5,\,0.7,\,0.9\)). These two tables clearly show that the numerical convergence orders for \(u_h^n\) in the temporal and spatial directions are \((2-\alpha )\) and \((k+1)\), respectively, which is consistent with the theoretical prediction in Theorem 2.

Table 1 The \(L^2\)-norm errors and temporal convergence orders of Example 1 by using the scheme (43), \(N=1\,000\), \(T=2\)
Table 2 The \(L^2\)-norm errors and spatial convergence orders of Example 1 by using the scheme (43), \(M=1\,000\), \(k=1\), \(T=2\)

Example 2

Consider the following Caputo-Hadamard fractional sub-diffusion equation:

$$\begin{aligned} \left\{ \begin{aligned}&_{\text{CH}}\mathrm{{D}}^{\alpha }_{1,t}u(x,t)-\Delta u(x,t)=f(x,t), \,(x,t)\in (0,1)\times (1,2], \\&u(x,1)=0,\, x\in (0,1), \\&u(0,t)=u(1,t)=0,\, t\in (1,2], \end{aligned}\right. \end{aligned}$$
(80)

where the source term is given by

$$\begin{aligned} \begin{aligned} f(x,t)&=\left( \Gamma (\alpha +1) +\frac{2}{\Gamma (3-\alpha )}(\log t)^{2-\alpha }\right) \sin (2\uppi x)\\&\quad +\left( (\log t)^{\alpha }+(\log t)^{2}\right) 4\uppi ^{2}\sin (2\uppi x),\,0<\alpha <1. \end{aligned} \end{aligned}$$

The exact solution of the above equation is \(u(x,t)=\left( (\log t)^{\alpha }+(\log t)^{2}\right)\! \sin (2\uppi x)\).

Since the exact solution \(u(x,t)\) of the problem (80) has low regularity at the initial time, we solve this example by using the nonuniform L1/LDG scheme (60). Tables 3 and 4 display the \(L^2\)-norm errors and temporal convergence orders of this scheme at the final time \(T=2\) for different values of \(\alpha \) (\(\alpha =0.5,\,0.7,\,0.9\)). It is clear that, for the Caputo-Hadamard fractional sub-diffusion equation with a solution \(u(x,t)\) of low regularity at the starting time, if the uniform L1 formula is used for the time discretization, the scheme cannot reach the optimal convergence order (i.e., the \((2-\alpha )\)th order) in the time direction (see Table 3). In contrast, if we apply the nonuniform L1 formula to approximate the time-fractional derivative, the optimal convergence orders can be observed (see Table 4). We also test the accuracy in space of the scheme (60) for (80). The numerical results can be found in Table 5, which is consistent with the conclusion of Theorem 4.

Table 3 The \(L^2\)-norm errors and temporal convergence orders of Example 2 by using the scheme (60), \(M=N\), \(T=2\), \(r=1\)
Table 4 The \(L^2\)-norm errors and temporal convergence orders of Example 2 by using the scheme (60), \(M=N\), \(T=2\), \(r=(2-\alpha )/\alpha \)
Table 5 The \(L^2\)-norm errors and spatial convergence orders of Example 2 by using the scheme (60), \(M=1\,000\), \(T=2\), \(k=1\)

5 Conclusions

In the present paper, we establish two fundamental discrete Gronwall inequalities that can be used in the L1 discretization of Caputo-Hadamard fractional partial differential equations. This opens the door to analyzing the stability and the convergence of numerical methods for Caputo-Hadamard evolution equations. Moreover, improved stability analyses and error estimates of the uniform and nonuniform L1/LDG methods for the Caputo-Hadamard fractional sub-diffusion equation are given. These bounds are optimal in both the spatial and temporal directions, and they remain effective as \(\alpha \rightarrow 1^-\) (i.e., \(\lim _{\alpha \rightarrow 1^-}\,_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x,t)=t\frac{\partial u(x, t)}{\partial t}\)), which is consistent with the case \(\alpha =1\).

According to Lemma 3.1 in [12], \(\,_{\text{CH}}\mathrm{{D}}^{\alpha }_{a,t}u(x,t)\) in (3) for \(t\in [a, T]\) can be changed into \(\,_{\text{C}}\mathrm{{D}}^{\alpha }_{0,t'}\widetilde{u}_a(x,t')\) with \(\widetilde{u}_a(x,t')=u(x,t)\), \(t'\in [0, \log \frac{T}{a}]\), by the variable transformation \(t'=\log \frac{t}{a}\). Nevertheless, it is not necessary to transform system (3) into a Caputo-type one; indeed, such a transformation is inconvenient for multi-term problems. For example, if there is one more term \(u_t(x,t)\) on the left-hand side of the first equation of (3), then under the transformation \(t'=\log \frac{t}{a}\), the term \(u_t(x,t)\) becomes \(\frac{{\text{e}}^{-t'}}{a} \frac{\partial \widetilde{u}_a(x,t')}{\partial t'}\). This variable-coefficient term may bring inconvenience and even difficulty, especially when the positive \(a\) is very close to 0, in which case the coefficient \(\frac{{\text{e}}^{-t'}}{a}\) becomes arbitrarily large. Therefore, the method and the technique presented in this paper are meaningful and useful.