
Robust Estimates in Balanced Norms for Singularly Perturbed Reaction Diffusion Equations Using Graded Meshes


Abstract

The goal of this paper is to provide almost robust approximations of singularly perturbed reaction-diffusion equations in two dimensions by using finite elements on graded meshes. When the mesh grading parameter is appropriately chosen, we obtain quasi-optimal error estimates in a balanced norm for piecewise bilinear elements, by using the weighted variational formulation of the problem introduced by N. Madden and M. Stynes, Calcolo 58(2), 2021. We also prove a supercloseness result, namely, that the difference between the finite element solution and the Lagrange interpolant of the exact solution, measured in the weighted balanced norm, is of higher order than the error itself. We conclude with numerical examples which show the good performance of our approach.


Data Availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

References

  1. Adler, J., MacLachlan, S., Madden, N.: A first-order system Petrov–Galerkin discretization for a reaction-diffusion problem on a fitted mesh. IMA J. Numer. Anal. 36(3), 1281–1309 (2016)


  2. Adler, J., MacLachlan, S., Madden, N.: First-order system least squares finite-elements for singularly perturbed reaction-diffusion equations. Large-Scale Sci. Comput. 11958, 3–14 (2020)


  3. Apel, T.: Anisotropic finite elements: Local estimates and applications. Leipzig: Teubner; Chemnitz: Technische Univ. (1999)

  4. Durán, R.G., Lombardi, A.L.: Error estimates on anisotropic \({\cal{Q} }_1\) elements for functions in weighted Sobolev spaces. Math. Comput. 74(252), 1679–1706 (2005). https://doi.org/10.1090/S0025-5718-05-01732-1


  5. Durán, R.G., Lombardi, A.L., Prieto, M.I.: Superconvergence for finite element approximation of a convection-diffusion equation using graded meshes. IMA J. Numer. Anal. 32(2), 511–533 (2012). https://doi.org/10.1093/imanum/drr005


  6. Durán, R.G., Lombardi, A.L., Prieto, M.I.: Supercloseness on graded meshes for \({\cal{Q} }_1\) finite element approximation of a reaction-diffusion equation. J. Comput. Appl. Math. 242, 232–247 (2013). https://doi.org/10.1016/j.cam.2012.10.004


  7. Roos, H.G., Schopf, M.: Convergence and stability in balanced norms of finite element methods on Shishkin meshes for reaction-diffusion problems. ZAMM Z. Angew. Math. Mech. 95(6), 551–565 (2015)


  8. Gaucel, S., Langlais, M.: Some remarks on a singular reaction-diffusion system arising in predator-prey modeling. Discrete Contin. Dyn. Syst. Ser. B 8(1), 71–72 (2007)


  9. Kopteva, N.: Maximum norm a posteriori error estimate for a 2D singularly perturbed semilinear reaction-diffusion problem. SIAM J. Numer. Anal. 46(3), 1602–1618 (2008)


  10. Li, J., Navon, I.M.: Uniformly convergent finite element methods for singularly perturbed elliptic boundary value problems. I: Reaction-diffusion type. Comput. Math. Appl. 35(3), 57–70 (1998). https://doi.org/10.1016/S0898-1221(97)00279-4


  11. Li, J., Wheeler, M.F.: Uniform convergence and superconvergence of mixed finite element methods on anisotropically refined grids. SIAM J. Numer. Anal. 38(3), 770–798 (2000). https://doi.org/10.1137/S0036142999351212


  12. Lin, R., Stynes, M.: A balanced finite element method for singularly perturbed reaction-diffusion problems. SIAM J. Numer. Anal. 50(5), 2729–2743 (2012)


  13. Linß, T.: Layer-adapted meshes for reaction-convection-diffusion problems, Lect. Notes Math., vol. 1985. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-05134-0


  14. Liu, F., Madden, N., Stynes, M., Zhou, A.: A two-scale sparse grid method for a singularly perturbed reaction-diffusion problem in two dimensions. IMA J. Numer. Anal. 29(4), 986–1007 (2009). https://doi.org/10.1093/imanum/drn048


  15. Lombardi, A.L.: Analysis of finite element methods for singularly perturbed problems. Ph.D. thesis, Universidad de Buenos Aires (2004). http://mate.dm.uba.ar/~rduran/theses/lombardi.pdf

  16. Madden, N., Stynes, M.: A weighted and balanced FEM for singularly perturbed reaction-diffusion problems. Calcolo 58(2), 1–16 (2021)


  17. Melenk, J.M., Xenophontos, C.: Robust exponential convergence of \(hp\)-FEM in balanced norms for singularly perturbed reaction-diffusion equations. Calcolo 53(1), 105–132 (2016)


  18. Mo, J., Zhou, K.: Singular perturbation for nonlinear species group reaction diffusion systems. J. Biomath. 21(4), 481–488 (2006)


  19. Pao, C.: Singular reaction diffusion equations of porous medium type. Nonlinear Anal. 71(5–6), 2033–2052 (2009)


  20. Rathgeber, F., Ham, D.A., Mitchell, L., Lange, M., Luporini, F., McRae, A.T.T., Bercea, G.T., Markall, G.R., Kelly, P.H.J.: Firedrake: automating the finite element method by composing abstractions. ACM Trans. Math. Softw. 43(3), 24:1-24:27 (2016). https://doi.org/10.1145/2998441. arXiv:1501.01809


  21. Roos, H.G., Stynes, M., Tobiska, L.: Robust Numerical Methods for Singularly Perturbed Differential Equations. Convection-Diffusion-reaction and Flow Problems, Springer Ser. Comput. Math., vol. 24, 2nd edn. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-34467-4

  22. Zhang, Z.: Finite element superconvergence on Shishkin mesh for 2-D convection-diffusion problems. Math. Comput. 72(243), 1147–1177 (2003). https://doi.org/10.1090/S0025-5718-03-01486-8


  23. Zlámal, M.: Superconvergence and reduced integration in the finite element method. Math. Comput. 32(143), 663–685 (1978)



Funding

This work was partially supported by Agencia Nacional de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación (Argentina) under Grant PICT 2018–3017. Additionally, the first author (M.G. Armentano) was supported by Universidad de Buenos Aires under Grant 20020170100056BA, and the second and third authors (A.L. Lombardi and C. Penessi) were supported by Universidad Nacional de Rosario under Grant 80020190100020UR.

Author information


Corresponding author

Correspondence to Ariel L. Lombardi.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


In this section we present some technical results which have been used throughout the paper.

The following Lemma is a consequence of [14, Lemmata 1.1 and 1.2]. In addition to the compatibility conditions of Sect. 4 we assume here that the fourth order derivatives of f and b are Hölder continuous up to the boundary. It is also assumed that \(b(x,y)\ge 2b_0^2\).

Lemma 5

Let u be the solution of (1). Then for all \((x,y)\in (0,\frac{3}{4})\times (0,\frac{3}{4})\) and \(k\le 2\), it holds

$$\begin{aligned} \left| \partial _x\partial _y^ku(x,y)\right| &\le C \left( 1+\varepsilon ^{1-k}\right) + \varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-k}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-1-k}e^{-b_0\frac{x+y}{\varepsilon }},\\ \left| \partial _y\partial _x^ku(x,y)\right| &\le C \left( 1+\varepsilon ^{1-k}\right) + \varepsilon ^{-k}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-1-k}e^{-b_0\frac{x+y}{\varepsilon }}. \end{aligned}$$

Similar estimates are valid on the subdomains \((0,\frac{3}{4})\times (\frac{1}{4},1)\) (replace y by \(1-y\)), \((\frac{1}{4},1)\times (0,\frac{3}{4})\) (replace x by \(1-x\)) and \((\frac{1}{4},1)\times (\frac{1}{4},1)\) (replace x by \(1-x\) and y by \(1-y\)).

This Lemma allows us to obtain the next result.

Lemma 6

Let u be the solution of (1). Then, under Assumption 1, there exists a constant C such that

$$\begin{aligned} \varepsilon \left[ \sum _{i,j} \beta _{min}\left( h_i^2\Vert \partial _x^3u\Vert _{0,R_{ij}} + h_ih_j\Vert \partial _x^2\partial _yu\Vert _{0,R_{ij}} + h_j^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{ij}}\right) ^2\right] ^\frac{1}{2} \le C \left( \log \frac{1}{\varepsilon }\right) ^\frac{1}{2}h^2. \end{aligned}$$
(33)

Proof

It is clear that by symmetry arguments it is enough to obtain (33) when the sum on the left-hand side is restricted to the indices ij with \(R_{ij}\subseteq \varOmega _s:=[0,\frac{1}{2}]\times [0,\frac{1}{2}]\). Let us split \(\varOmega _s\) as indicated in Fig. 7. More precisely we set

$$\begin{aligned} \begin{array}{ll} \varLambda _0=\left( x_{{{\bar{m}}}},\frac{1}{2}\right) \times \left( x_{{{\bar{m}}}},\frac{1}{2}\right) , &\qquad \varLambda _1=\left( x_{{{\bar{m}}}},\frac{1}{2}\right) \times (x_1,x_{{{\bar{m}}}}),\\ \varLambda _2=(x_1,x_{{{\bar{m}}}})\times \left( x_1,\frac{1}{2}\right) , &\qquad \varLambda _3=\left( x_1,\frac{1}{2}\right) \times (0,x_1), \\ \varLambda _4=(0,x_1)\times \left( 0,\frac{1}{2}\right) , & \end{array} \end{aligned}$$

where \(x_{{{\bar{m}}}}\) is a grid point with \(x_{{{\bar{m}}}}=\gamma _0\varepsilon \log \frac{1}{\varepsilon }\).
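To fix ideas, the following NumPy sketch generates a one-dimensional node distribution of the kind assumed here: nodes graded towards the origin with first step \(h_1=h^{2\log \frac{1}{\varepsilon }}\), local steps satisfying \(h_i\le hx^\alpha \) on the graded elements, and a transition index \({{\bar{m}}}\) with \(x_{{{\bar{m}}}}\approx \gamma _0\varepsilon \log \frac{1}{\varepsilon }\). The explicit grading rule \(x_{i+1}=x_i+hx_i^\alpha \) and the sample parameter values are illustrative assumptions only; the precise construction of the mesh is the one of Assumption 1.

```python
import numpy as np

def graded_nodes(h, eps, alpha, gamma0):
    """Illustrative 1D graded mesh on [0, 1/2] (assumed grading rule, not
    necessarily the paper's exact construction): x_0 = 0,
    x_1 = h**(2*log(1/eps)) and x_{i+1} = x_i + h*x_i**alpha, so that for
    i >= 1 the step h_i = h*x_i**alpha satisfies h_i <= h*x**alpha on
    [x_i, x_{i+1}] and h_i <= h everywhere."""
    x = [0.0, h ** (2.0 * np.log(1.0 / eps))]
    while x[-1] < 0.5:
        x.append(min(x[-1] + h * x[-1] ** alpha, 0.5))
    x = np.array(x)
    # first node beyond the transition point gamma0*eps*log(1/eps),
    # with gamma0 >= 2/b_0 and gamma0 >= 1/gamma as in the proof below
    m_bar = int(np.searchsorted(x, gamma0 * eps * np.log(1.0 / eps)))
    return x, m_bar

# hypothetical parameter values, for illustration only
x, m_bar = graded_nodes(h=0.1, eps=1e-4, alpha=0.75, gamma0=2.0)
print(len(x), x[1], x[m_bar])
```

With these sample values the first element has size \(h_1=h^{2\log \frac{1}{\varepsilon }}\), already much smaller than \(\varepsilon ^2\), in agreement with the bound \(h_1<\varepsilon ^2\) used in items 4 and 5 below.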

We use the notation

$$\begin{aligned} A(\varLambda _k):= \varepsilon \left[ \sum _{i,j: R_{ij}\subset \varLambda _k} \beta _{min}\left( h_i^2\Vert \partial _x^3u\Vert _{0,R_{ij}} + h_ih_j\Vert \partial _x^2\partial _yu\Vert _{0,R_{ij}} + h_j^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{ij}}\right) ^2\right] ^\frac{1}{2}. \end{aligned}$$

We will estimate separately \(A(\varLambda _k)\) for \(k=0,\ldots ,4\).

  1.

    Since \(\gamma _0\ge \frac{2}{b_0}\) we have from Lemmata 3 and 5 that \(|D^3u(x,y)|\le C\varepsilon ^{-1}\) and, since \(\gamma _0\ge \frac{1}{\gamma }\), we also have \(\beta _{min}\le |\beta (x,y)|\le C\) for all \((x,y)\in \varLambda _0\). Since \(h_i\le h\) for all i, we easily arrive at the following bound (the computation behind it is sketched after the proof):

    $$\begin{aligned} A(\varLambda _0)\le C h^2. \end{aligned}$$
  2.

    On \(\varLambda _1\) we also have \(\beta \le C/\varepsilon \). Taking into account that the length of \(\varLambda _1\) in the y-direction is \(\le C \varepsilon \log \frac{1}{\varepsilon }\), \(h_i\le hx^{\alpha }\) for \((x,y)\in R_{ij}\subseteq \varLambda _1\), and using Lemma 3 we have

    $$\begin{aligned} \sum _{R_{ij}\subseteq \varLambda _1}\beta _{min}\left( h_i^2\Vert \partial _x^3u\Vert _{0,R_{ij}}\right) ^2\le C \varepsilon ^{-1}h^4\log \frac{1}{\varepsilon }. \end{aligned}$$
    (34)

    Now we again take into account the estimate

    $$\begin{aligned} \left| \partial _x^2\partial _yu(x,y)\right| \le C \left( 1+\varepsilon ^{-1}\right) + \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}. \end{aligned}$$
    (35)

    With the previous arguments and, in addition, using that \(\gamma _0\ge \frac{2}{b_0}\), so that \(\varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\le C\) on \(\varLambda _1\), together with \(h_i, h_j\le h\) and \(h_j\le Chy^\alpha \) for \((x,y)\in R_{ij}\subseteq \varLambda _1\), we obtain

    $$\begin{aligned} \begin{aligned}&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_ih_j\Vert (1+\varepsilon ^{-1})\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon },\\&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_ih_j\Vert \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4\log \frac{1}{\varepsilon },\\&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_ih_j\Vert \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4,\\&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_ih_j\Vert \varepsilon ^{-3}e^{-b_0\frac{(x+y)}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4. \end{aligned} \end{aligned}$$

    Then, together with (35) we arrive at

    $$\begin{aligned} \sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_ih_j\Vert \partial _x^2\partial _yu\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon }. \end{aligned}$$
    (36)

    Now, from Lemma 5 we further have

    $$\begin{aligned} \left| \partial _x\partial _y^2u(x,y)\right| \le C \left( 1+\varepsilon ^{-1}\right) + \varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}. \end{aligned}$$
    (37)

    Now we use that \(h_j\le h\), \(|\varLambda _1| \le C \varepsilon \log \frac{1}{\varepsilon }\), \(\varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\le C\) on \(\varLambda _1\) and \(h_j\le hy^\alpha \) for \((x,y)\in R_{ij}\subseteq \varLambda _1\) to obtain

    $$\begin{aligned} \begin{aligned}&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_j^2\Vert (1+\varepsilon ^{-1}) + \varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2 \le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon },\\&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4, \\&\sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^2, \end{aligned} \end{aligned}$$

    which, together with (37), gives

    $$\begin{aligned} \sum _{R_{ij}\subset \varLambda _1}\beta _{min}\left( h_j^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon }. \end{aligned}$$
    (38)

    Inequalities (34), (36) and (38) yield

    $$\begin{aligned} A(\varLambda _1)\le C \left( \log \frac{1}{\varepsilon }\right) ^\frac{1}{2}h^2. \end{aligned}$$
  3.

    On \(\varLambda _2\) we use that \(\beta \le C/\varepsilon \). In order to estimate \(A(\varLambda _2)\) we first note that since \(h_i\le C hx^\alpha \) for \((x,y)\in R_{ij}\subseteq \varLambda _2\) we have from Lemma 3 with \(k=3\) that

    $$\begin{aligned} \sum _{R_{ij}\subset \varLambda _2}\beta _{min}h_i^4\Vert \partial _x^3u\Vert _{0,R_{ij}}^2\le C h^4\varepsilon ^{-2}. \end{aligned}$$
    (39)

    We use again (35) stated in Lemma 5. Using that for \(R_{ij}\subseteq \varLambda _2\) the inequalities \(h_i,h_j\le C h\), \(h_i\le hx^\alpha \), \(h_j\le hy^\alpha \) for \((x,y)\in R_{ij}\), \(h_i \le C h\varepsilon \log \frac{1}{\varepsilon }\) and \(|\varLambda _2|\le C \varepsilon \log \frac{1}{\varepsilon }\) hold true, it can be checked that

    $$\begin{aligned} \begin{aligned}&\sum _{R_{ij}\subset \varLambda _2} \beta _{min}h_i^2h_j^2\Vert (1+\varepsilon ^{-1})\Vert _{0,R_{ij}}^2 \le C \left( \log \frac{1}{\varepsilon }\right) ^3 h^4,\\&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}h_i^2h_j^2\Vert \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{ij}}^2\le C \varepsilon ^{-2}h^4,\\&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}h_i^2h_j^2\Vert \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{ij}}^2 \le C h^4,\\&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}h_i^2h_j^2\Vert \varepsilon ^{-3}e^{-b_0\frac{(x+y)}{\varepsilon }}\Vert _{0,R_{ij}}^2 \le C \varepsilon ^{-1}h^4. \end{aligned} \end{aligned}$$

    Therefore we obtain

    $$\begin{aligned} \sum _{R_{ij}\subset \varLambda _2}\beta _{min}\left( h_ih_j\Vert \partial _x^2\partial _yu\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}. \end{aligned}$$
    (40)

    We now use the estimate (37). Then, using that for \(R_{ij}\subseteq \varLambda _2\) we have \(h_j\le h\) and \(h_j\le hy^\alpha \) for \((x,y)\in R_{ij}\), and since \(|\varLambda _2|\le C \varepsilon \log \frac{1}{\varepsilon }\), it follows that

    $$\begin{aligned} \begin{aligned}&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}\left( h_j^2\Vert (1+\varepsilon ^{-1}+\varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }})\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon },\\&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2\le C h^4,\\&\sum _{R_{ij}\subset \varLambda _2}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{ij}}\right) ^2 \le C \varepsilon ^{-2} h^4. \end{aligned} \end{aligned}$$

    It follows that

    $$\begin{aligned} \sum _{R_{ij}\subset \varLambda _2}\beta _{min}\left( h_j^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{ij}}\right) ^2\le C h^4\varepsilon ^{-2}\log \frac{1}{\varepsilon }. \end{aligned}$$
    (41)

    Collecting (39)–(41) we find

    $$\begin{aligned} A(\varLambda _2)\le C \left( \log \frac{1}{\varepsilon }\right) ^\frac{1}{2}h^2. \end{aligned}$$
  4.

    We consider the estimate on \(\varLambda _3\). We note that \(R_{11}\) is exterior to \(\varLambda _3\), and therefore \(h_i\le hx^\alpha \) for all \((x,y)\in R_{i1}\subseteq \varLambda _3\). Since \(h \le e^{-1}\) we have

    $$\begin{aligned} h_1=h^{2\log \frac{1}{\varepsilon }}=\varepsilon ^{2\log \frac{1}{h}}<\varepsilon ^2 \end{aligned}$$

    and then we also have \(|\varLambda _3|\le C\varepsilon ^2\). We will also use that \(\beta \le \frac{C}{\varepsilon }\) on \(\varLambda _3\). Then, from the estimate for \(\partial ^3_xu\) in Lemma 3 we have

    $$\begin{aligned} \sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_i^2\Vert \partial _x^3u\Vert _{0,R_{i1}}\right) ^2 \le C h^4\int _0^{h_1}\int _0^1\left( 1+\varepsilon ^{-3}x^{2\alpha }e^{-b_0\frac{x}{\varepsilon }}\right) ^2\,dx\,dy\le C h^4. \end{aligned}$$
    (42)

    Now we again take into account the estimate (35). Following the previous argument and since \(h_i\le h\), \(h_1\le h\varepsilon \) we have

    $$\begin{aligned} \begin{aligned}&\sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_ih_1\Vert 1+\varepsilon ^{-1} + \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon ,\\&\sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_ih_1\Vert \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon ^2, \end{aligned} \end{aligned}$$

    and since \(h_i\le hx^\alpha \) for all \((x,y)\in R_{i1}\subseteq \varLambda _3\) we also have

    $$\begin{aligned} \sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_ih_1\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4. \end{aligned}$$

    Thus we arrive at

    $$\begin{aligned} \sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_ih_1\Vert \partial _x^2\partial _yu\Vert _{0,R_{i1}}\right) ^2\le C h^4. \end{aligned}$$
    (43)

    On the other hand, we now use the estimate (37).

    Since again \(h_1\le h\varepsilon \) we obtain

    $$\begin{aligned} \begin{aligned}&\sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_1^2\Vert 1+\varepsilon ^{-1} + \varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon ^{3},\\&\sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_1^2\Vert \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon . \end{aligned} \end{aligned}$$

    With all the previous arguments we can also check that

    $$\begin{aligned} \sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_1^2\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon ^{4}. \end{aligned}$$

    The last three inequalities give us

    $$\begin{aligned} \sum _{R_{i1}\subset \varLambda _3}\beta _{min}\left( h_1^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{i1}}\right) ^2\le C h^4\varepsilon . \end{aligned}$$
    (44)

    Finally, (42)–(44) give

    $$\begin{aligned} A(\varLambda _3)\le C h^2\varepsilon . \end{aligned}$$
  5.

    Now, we consider the estimate on \(\varLambda _4\). We note that

    $$\begin{aligned} h_1 = h^{2\log \frac{1}{\varepsilon }} = h^{\log \frac{1}{\varepsilon }}h^{\log \frac{1}{\varepsilon }} = h^{\log \frac{1}{\varepsilon }}\varepsilon ^{\log \frac{1}{h}} \le h\varepsilon . \end{aligned}$$

    Furthermore, as we proved in the previous item, we also have \(h_1<\varepsilon ^2\), and as a consequence \(|\varLambda _4|\le \varepsilon ^2\). Then, we can simply use that \(|\partial _x^3u|\le C \varepsilon ^{-3}\), which follows from Lemma 3, to obtain

    $$\begin{aligned} \sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}} \beta _{min}\left( h_1^2\Vert \partial _x^3u\Vert _{0,R_{1j}}\right) ^2 \le C h^4\varepsilon ^{-1}. \end{aligned}$$
    (45)

    Now, we take into account again (35):

    $$\begin{aligned} \left| \partial _x^2\partial _yu(x,y)\right| \le C \left( 1+\varepsilon ^{-1}\right) + \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}. \end{aligned}$$

    We firstly note that, since \(h_j\le h\), we have

    $$\begin{aligned} \begin{aligned}&\sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_1h_j\Vert (1+\varepsilon ^{-1}+\varepsilon ^{-1}e^{-b_0\frac{y}{\varepsilon }})\Vert _{0,R_{1j}}\right) ^2\le C h^4\varepsilon ,\\&\sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_1h_j\Vert \varepsilon ^{-2}e^{-b_0\frac{x}{\varepsilon }}\Vert _{0,R_{1j}}\right) ^2\le C h^4\varepsilon ^{-1}. \end{aligned} \end{aligned}$$

    and secondly, since \(h_j\le hy^\alpha \) for all \((x,y)\in R_{1j}\subseteq \varLambda _4\), \(j\ne 1\) we have

    $$\begin{aligned} \sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_1h_j\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{1j}}\right) ^2 \le C h^4. \end{aligned}$$

    From the last three inequalities we obtain

    $$\begin{aligned} \sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}} \beta _{min}\left( h_1h_j\Vert \partial _x^2\partial _yu\Vert _{0,R_{1j}}\right) ^2 \le C h^4\varepsilon ^{-1}. \end{aligned}$$
    (46)

    Now we use the estimate (37)

    $$\begin{aligned} \left| \partial _x\partial _y^2u(x,y)\right| \le C \left( 1+\varepsilon ^{-1}\right) + \varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }} + \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }} + \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}. \end{aligned}$$

    Since \(|\varLambda _4|\le \varepsilon ^2\) and \(h_j\le h\) we have

    $$\begin{aligned} \sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_j^2\Vert (1+\varepsilon ^{-1}+\varepsilon ^{-1}e^{-b_0\frac{x}{\varepsilon }})\Vert _{0,R_{1j}}\right) ^2\le C h^4\varepsilon ^{-1}. \end{aligned}$$

    We also have

    $$\begin{aligned} \begin{aligned}&\sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-2}e^{-b_0\frac{y}{\varepsilon }}\Vert _{0,R_{1j}}\right) ^2\le C h^4\varepsilon ^2,\\&\sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}}\beta _{min}\left( h_j^2\Vert \varepsilon ^{-3}e^{-b_0\frac{x+y}{\varepsilon }}\Vert _{0,R_{1j}}\right) ^2\le C h^4, \end{aligned} \end{aligned}$$

    where we used again \(h_1\le \varepsilon ^2\) and \(h_j\le hy^\alpha \) for \((x,y)\in R_{1j}\subseteq \varLambda _4\), \(j\ne 1\). Then we obtain

    $$\begin{aligned} \sum _{{\mathop {j\ne 1}\limits ^{R_{1j}\subset \varLambda _4}}} \beta _{min}\left( h_j^2\Vert \partial _x\partial _y^2u\Vert _{0,R_{1j}}\right) ^2 \le C h^4\varepsilon ^{-1}. \end{aligned}$$
    (47)

    Finally, since

    $$\begin{aligned} |\partial _x^3u|, |\partial _x^2\partial _yu|, |\partial _x\partial _y^2u| \le C \varepsilon ^{-3} \end{aligned}$$

    and using \(h_1\le h\varepsilon \) and \(h_1\le \varepsilon ^2\), and so \(|R_{11}|\le \varepsilon ^4\), we obtain

    $$\begin{aligned} \beta _{min}h_1^4\left( \Vert \partial _x^3u\Vert _{0,R_{11}} + \Vert \partial _x^2\partial _yu\Vert _{0,R_{11}} + \Vert \partial _x\partial _y^2u\Vert _{0,R_{11}}\right) ^2 \le C h^4\varepsilon . \end{aligned}$$
    (48)

    Therefore, inequalities (45)–(48) yield

    $$\begin{aligned} A(\varLambda _4)\le C h^2\varepsilon ^\frac{1}{2}. \end{aligned}$$

In this way we obtain (33) when the indices ij are restricted to the ones for which \(R_{ij}\subset \varOmega _s\): since the sets \(\varLambda _0,\ldots ,\varLambda _4\) are pairwise disjoint and cover \(\varOmega _s\), the restriction of the left-hand side of (33) to these indices equals \(\left( \sum _{k=0}^{4}A(\varLambda _k)^2\right) ^\frac{1}{2}\le C\left( \log \frac{1}{\varepsilon }\right) ^\frac{1}{2}h^2\). The proof concludes by symmetry arguments.\(\square \)
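For the reader's convenience, here is a sketch of the computation behind the bound \(A(\varLambda _0)\le Ch^2\) in item 1. It only uses \(\beta _{min}\le C\) on \(\varLambda _0\), the bound \(\Vert D^3u\Vert _{0,R_{ij}}\le C\varepsilon ^{-1}|R_{ij}|^\frac{1}{2}\) (which follows from the pointwise estimate \(|D^3u|\le C\varepsilon ^{-1}\) quoted there), and \(h_i,h_j\le h\):

$$\begin{aligned} A(\varLambda _0)^2 \le C\varepsilon ^2\sum _{i,j:\, R_{ij}\subset \varLambda _0}\left( h^2\varepsilon ^{-1}|R_{ij}|^\frac{1}{2}\right) ^2 = C h^4\sum _{i,j:\, R_{ij}\subset \varLambda _0}|R_{ij}| \le C h^4|\varLambda _0|\le C h^4, \end{aligned}$$

so that \(A(\varLambda _0)\le Ch^2\). The sets \(\varLambda _1,\ldots ,\varLambda _4\) are treated in the same spirit, with the exponential factors from Lemma 5 absorbing the negative powers of \(\varepsilon \).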

Fig. 7: Split of \(\varOmega _s=[0,\frac{1}{2}]^2\) used in the proof of Lemma 6.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Armentano, M.G., Lombardi, A.L. & Penessi, C. Robust Estimates in Balanced Norms for Singularly Perturbed Reaction Diffusion Equations Using Graded Meshes. J Sci Comput 96, 18 (2023). https://doi.org/10.1007/s10915-023-02245-y

