1 Introduction

The Khasminskii approach, developed by the Russian mathematician Mark Khasminskii in the 1960s, is a mathematical technique used to investigate the properties of stochastic processes. The method analyzes the behavior of conditional expectations to gain insight into the process under consideration and thereby establishes long-term bounds on its behavior. Its versatility is evident in its applications to diverse problems of stochastic analysis, encompassing areas such as stochastic differential equations, optimal control, and filtering. It has also proven valuable for understanding the behavior of random walks, queuing systems, and other stochastic processes. Notably, the Khasminskii approach excels in managing systems involving a large number of interacting particles, making it particularly advantageous for studying complex systems in physics, chemistry, and biology. Consequently, the Khasminskii approach has made significant contributions to the field of stochastic analysis and remains a crucial tool for understanding the behavior of stochastic processes [1]. Khasminskii was interested in the convergence of slowly varying systems on long time scales as \(\varepsilon \rightarrow 0.\) He showed that the averaging principle consists in replacing the original equations by equations formulated in terms of the relevant averages, which yields a simpler way to analyze such equations.

Brownian motion, named after the botanist Robert Brown, who first observed it in 1827, is the random movement of particles suspended in a fluid or gas resulting from collisions with the molecules of the surrounding medium. Brownian motion is a fundamental concept in the study of stochastic processes and has applications in physics, chemistry, finance, and many other fields; see [2, 3].

Stochastic fractional differential equations (SFDEs) are differential equations that combine stochastic (random) and fractional calculus concepts. Fractional calculus extends differentiation and integration to non-integer orders, while stochastic calculus deals with random processes and their properties and has found applications in many areas, including finance, economics, physics, and engineering. SFDEs combine these two concepts by considering a stochastic process whose evolution is described by a fractional differential equation. Such equations have been used to model many complex phenomena, including turbulence, anomalous diffusion, and financial markets [4,5,6,7,8].

In recent years, several authors have started working on fractional operators with more general kernels. The corresponding fractional derivatives are defined either in the Riemann–Liouville or in the Caputo sense. For example, those in the Caputo sense are often called \(\psi \)-Caputo derivatives [9,10,11]. In this work, we investigate pantograph equations in the frame of such generalized fractional derivatives. For the sake of completeness and the benefit of readers, we refer to [12,13,14] and the works cited therein.

In fact, our work is an important generalization of the results obtained in [15].

The nature of solutions of fractional stochastic differential pantograph equations (FSDPEs) in the \(\psi \)-Caputo sense on the n-dimensional Euclidean space \( {\mathbb {R}} ^{n}\) [14, 16, 17] is of high interest in many applications. In general, such systems take the form

$$\begin{aligned} \left\{ \begin{array}{llll} {\mathcal {D}}_{\varsigma }^{\alpha ;\psi }X(\varsigma )=\vartheta _{1}(\varsigma ,X(\varsigma ),X(a+\lambda \varsigma ))\\ \qquad +\vartheta _{2}(\varsigma ,X(\varsigma ),X(a+\lambda \varsigma ))\psi ^{\prime }\left( \varsigma \right) dB\left( \varsigma \right) ,\text { }\varsigma \in \left[ a,{\mathcal {T}}\right] ,\\ X(a)=X_{0}, \end{array} \right. \end{aligned}$$
(1.1)

where \(\lambda \in \left( 0,\frac{{\mathcal {T}}-a}{{\mathcal {T}}}\right) \), \({\mathcal {D}}_{\varsigma }^{\alpha ;\psi }\) is the \(\psi \)-Caputo fractional derivative of order \(\alpha \in (\frac{1}{2},1),\) \(\vartheta _{1}:\left[ a,{\mathcal {T}}\right] \times {\mathbb {R}} ^{n}\times {\mathbb {R}} ^{n}\rightarrow {\mathbb {R}} ^{n}\) and \(\vartheta _{2}:\left[ a,{\mathcal {T}}\right] \times {\mathbb {R}} ^{n}\times {\mathbb {R}} ^{n}\rightarrow {\mathbb {R}} ^{n\times m}\) are measurable continuous functions (CFs), and \(B(\varsigma )\), \(\varsigma \ge a\), is an m-dimensional standard Brownian motion on the complete probability space \(\left\{ \Omega ,{\mathbb {F}},P\right\} \). The initial value \(X_{0}\) is a second-order random variable; it is also an \({\mathbb {F}}_{0} \)-measurable \( {\mathbb {R}} ^{n}\)-valued random variable satisfying \(E \left| X_{0}\right| ^{2}<\infty .\) Non-linear FSDPEs can rarely be solved explicitly, and solving them is in general very difficult. For this reason, we rely on approximation methods and techniques, which play an important role in the development of fractional calculus [16, 18, 19]; the averaging principle provides their theoretical support.

Pantograph equations are a class of delay differential equations with a proportional delay. Their name comes from the pantograph, the current-collection device of electric locomotives, in whose mathematical modeling such equations first arose; the device itself is named after the pantograph instrument used for copying drawings and diagrams at a different scale.

The pantograph equation has the form \(y^{\prime }(t)=a(t)y(t)+b(t)y(qt),\) where y(t) is the unknown function, a(t) and b(t) are given functions, and q is a positive constant, typically with \(0<q<1\). Pantograph equations have applications in various areas of science and engineering, such as electrical engineering, signal processing, and control theory. In particular, they are used to model systems with time delays, which arise in many real-world applications [20,21,22,23,24,25].
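As a small numerical illustration (ours, not drawn from the cited references), the following Python sketch integrates a constant-coefficient pantograph equation \(y^{\prime }(t)=ay(t)+by(qt)\) by a forward Euler step, recovering the proportionally delayed value \(y(qt)\) by interpolation on the already-computed grid; the coefficient values below are arbitrary choices for demonstration only.

```python
import numpy as np

def solve_pantograph(a, b, q, y0, T, n):
    """Forward Euler for y'(t) = a*y(t) + b*y(q*t), y(0) = y0, with 0 < q < 1.

    Since 0 <= q*t <= t, the delayed value y(q*t) is always available from the
    nodes computed so far and is recovered by linear interpolation.
    """
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y_delay = np.interp(q * t[k], t[:k + 1], y[:k + 1])  # y(q * t_k)
        y[k + 1] = y[k] + h * (a * y[k] + b * y_delay)
    return t, y

if __name__ == "__main__":
    t, y = solve_pantograph(a=-1.0, b=0.5, q=0.5, y0=1.0, T=5.0, n=5000)
    print(y[-1])  # approximate value of y(5)
```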

In our work, we extend the classical Khasminskii approach to fractional differential equations involving the \(\psi \)-Caputo fractional derivative. We show that the mild solutions of the two systems, before and after averaging, are equivalent in the mean-square sense, which proves the fractional averaging principle clearly and rigorously. In other words, we obtain a simple and effective way to solve the stochastic differential pantograph equation (1.1) approximately.

The remainder of the article is organized as follows. Section 2 collects the basic definitions and properties of fractional calculus, in particular the generalized (\(\psi \)-)Caputo derivative, together with the conditions on the equation under consideration, in the spirit of the classical Khasminskii framework. Section 3 presents the key results. Section 4 provides a practical example illustrating the outcomes of our research, and Section 5 concludes.

2 Preliminaries

This section includes some basic techniques, definitions, lemmas, and theorems that we need in what follows. For more details, see [9,10,11, 17, 26,27,28,29].

Let \(\psi :[a_{1},a_{2}]\rightarrow {\mathbb {R}}\) be an increasing function with \(\psi ^{\prime }(\varsigma )\ne 0\) for all \(\varsigma .\) The symbol \(C(J,{\mathbb {R}})\) represents the Banach space of CFs \(\varkappa :J\rightarrow {\mathbb {R}}\) equipped with the norm \(\Vert \varkappa \Vert =\sup \{|\varkappa (\varsigma )|:\varsigma \in J\}.\)

Definition 2.1

[17, 29] The Riemann–Liouville fractional integral of order \(\alpha >0\) for a function \(x:\left[ 0,+\infty \right) \rightarrow {\mathbb {R}} \) is defined as

$$\begin{aligned} I^{\alpha }x(\varsigma )=\frac{1}{\Gamma (\alpha )}\int _{0}^{\varsigma } (\varsigma -s)^{\alpha -1}x(s)ds, \end{aligned}$$

where \(\Gamma \) is the Euler gamma function given by

$$\begin{aligned} \Gamma (\alpha )=\int _{0}^{\infty }e^{-\varsigma }\varsigma ^{\alpha -1}d\varsigma . \end{aligned}$$

Definition 2.2

[17, 29] The Riemann–Liouville fractional derivative of order \(\alpha >0\) for a function \(x:\left[ 0,+\infty \right) \rightarrow {\mathbb {R}} \) is defined as

$$\begin{aligned} D^{\alpha }x(\varsigma )=\frac{1}{\Gamma (n-\alpha )}\int _{0}^{\varsigma }\left( \varsigma -s\right) ^{n-\alpha -1}x^{\left( n\right) }\left( s\right) ds, \alpha \in \left( n-1,n\right) ,\text { }n\in {\mathbb {N}}. \end{aligned}$$

Definition 2.3

[17, 18, 29] The \(\psi \)-Riemann–Liouville fractional integral (\(\psi \)-RLFI) of order \(\alpha >0\) for a CF \(\varkappa :\left[ a,{\mathcal {T}}\right] \rightarrow {\mathbb {R}} \) is defined as

$$\begin{aligned} {\mathcal {I}}_{a}^{\alpha ;\psi }\varkappa (\varsigma )=\int _{a}^{\varsigma } \frac{(\psi \left( \varsigma \right) -\psi \left( s\right) )^{\alpha -1} }{\Gamma (\alpha )}\psi ^{\prime }\left( s\right) \varkappa (s)ds. \end{aligned}$$
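As a concrete illustration, the following Python sketch (ours, not taken from the cited references) approximates the \(\psi \)-RLFI by a simple midpoint quadrature and compares the result with the closed form \({\mathcal {I}}_{a}^{\alpha ;\psi }(\psi (\varsigma )-\psi (a))^{\ell -1}=\frac{\Gamma (\ell )}{\Gamma (\ell +\alpha )}(\psi (\varsigma )-\psi (a))^{\ell +\alpha -1}\), which is item 3 of Lemma 2.6 below; the choice \(\psi (\varsigma )=\log \varsigma \) anticipates the example of Section 4, and all numerical values are illustrative assumptions.

```python
import numpy as np
from math import gamma

def psi_rl_integral(kappa, psi, dpsi, a, t, alpha, n=20000):
    """Midpoint-rule approximation of the psi-Riemann-Liouville integral
    (I_a^{alpha;psi} kappa)(t)
      = (1/Gamma(alpha)) * int_a^t (psi(t)-psi(s))**(alpha-1) * psi'(s) * kappa(s) ds.
    The kernel is weakly singular at s = t; the midpoint rule never evaluates
    it exactly at the endpoint, so this crude scheme is enough for a check."""
    s = a + (np.arange(n) + 0.5) * (t - a) / n      # midpoints of the n subintervals
    h = (t - a) / n
    w = (psi(t) - psi(s)) ** (alpha - 1) * dpsi(s)  # kernel times psi'(s)
    return h * np.sum(w * kappa(s)) / gamma(alpha)

if __name__ == "__main__":
    psi = np.log                       # psi(t) = log t, increasing on [1, e]
    dpsi = lambda s: 1.0 / s
    a, t, alpha, ell = 1.0, np.e, 0.7, 2.0
    kappa = lambda s: (psi(s) - psi(a)) ** (ell - 1)
    approx = psi_rl_integral(kappa, psi, dpsi, a, t, alpha)
    exact = gamma(ell) / gamma(ell + alpha) * (psi(t) - psi(a)) ** (ell + alpha - 1)
    print(approx, exact)               # the two values should agree to a few digits
```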

Definition 2.4

[17, 18, 29] The Caputo fractional derivative (CFD) of order \(\alpha >0\) for a function \(\varkappa :\left[ 0,+\infty \right) \rightarrow {\mathbb {R}} \) is defined by

$$\begin{aligned} D^{\alpha }\varkappa \left( \varsigma \right) =\frac{1}{\Gamma (n-\alpha )} \int _{0}^{\varsigma }\left( \varsigma -s\right) ^{n-\alpha -1}\varkappa ^{\left( n\right) }\left( s\right) ds, \alpha \in \left( n-1,n\right) ,\text { }n\in {\mathbb {N}}. \end{aligned}$$

Definition 2.5

[9, 17, 18] The \(\psi \)-Caputo fractional derivative (\(\psi \)-CFD) of order \(\alpha >0\) for a CF \(\varkappa :\left[ a,{\mathcal {T}}\right] \rightarrow {\mathbb {R}} \) is defined by

$$\begin{aligned} {\mathcal {D}}_{a}^{\alpha ;\psi }\varkappa (\varsigma )=\int _{a}^{\varsigma } \frac{(\psi \left( \varsigma \right) -\psi \left( s\right) )^{n-\alpha -1} }{\Gamma (n-\alpha )}\psi ^{\prime }\left( s\right) \partial _{\psi }^{n} \varkappa (s)ds,\ \varsigma >a, \alpha \in \left( n-1,n\right) , \end{aligned}$$

where \(\partial _{\psi }^{n}=\left( \frac{1}{\psi ^{\prime }(\varsigma )}\frac{d}{d\varsigma }\right) ^{n},n\in {\mathbb {N}}\).

Lemma 2.6

[9, 17] Let \(q,\ell >0\) and \(\varkappa \in C([a,b],{\mathbb {R}})\). Then, for all \(\varsigma \in [a,b]\) and with \(F_{a}(\varsigma )=\psi (\varsigma )-\psi (a)\), we have

1. \({\mathcal {I}}_{a}^{q;\psi }{\mathcal {I}}_{a}^{\ell ;\psi }\varkappa (\varsigma )={\mathcal {I}}_{a}^{q+\ell ;\psi }\varkappa (\varsigma ),\)

2. \({\mathcal {D}}_{a}^{q;\psi }{\mathcal {I}}_{a}^{q;\psi }\varkappa (\varsigma )=\varkappa (\varsigma ),\)

3. \(\displaystyle {\mathcal {I}}_{a}^{q;\psi }(F_{a}(\varsigma ))^{\ell -1} =\frac{\Gamma (\ell )}{\Gamma (\ell +q)}(F_{a}(\varsigma ))^{\ell +q-1},\)

4. \(\displaystyle {\mathcal {D}}_{a}^{q;\psi }(F_{a}(\varsigma ))^{\ell -1} =\frac{\Gamma (\ell )}{\Gamma (\ell -q)}(F_{a}(\varsigma ))^{\ell -q-1},\)

5. \({\mathcal {D}}_{a}^{q;\psi }(F_{a}(\varsigma ))^{k}=0,\ k\in \{0,\ldots ,n-1\},\ n\in {\mathbb {N}},\ q\in (n-1,n]\).

Lemma 2.7

[9, 14, 17] Let \(n-1<\alpha _{1}\le n,\ \alpha _{2}>0,\ a>0,\ \varkappa \in {\mathcal {L}}(a,{\mathcal {T}}),\ {\mathcal {D}}_{a}^{\alpha _{1};\psi }\varkappa \in {\mathcal {L}}(a,{\mathcal {T}})\). Then the differential equation

$$\begin{aligned} {\mathcal {D}}_{a}^{\alpha _{1};\psi }\varkappa =0 \end{aligned}$$

has the unique solution

$$\begin{aligned} \varkappa (\varsigma )=&w_{0}+w_{1}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) +w_{2}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) ^{2}\\&+\cdots +w_{n-1}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) ^{n-1}, \end{aligned}$$

and

$$\begin{aligned} {\mathcal {I}}_{a}^{\alpha _{1};\psi }{\mathcal {D}}_{a}^{\alpha _{1};\psi } \varkappa \left( \varsigma \right) =&\varkappa \left( \varsigma \right) +w_{0}+w_{1}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) +w_{2}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) ^{2}\\&+\cdots +w_{n-1}\left( \psi \left( \varsigma \right) -\psi \left( a\right) \right) ^{n-1}, \end{aligned}$$

with \(w_{\ell }\in {\mathbb {R}},\ \ell =0,1,\ldots ,n-1\).

Furthermore,

$$\begin{aligned} {\mathcal {D}}_{a}^{\alpha _{1};\psi }{\mathcal {I}}_{a}^{\alpha _{1};\psi } \varkappa (\varsigma )=\varkappa (\varsigma ), \end{aligned}$$

and

$$\begin{aligned} {\mathcal {I}}_{a}^{\alpha _{1};\psi }{\mathcal {I}}_{a}^{\alpha _{2};\psi } \varkappa (\varsigma )={\mathcal {I}}_{a}^{\alpha _{2};\psi }{\mathcal {I}}_{a} ^{\alpha _{1};\psi }\varkappa (\varsigma )={\mathcal {I}}_{a}^{\alpha _{1}+\alpha _{2};\psi }\varkappa (\varsigma ). \end{aligned}$$

To study the qualitative properties of solutions of equation (1.1), we impose the following conditions on the coefficient functions, which will be used throughout the rest of the paper.

\(\left( \Lambda 1\right) \) For any \(x,y,z,w\in {\mathbb {R}} ^{n}\) and \(\varsigma \in \left[ a,{\mathcal {T}}\right] \), there exist three positive constants \(C_{1},\) \(C_{2}\) and \(C_{3}\) such that

$$\begin{aligned}&\left| \vartheta _{1}(\varsigma ,x,y)\right| ^{2}\vee \left| \vartheta _{2}(\varsigma ,x,y)\right| ^{2} \le C_{1}^{2}\left( 1+\left| x\right| ^{2}+\left| y\right| ^{2}\right) \\&\quad \left| \vartheta _{1}\left( \varsigma ,x,y\right) -\vartheta _{1}\left( \varsigma ,w,z\right) \right| \vee \left| \vartheta _{2}\left( \varsigma ,x,y\right) -\vartheta _{2}\left( \varsigma ,w,z\right) \right| \le C_{2}\left| x-w\right| +C_{3}\left| y-z\right| \end{aligned}$$

where \(\left| \cdot \right| \) is the norm on \( {\mathbb {R}} ^{n}\) and \(x_{1}\vee x_{2}=\max \left\{ x_{1},x_{2}\right\} .\)

Following the results of [30] and of Zhang and Agarwal [31], note that, under condition \(\left( \Lambda 1\right) \), the FSDPE (1.1) has a unique solution

$$\begin{aligned} X\left( \varsigma \right) =&X_{0}+\frac{1}{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\vartheta _{1}\left( s,X\left( s\right) ,X(a+\lambda s)\right) \psi ^{\prime }\left( s\right) ds\nonumber \\&+\frac{1}{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\vartheta _{2}\left( s,X\left( s\right) ,X(a+\lambda s)\right) \psi ^{\prime }\left( s\right) dB\left( s\right) , \end{aligned}$$
(2.1)

where \(X\left( \varsigma \right) \) is \({\mathbb {F}}\left( \varsigma \right) \)-adapted and E\(\left( \int _{a}^{{\mathcal {T}}}\left| X\left( \varsigma \right) \right| ^{2}d\varsigma \right) <\infty .\)
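Although the analysis in the next section is purely theoretical, the integral representation (2.1) also suggests a simple way to simulate sample paths. The Python sketch below is a minimal, heuristic discretization of (2.1) for a scalar equation: because the pantograph argument \(a+\lambda \varsigma \) can exceed \(\varsigma \) near the left endpoint (when \(a>0\)), a plain forward-stepping scheme is not directly applicable, so the whole discretized path is iterated by successive (Picard-type) sweeps instead. The coefficients, \(\psi \), and parameter values are illustrative stand-ins, and no convergence analysis of this crude scheme is claimed.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

def simulate_psi_caputo_fsdpe(theta1, theta2, psi, dpsi, X0, a, T, lam, alpha,
                              n=400, sweeps=30):
    """Heuristic single-path solver for the scalar version of (2.1).

    The path is obtained by Picard-type sweeps on the discretized integral
    equation (left-point rule in both integrals), because the pantograph
    argument a + lam*t may exceed t near the left endpoint, which rules out
    plain forward stepping.  Convergence of the sweeps is assumed, not proved.
    """
    t = np.linspace(a, T, n + 1)
    h = (T - a) / n
    dB = rng.normal(0.0, np.sqrt(h), n)                 # Brownian increments
    X = np.full(n + 1, float(X0))
    for _ in range(sweeps):
        X_old = X.copy()
        X_pan = np.interp(a + lam * t[:-1], t, X_old)   # X(a + lam*t_j)
        d1 = theta1(t[:-1], X_old[:-1], X_pan)
        d2 = theta2(t[:-1], X_old[:-1], X_pan)
        for i in range(1, n + 1):
            w = (psi(t[i]) - psi(t[:i])) ** (alpha - 1) * dpsi(t[:i])
            X[i] = (X0
                    + np.sum(w * d1[:i]) * h / gamma(alpha)
                    + np.sum(w * d2[:i] * dB[:i]) / gamma(alpha))
        if np.max(np.abs(X - X_old)) < 1e-8:
            break
    return t, X

if __name__ == "__main__":
    # purely illustrative Lipschitz coefficients and parameters
    theta1 = lambda s, x, y: 0.4 * np.cos(x) + 0.3 * np.sin(y)
    theta2 = lambda s, x, y: 0.2 * np.cos(x)
    t, X = simulate_psi_caputo_fsdpe(theta1, theta2, np.log, lambda s: 1.0 / s,
                                     X0=1.0, a=1.0, T=np.e, lam=0.4, alpha=0.8)
    print(X[::100])                                     # a few values along the path
```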

3 An Averaging Principle

In this section, building on the existence and uniqueness result recalled above, we state and prove an averaging principle for \(\psi \)-Caputo FSDPEs. Consider the standard perturbed form of Eq. (1.1):

$$\begin{aligned} X_{\epsilon }\left( \varsigma \right)&=X_{0}+\frac{\epsilon }{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\vartheta _{1}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }(a+\lambda s)\right) \psi ^{\prime }\left( s\right) ds\nonumber \\&+\frac{\sqrt{\epsilon }}{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\vartheta _{2}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }(a+\lambda s)\right) \psi ^{\prime }\left( s\right) dB\left( s\right) , \end{aligned}$$
(3.1)

where the initial value \(X_{0}\) and the coefficients \(\vartheta _{1}\) and \(\vartheta _{2}\) satisfy the same conditions as in Eq. (1.1), \(\epsilon _{0}\) is a fixed positive number, and \(\epsilon \in \left( 0,\epsilon _{0}\right] \) is a small positive parameter.

Before proceeding with the averaging principle, we introduce measurable coefficients \(\overline{\vartheta _{1}}: {\mathbb {R}} ^{n}\times {\mathbb {R}} ^{n}\rightarrow {\mathbb {R}} ^{n}\) and \(\overline{\vartheta _{2}}: {\mathbb {R}} ^{n}\times {\mathbb {R}} ^{n}\rightarrow {\mathbb {R}} ^{n\times m}\) satisfying \((\Lambda 1)\) and the additional inequalities:

\((\Lambda 2)\) For any \({\mathcal {T}}_{1}\in \left[ a,{\mathcal {T}}\right] \) and \(x,y\in {\mathbb {R}} ^{n}\), there exist two positive bounded functions \(\alpha _{i}({\mathcal {T}} _{1}),i=1,2,\) such that

$$\begin{aligned} \frac{1}{\psi \left( {\mathcal {T}}_{1}\right) -\psi \left( a\right) }\int _{a}^{{\mathcal {T}}_{1}}\left| \vartheta _{1}(s,x,y)-\overline{\vartheta _{1} }(x,y)\right| \psi ^{\prime }\left( s\right) ds&\le \alpha _{1}({\mathcal {T}}_{1})(1+\left| x\right| +\left| y\right| ),\\ \frac{1}{\psi \left( {\mathcal {T}}_{1}\right) -\psi \left( a\right) }\int _{a}^{{\mathcal {T}}_{1}}\left| \vartheta _{2}(s,x,y)-\overline{\vartheta _{2} }(x,y)\right| ^{2}\psi ^{\prime }\left( s\right) ds&\le \alpha _{2}({\mathcal {T}}_{1})(1+\left| x\right| ^{2}+\left| y\right| ^{2}), \end{aligned}$$

where \({\lim }_{{\mathcal {T}}_{1}\rightarrow \infty }\alpha _{i} ({\mathcal {T}}_{1})=0.\)

With the above preparation, we show that the original solution \(X_{\epsilon }(\varsigma )\) converges, as \(\epsilon \rightarrow 0\), to the solution \(Z_{\epsilon }(\varsigma )\) of the averaged equation

$$\begin{aligned} Z_{\epsilon }(\varsigma )&=X_{0}+\frac{\epsilon }{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }(\psi \left( \varsigma \right) -\psi \left( s\right) )^{\alpha -1}\overline{\vartheta _{1}}(Z_{\epsilon }(s),Z_{\epsilon }(a+\lambda s))\psi ^{\prime }\left( s\right) ds\nonumber \\&\quad +\frac{\sqrt{\epsilon }}{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }(\psi \left( \varsigma \right) -\psi \left( s\right) )^{\alpha -1} \overline{\vartheta _{2}}(Z_{\epsilon }(s),Z_{\epsilon }(a+\lambda s))\psi ^{\prime }\left( s\right) dB\left( s\right) . \end{aligned}$$
(3.2)

We now state the main result of this research.

Theorem 3.1

Assume that conditions \((\Lambda 1)\)–\((\Lambda 2)\) are satisfied. For any given arbitrarily small number \(\delta _{1}>0\), there exist \(L>a,\) \(\epsilon _{1}\in \left( 0,\epsilon _{0}\right] \) and \(\beta \in \left( 0,1\right) \) such that, for all \(\epsilon \in \left( 0,\epsilon _{1}\right] ,\)

$$\begin{aligned} E \left( \sup _{\varsigma \in \left[ a,L^{\epsilon ^{-\beta }}\right] }\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \le \delta _{1}. \end{aligned}$$
(3.3)

Proof

For any \(\varsigma \in \left[ a,u\right] \subset \left[ a,{\mathcal {T}}\right] ,\)

$$\begin{aligned}&X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \nonumber \\&\quad =\frac{\epsilon }{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{1}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \nonumber \\&\qquad \left. -\overline{\vartheta _{1}}\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }(a+\lambda s)\right) \right] \psi ^{\prime }\left( s\right) ds\nonumber \\&\qquad +\frac{\sqrt{\epsilon }}{\Gamma \left( \alpha \right) }\int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{2}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \nonumber \\&\qquad \left. -\overline{\vartheta _{2} }\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }(a+\lambda s)\right) \right] \psi ^{\prime }\left( s\right) dB\left( s\right) . \end{aligned}$$
(3.4)

Using the elementary inequality

$$\begin{aligned} \left| x_{1}+x_{2}\right| ^{2}\le 2(\left| x_{1}\right| ^{2}+\left| x_{2}\right| ^{2}), \end{aligned}$$
(3.5)

we have

$$\begin{aligned}&E \left( \sup _{a\le \varsigma \le u}\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \nonumber \\&\quad \le \frac{2\epsilon ^{2}}{\Gamma \left( \alpha \right) ^{2}}{} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{1}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\qquad -\left. \left. \overline{\vartheta _{1}}\left( Z_{\epsilon } (s),Z_{\epsilon }(a+\lambda s)\right) \psi ^{\prime }\left( s\right) ds\right] \right| ^{2}\nonumber \\&\qquad +\frac{2\epsilon }{\Gamma \left( \alpha \right) ^{2}}{} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{2}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\qquad -\left. \left. \overline{\vartheta _{2}}\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }(a+\lambda s)\right) \psi ^{\prime }\left( s\right) dB\left( s\right) \right] \right| ^{2}\nonumber \\&=J_{1}+J_{2}. \end{aligned}$$
(3.6)

Recalling inequality (3.5), we get

$$\begin{aligned} J_{1}&\le \frac{4\epsilon ^{2}}{\Gamma \left( \alpha \right) ^{2}} E \underset{a\le \varsigma \le u}{\sup }\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{1}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \vartheta _{1}\left( s,Z_{\epsilon }(s),Z_{\epsilon }(a+\lambda s)\right) \right] \psi ^{\prime }\left( s\right) ds\right| ^{2}\nonumber \\&\quad +\frac{4\epsilon ^{2}}{\Gamma \left( \alpha \right) ^{2}}{} E \underset{a\le \varsigma \le u}{\sup }\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{1}\left( s,Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{1}}\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }(a+\lambda s)\right) \right] \psi ^{\prime }\left( s\right) ds\right| ^{2}\nonumber \\&=J_{11}+J_{12}. \end{aligned}$$
(3.7)

Using \((\Lambda 1)\) and the Cauchy–Schwarz inequality, we get

$$\begin{aligned} J_{11}\le K_{11}\epsilon ^{2}(\psi \left( u\right) -\psi \left( a\right) )\int _{a}^{u}(\psi \left( u\right) -\psi \left( s\right) )^{2\alpha -2} E \left( \sup _{a\le s_{1}\le s}\left| X_{\epsilon }\left( s_{1}\right) -Z_{\epsilon }\left( s_{1}\right) \right| ^{2}\right) \psi ^{\prime }\left( s\right) ds, \end{aligned}$$
(3.8)

where \(K_{11}=\dfrac{8\left( C_{2}^{2}+C_{3}^{2}\right) }{\Gamma \left( \alpha \right) ^{2}}.\) Rewriting the integrand of \(J_{12}\) as the differential of its integral with respect to the upper limit,

$$\begin{aligned} J_{12}&\le \frac{4\epsilon ^{2}}{\Gamma \left( \alpha \right) ^{2}} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}d\left[ \int _{a}^{s}\vartheta _{1}\left( \tau ,Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{1}}\left( Z_{\epsilon } (\tau ),Z_{\epsilon }\left( a+\lambda \tau \right) \right) \psi ^{\prime }\left( \tau \right) d\tau \right] \right| ^{2}, \end{aligned}$$
(3.9)

and, after integrating by parts,

$$\begin{aligned} J_{12}&\le \frac{4\epsilon ^{2}(\alpha -1)^{2}}{\Gamma \left( \alpha \right) ^{2}}{} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \int _{a}^{s}\vartheta _{1}\left( \tau ,Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{1}}\left( Z_{\epsilon } (\tau ),Z_{\epsilon }\left( a+\lambda \tau \right) \right) \psi ^{\prime }\left( \tau \right) d\tau \right) (\psi \left( \varsigma \right) -\psi \left( s\right) )^{\alpha -2}\psi ^{\prime }\left( s\right) ds\right| ^{2}, \end{aligned}$$
(3.10)

then, together with hypothesis \(\left( \Lambda 2\right) \) and the Cauchy–Schwarz inequality, we get

$$\begin{aligned} J_{12}&\le \frac{4\epsilon ^{2}\left( \alpha -1\right) ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha -3}}{\left( 2\alpha -3\right) \Gamma \left( \alpha \right) ^{2}}\nonumber \\&\quad \times E \int _{a}^{u}\left| \int _{a}^{s}\vartheta _{1}\left( \tau ,Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right. \nonumber \\&\quad \left. -\overline{\vartheta _{1}}\left( Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \psi ^{\prime }\left( \tau \right) d\tau \right| ^{2}\psi ^{\prime }\left( s\right) ds\nonumber \\&\le K_{12}\epsilon ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha }, \end{aligned}$$
(3.11)

in which

$$\begin{aligned} K_{12}&=\dfrac{4(\alpha -1)^{2}}{(2\alpha -3)\Gamma (\alpha )^{2}}\sup _{a\le \varsigma \le u}\alpha _{1}(\varsigma )^{2}\left[ 1+E \left( \sup _{a\le \tau \le u}\left| Z_{\epsilon }\left( \tau \right) \right| ^{2}\right) \right. \nonumber \\&\quad +\left. E \left( \sup _{a\le \tau \le u}\left| Z_{\epsilon }\left( a+\lambda \tau \right) \right| ^{2}\right) \right] . \end{aligned}$$
(3.12)

For the second term, we proceed in a similar way:

$$\begin{aligned} J_{2}&\le \frac{4\epsilon }{\Gamma \left( \alpha \right) ^{2}} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{2}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \vartheta _{2}\left( s,Z_{\epsilon }(s),Z_{\epsilon }\left( a+\lambda s\right) \right) \right] \psi ^{\prime }\left( s\right) dB\left( s\right) \right| ^{2}\nonumber \\&\quad +\frac{4\epsilon }{\Gamma \left( \alpha \right) ^{2}}{} E \sup _{a\le \varsigma \le u}\left| \int _{a}^{\varsigma }\left( \psi \left( \varsigma \right) -\psi \left( s\right) \right) ^{\alpha -1}\left[ \vartheta _{2}\left( s,Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{2}}\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right] \psi ^{\prime }\left( s\right) dB\left( s\right) \right| ^{2}\nonumber \\&=J_{21}+J_{22}. \end{aligned}$$
(3.13)

Exploiting Doob's martingale inequality, the Itô isometry, and condition \(\left( \Lambda 1\right) \), we obtain

$$\begin{aligned} J_{21}&\le \frac{4\epsilon }{\Gamma \left( \alpha \right) ^{2}}{} E \int _{a}^{u}\left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -2}\left| \vartheta _{2}\left( s,X_{\epsilon }\left( s\right) ,X_{\epsilon }\left( a+\lambda s\right) \right) \right. \nonumber \\&\quad -\left. \vartheta _{2}\left( s,Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right| ^{2}\psi ^{\prime }\left( s\right) ds\nonumber \\&\le K_{21}\epsilon \int _{a}^{u}\left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -2}{} E \left( \sup _{a\le s_{1}\le s}\left| X_{\epsilon }\left( s_{1}\right) -Z_{\epsilon }\left( s_{1}\right) \right| ^{2}\right) \psi ^{\prime }\left( s\right) ds, \end{aligned}$$
(3.14)

where \(K_{21}=\dfrac{8\left( C_{2}^{2}+C_{3}^{2}\right) }{\Gamma \left( \alpha \right) ^{2}}.\) Using the same argument again,

$$\begin{aligned} J_{22}&\le \frac{4\epsilon }{\Gamma \left( \alpha \right) ^{2}}{} E \int _{a}^{u}\left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -2}\left| \vartheta _{2}\left( s,Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right. \nonumber \\&\quad -\left. \overline{\vartheta _{2}}\left( Z_{\epsilon }\left( s\right) ,Z_{\epsilon }\left( a+\lambda s\right) \right) \right| ^{2}\psi ^{\prime }\left( s\right) ds. \end{aligned}$$
(3.15)

Integrating by parts produces

$$\begin{aligned} J_{22}&\le \frac{4\epsilon }{\Gamma \left( \alpha \right) ^{2}}{} E \int _{a}^{u}\left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -2}d\left[ \int _{a}^{s}\left| \vartheta _{2}\left( \tau ,Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{2}}\left( Z_{\epsilon } (\tau ),Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right| ^{2} \psi ^{\prime }\left( \tau \right) d\tau \right] \nonumber \\&\le \frac{4\epsilon (2\alpha -2)}{\Gamma \left( \alpha \right) ^{2}} E \int _{a}^{u}\left( \int _{a}^{s}\left| \vartheta _{2}\left( \tau ,Z_{\epsilon }\left( \tau \right) ,Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right. \right. \nonumber \\&\quad -\left. \left. \overline{\vartheta _{2}}\left( Z_{\epsilon } (\tau ),Z_{\epsilon }\left( a+\lambda \tau \right) \right) \right| ^{2} \psi ^{\prime }\left( \tau \right) d\tau \right) \left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -3}\psi ^{\prime }\left( s\right) ds, \end{aligned}$$
(3.16)

thanks to the hypothesis \(\left( \Lambda 2\right) ,\) we can conclude

$$\begin{aligned} J_{22}&\le \frac{4\epsilon (2\alpha -2)}{\Gamma \left( \alpha \right) ^{2} }{} E \int _{a}^{u}\left( \sup _{a\le s_{1}\le s}\alpha _{2}\left( s_{1}\right) \left[ 1+E \left( \sup _{a\le \tau \le s}\left| Z_{\epsilon }\left( \tau \right) \right| ^{2}\right) \right. \right. \nonumber \\&\quad +\left. \left. E \left( \sup _{a\le \tau \le s}\left| Z_{\epsilon }\left( a+\lambda \tau \right) \right| ^{2}\right) \right] \right) \left( \psi \left( s\right) -\psi \left( a\right) \right) \left( \psi \left( u\right) -\psi \left( s\right) \right) ^{2\alpha -3}\psi ^{\prime }\left( s\right) ds\nonumber \\&\le K_{22}\epsilon \left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha -1}, \end{aligned}$$
(3.17)

where

$$\begin{aligned} K_{22}&=\frac{3(2\alpha -2)}{\alpha (2\alpha -1)\Gamma \left( \alpha \right) ^{2}}\sup _{a\le \varsigma \le u}\alpha _{2}\left( \varsigma \right) \left[ 1+E \left( \sup _{a\le \tau \le u}\left| Z_{\epsilon }\left( \tau \right) \right| ^{2}\right) \right. \nonumber \\&\quad +\left. E \left( \sup _{a\le \tau \le u}\left| Z_{\epsilon }\left( a+\lambda \tau \right) \right| ^{2}\right) \right] . \end{aligned}$$
(3.18)

Now, plugging estimates (3.8)–(3.17) into (3.6), for each \(u\in \left[ a,{\mathcal {T}}\right] \) we reach

$$\begin{aligned}&E \left( \sup _{a\le \varsigma \le u}\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \le K_{12}\epsilon ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha }+K_{22}\epsilon \left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha -1}\nonumber \\&\qquad +\left( K_{11}\epsilon ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) +K_{21}\epsilon \right) \int _{a}^{u}\left( \psi \left( u\right) -\psi \left( s\right) \right) ^{\left( 2\alpha -1\right) -1}\nonumber \\&\qquad E \left( \sup _{a\le s_{1}\le s}\left| X_{\epsilon }\left( s_{1}\right) -Z_{\epsilon }\left( s_{1}\right) \right| ^{2}\right) \psi ^{\prime }\left( s\right) ds, \end{aligned}$$
(3.19)

and, by the Gronwall–Bellman inequality [32], we get

$$\begin{aligned}&E \left( \sup _{a\le \varsigma \le u}\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \nonumber \\&\quad \le \left( K_{12}\epsilon ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha }+K_{22}\epsilon \left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha -1}\right) \nonumber \\&\qquad \times {\textstyle \sum \limits _{k=0}^{\infty }} \frac{\left( \left( K_{11}\epsilon ^{2}\left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha }+K_{21}\epsilon \left( \psi \left( u\right) -\psi \left( a\right) \right) ^{2\alpha -1}\right) \Gamma \left( 2\alpha -1\right) \right) ^{k}}{\Gamma \left( k\left( 2\alpha -1\right) +1\right) }. \end{aligned}$$
(3.20)

This implies that we can select \(\beta \in \left( 0,1\right) \) and \(L>a\) such that, for every \(\varsigma \in \left[ a,L^{\epsilon ^{-\beta }}\right] \subseteq \left[ a,{\mathcal {T}}\right] \), we have

$$\begin{aligned} E \left( \sup _{a\le \varsigma \le L^{\epsilon ^{-\beta }}}\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \le C\epsilon ^{1-\beta }, \end{aligned}$$
(3.21)

where

$$\begin{aligned} C&=\left( K_{12}\left( \psi \left( L\right) -\psi \left( a\right) \right) ^{2\alpha }\epsilon ^{1+\beta -2\alpha \beta }+K_{22}\left( \psi \left( L\right) -\psi \left( a\right) \right) ^{2\alpha -1}\epsilon ^{2\beta \left( 1-\alpha \right) }\right) \nonumber \\&\quad \times {\textstyle \sum \limits _{k=0}^{\infty }} \frac{\left( \left( K_{11}\left( \psi \left( L\right) -\psi \left( a\right) \right) ^{2\alpha }\epsilon ^{2\left( 1-\alpha \beta \right) } +K_{21}\left( \psi \left( L\right) -\psi \left( a\right) \right) ^{2\alpha -1}\epsilon ^{1+\beta \left( 1-2\alpha \right) }\right) \Gamma \left( 2\alpha -1\right) \right) ^{k}}{\Gamma \left( k\left( 2\alpha -1\right) +1\right) }, \end{aligned}$$
(3.22)

is a constant. Hence, for any given number \(\delta _{1}\), there exists \(\epsilon _{1}\in \left( 0,\epsilon _{0}\right] \) such that, for each \(\epsilon \in \left( 0,\epsilon _{1}\right] \) and \(\varsigma \in \left[ a,L^{\epsilon ^{-\beta }}\right] \), we have

$$\begin{aligned} {E}\left( \sup _{a\le \varsigma \le L^{\epsilon ^{-\beta }} }\left| X_{\epsilon }\left( \varsigma \right) -Z_{\epsilon }\left( \varsigma \right) \right| ^{2}\right) \le \delta _{1}. \end{aligned}$$
(3.23)

\(\square \)
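To give a quantitative feel for the estimate behind (3.21), the short Python sketch below (ours, not part of the proof) evaluates the Mittag-Leffler-type series on the right-hand side of (3.20) for sample, hypothetical values of the constants \(K_{11},K_{12},K_{21},K_{22}\) and of \(\psi (u)-\psi (a)\), and shows how the resulting bound shrinks as \(\epsilon \) decreases.

```python
from math import gamma

def ml_series(x, rho, kmax=100):
    """Truncated one-parameter Mittag-Leffler sum  sum_k x**k / Gamma(k*rho + 1)."""
    return sum(x ** k / gamma(k * rho + 1) for k in range(kmax))

def rhs_320(eps, alpha, K11, K12, K21, K22, delta_psi):
    """Right-hand side of (3.20), with delta_psi standing for psi(u) - psi(a)."""
    rho = 2 * alpha - 1
    pre = (K12 * eps ** 2 * delta_psi ** (2 * alpha)
           + K22 * eps * delta_psi ** (2 * alpha - 1))
    arg = (K11 * eps ** 2 * delta_psi ** (2 * alpha)
           + K21 * eps * delta_psi ** (2 * alpha - 1)) * gamma(rho)
    return pre * ml_series(arg, rho)

if __name__ == "__main__":
    # hypothetical constants, chosen only to visualise the epsilon-dependence
    consts = dict(alpha=0.8, K11=1.0, K12=1.0, K21=1.0, K22=1.0, delta_psi=1.0)
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        print(eps, rhs_320(eps, **consts))
```

With these sample values the printed bound decreases roughly proportionally to \(\epsilon \), consistent with the factor \(\epsilon ^{1-\beta }\) appearing in (3.21).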

4 Example

Let \(\psi \left( \varsigma \right) =\log \varsigma \) and consider the FSDPE

$$\begin{aligned} \left\{ \begin{array}{llll} {\mathfrak {D}}_{a}^{\alpha ;\psi }X_{\epsilon }(\varsigma )=\epsilon \left( \log \left( 1+\left| X_{\epsilon }\left( \varsigma \right) \right| \right) +\sin \left( X_{\epsilon }\left( a+\lambda \varsigma \right) \right) \right) \log ^{3}(\varsigma )\\ +\sqrt{\epsilon }\psi ^{\prime }\left( \varsigma \right) dB\left( \varsigma \right) ,\text { }\varsigma \in \left[ 1,e\right] ,\\ X(1)=0, \end{array} \right. \end{aligned}$$
(4.1)

where \(a=1,{\mathcal {T}}=e,\lambda \in \left( 0,\frac{e-1}{e}\right) ,\alpha \in \left( \frac{1}{2},1\right) .\) The coefficients

$$\begin{aligned} \vartheta _{1}(\varsigma ,X_{\epsilon },Y_{\epsilon })=\left[ \log \left( 1+\left| X_{\epsilon }\right| \right) +\sin \left( Y_{\epsilon }\right) \right] \log ^{3}(\varsigma ) \end{aligned}$$

and

$$\begin{aligned} \vartheta _{2}(\varsigma ,X_{\epsilon },Y_{\epsilon })=1, \end{aligned}$$

satisfy condition \((\Lambda 1),\) so the FSDPE (4.1) has a unique solution.

Define

$$\begin{aligned} \overline{\vartheta _{1}}(X_{\epsilon },Y_{\epsilon })= & {} \int _{1}^{e}\vartheta _{1}(\varsigma ,X_{\epsilon },Y_{\epsilon })\psi ^{\prime }\left( \varsigma \right) d\varsigma \\= & {} \frac{1}{4}\left( \log \left( 1+\left| X_{\epsilon }\right| \right) +\sin \left( Y_{\epsilon }\right) \right) ,\overline{\vartheta _{2}}(X_{\epsilon },Y_{\epsilon })=1, \end{aligned}$$

One checks directly that \((\Lambda 2)\) holds, so the averaged form of (4.1) is

$$\begin{aligned} {\mathfrak {D}}_{a}^{\alpha ;\psi }Z_{\epsilon }(\varsigma )=\frac{\epsilon }{4}\left[ \log \left( 1+\left| Z_{\epsilon }\left( \varsigma \right) \right| \right) +\sin \left( Z_{\epsilon }\left( a+\lambda \varsigma \right) \right) \right] +\sqrt{\epsilon }\psi ^{\prime }\left( \varsigma \right) dB\left( \varsigma \right) ,\text { }Z_{\epsilon } (a)=X_{0}. \end{aligned}$$
(4.2)

By Theorem 3.1, as \(\epsilon \rightarrow 0\), the solutions \(X_{\epsilon }\left( \varsigma \right) \) and \(Z_{\epsilon }(\varsigma )\) of Eqs. (4.1) and (4.2) are equivalent in the mean-square sense.
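This mean-square closeness can also be observed numerically. The Python sketch below is a rough Monte Carlo experiment (ours, with arbitrary values of \(\lambda \), \(\alpha \), the grid size, and the number of paths) that solves the discretized integral forms of (4.1) and (4.2) on the same Brownian paths by Picard-type sweeps and reports an estimate of \(E\sup _{\varsigma }\left| X_{\epsilon }(\varsigma )-Z_{\epsilon }(\varsigma )\right| ^{2}\) for several values of \(\epsilon \); it is an illustration, not a convergence study, and convergence of the sweeps is assumed rather than proved.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

a, T, lam, alpha = 1.0, np.e, 0.4, 0.75        # lam in (0,(e-1)/e), alpha in (1/2,1)
psi, dpsi = np.log, lambda s: 1.0 / s          # psi(t) = log t

def drift_orig(s, x, y):
    """theta_1 of (4.1): x stands for X(s), y for X(a + lam*s)."""
    return (np.log1p(np.abs(x)) + np.sin(y)) * np.log(s) ** 3

def drift_avg(s, x, y):
    """Averaged drift: (1/(psi(e)-psi(1))) * int_1^e log(t)^3 / t dt = 1/4."""
    return 0.25 * (np.log1p(np.abs(x)) + np.sin(y))

def weights(t):
    """Lower-triangular left-point quadrature weights for the psi-fractional integrals."""
    n = len(t) - 1
    W = np.zeros((n, n))
    for i in range(1, n + 1):
        W[i - 1, :i] = (psi(t[i]) - psi(t[:i])) ** (alpha - 1) * dpsi(t[:i])
    return W

def solve(drift, eps, dB, t, W, h, sweeps=40):
    """Picard sweeps on the discretized integral form of (3.1)/(3.2);
    theta_2 = 1 and X(1) = 0 as in the example."""
    X = np.zeros(len(t))
    for _ in range(sweeps):
        X_old = X.copy()
        X_pan = np.interp(a + lam * t[:-1], t, X_old)   # pantograph values
        d1 = drift(t[:-1], X_old[:-1], X_pan)
        X[1:] = (eps * (W @ d1) * h + np.sqrt(eps) * (W @ dB)) / gamma(alpha)
        if np.max(np.abs(X - X_old)) < 1e-9:
            break
    return X

if __name__ == "__main__":
    n, paths = 400, 50
    t = np.linspace(a, T, n + 1)
    h = (T - a) / n
    W = weights(t)
    for eps in (0.1, 0.05, 0.01):
        sq = []
        for _ in range(paths):
            dB = rng.normal(0.0, np.sqrt(h), n)         # same path for both systems
            X = solve(drift_orig, eps, dB, t, W, h)
            Z = solve(drift_avg, eps, dB, t, W, h)
            sq.append(np.max((X - Z) ** 2))
        print(eps, np.mean(sq))  # estimate of E sup_t |X_eps(t) - Z_eps(t)|^2
```

The printed estimates are expected to shrink as \(\epsilon \) decreases, in line with Theorem 3.1.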

5 Conclusion

Numerous studies have investigated the application of the averaging principle to fractional stochastic differential equations, with a focus on approximating solutions in the mean-square sense. In this work, we addressed a particular class of \(\psi \)-Caputo fractional stochastic differential pantograph equations (FSDPEs) driven by Brownian motion, namely Eq. (1.1). Under the two conditions stated above, we established the desired averaging result and, in doing so, extended the classical Khasminskii approach to \(\psi \)-Caputo FSDPEs. In forthcoming research we plan to address:

1. Exploration of variable-order fractional differential equations and their modeling.

2. Investigating the averaging principle with impulsive processes.