Optimal Recovery of a Square Integrable Function from Its Observations with Gaussian Errors

STOCHASTIC SYSTEMS

Abstract

This paper is devoted to the mean-square optimal stochastic recovery of a square integrable function with respect to the Lebesgue measure defined on a finite-dimensional compact set. We justify an optimal recovery procedure for such a function observed at each point of its compact domain with Gaussian errors. The existence of the optimal stochastic recovery procedure as well as its unbiasedness and consistency are established. In addition, we propose and justify a near-optimal stochastic recovery procedure in order to: (i) estimate the dependence of the standard deviation on the number of orthogonal functions and the number of observations and (ii) find the number of orthogonal functions that minimizes the standard deviation.

REFERENCES

  1. Borovkov, A.A., Matematicheskaya statistika, Novosibirsk: Nauka, 1997. Translated under the title Mathematical Statistics, 1st ed., Gordon and Breach, 1999.

  2. Ivchenko, G.I. and Medvedev, Yu.I., Vvedenie v matematicheskuyu statistiku, Moscow: LKI, 2010. Translated under the title Mathematical Statistics, URSS, 1990.

  3. Parzen, E., On Estimation of a Probability Density Function and Mode, Ann. Math. Statist., 1962, vol. 33, no. 3, pp. 1065–1076. https://doi.org/10.1214/aoms/1177704472

  4. Rosenblatt, M., Curve Estimates, Ann. Math. Statist., 1971, vol. 42, no. 6, pp. 1815–1842. https://doi.org/10.1214/aoms/1177693050

  5. Murthy, V.K., Nonparametric Estimation of Multivariate Densities with Applications, in Multivariate Analysis, 1966, pp. 43–56.

  6. Stratonovich, R.L., The Efficiency of Mathematical Statistics Methods in the Design of Algorithms to Recover an Unknown Function, Izv. Akad. Nauk SSSR. Tekh. Kibern., 1969, no. 1, pp. 32–46.

  7. Watson, G.S., Density Estimation by Orthogonal Series, Ann. Math. Statist., 1969, vol. 40, no. 4, pp. 1496–1498. https://doi.org/10.1214/aoms/1177697523

  8. Konakov, V.D., Non-Parametric Estimation of Density Functions, Theory of Probability & Its Applications, 1973, vol. 17, iss. 2, pp. 361–362. https://doi.org/10.1137/1117042

  9. Chentsov, N.N., Statisticheskie reshayushchie pravila i optimal’nye vyvody, Moscow: Fizmatlit, 1972. Translated under the title Statistical Decision Rules and Optimal Inference, American Mathematical Society, 1982.

  10. Vapnik, V.N., Vosstanovlenie zavisimostei po empiricheskim dannym, Moscow: Nauka, 1979. Translated under the title Estimation of Dependences Based on Empirical Data, New York: Springer, 2010.

  11. Ibragimov, I.A. and Has’minskii, R.Z., Assimptoticheskaya teoriya otsenivaniya, Moscow: Nauka, 1979. Translated under the title Statistical Estimation: Asymptotic Theory, New York: Springer, 1981.

  12. Nadaraya, E.A., Neparametricheskoe otsenivanie plotnostei veroyatnostei i krivoi regressii, Tbilisi: Tbilissk. Gos. Univ., 1983. Translated under the title Nonparametric Estimation of Probability Densities and Regression Curves, Dordrecht: Springer, 1988. https://doi.org/10.1007/978-94-009-2583-0

  13. Nemirovskij, A.S., Polyak, B.T., and Tsybakov, A.B., Signal Processing by the Nonparametric Maximum-Likelihood Method, Problems of Information Transmission, 1984, vol. 20, no. 3, pp. 177–192.

  14. Darkhovskii, B.S., On a Stochastic Renewal Problem, Theory of Probability & Its Applications, 1999, vol. 43, no. 2, pp. 282–288. https://doi.org/10.1137/S0040585X9797688X

  15. Darkhovsky, B.S., Stochastic Recovery Problem, Problems of Information Transmission, 2008, vol. 44, no. 4, pp. 303–314. https://doi.org/10.1134/S0032946008040030

  16. Ibragimov, I.A., Estimation of Multivariate Regression, Theory of Probability & Its Applications, 2004, vol. 48, no. 2, pp. 256–272. https://doi.org/10.1137/S0040585X9780385

  17. Tsybakov, A.B., Introduction to Nonparametric Estimation, New York: Springer, 2009.

  18. Bulgakov, S.A. and Khametov, V.M., Recovery of a Square Integrable Function from Observations with Gaussian Errors, Upravl. Bol’sh. Sist., 2015, vol. 54, pp. 45–65.

  19. Levit, B., On Optimal Cardinal Interpolation, Mathematical Methods of Statistics, 2018, vol. 27, no. 4, pp. 245–267. https://doi.org/10.3103/S1066530718040014

  20. Juditsky, A.B. and Nemirovski, A.S., Signal Recovery by Stochastic Optimization, Autom. Remote Control, 2019, vol. 80, no. 10, pp. 1878–1893. https://doi.org/10.1134/S0005231019100088

  21. Golubev, G.K., On Adaptive Estimation of Linear Functionals from Observations against White Noise, Problems of Information Transmission, 2020, vol. 56, no. 2, pp. 185–200. https://doi.org/10.31857/S0555292320020047

  22. Bulgakov, S.A., Gorshkova, V.M., and Khametov, V.M., Stochastic Recovery of Square-Integrable Functions, Herald of the Bauman Moscow State Technical University, 2020, vol. 93, no. 6, pp. 4–22. https://doi.org/10.18698/1812-3368-2020-6-4-22

  23. Shiryaev, A.N., Veroyatnost’, Moscow: Nauka, 1980. Translated under the title Probability, Graduate Texts in Mathematics, vol. 95, New York: Springer, 1996. https://doi.org/10.1007/978-1-4757-2539-1

  24. Kolmogorov, A.N. and Fomin, S.V., Elementy teorii funktsii i funktsional’nogo analiza, Moscow: Nauka, 1976, 4th ed. Translated under the title Elements of the Theory of Functions and Functional Analysis, Dover, 1957.

  25. Kashin, B.S. and Saakyan, A.A., Ortogonal’nye ryady (Orthogonal Series), Moscow: Nauka, 1984.

  26. Bertsekas, D. and Shreve, S.E., Stochastic Optimal Control: The Discrete-Time Case, Athena Scientific, 1996.

Author information

Correspondence to S. A. Bulgakov or V. M. Khametov.

Additional information

This paper was recommended for publication by O.N. Granichin, a member of the Editorial Board.

Appendices

APPENDIX A

A.1. The proof of Proposition 2. Let f(x) ∈ L2(K, Λ) and \({{\hat {f}}_{m}}(x)\) ∈ \({{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})\) be some projection estimator. Since the system \({{\{ {{\varphi }_{j}}(x)\} }_{{j\; \geqslant \;0}}}\) is complete and orthonormal, from (1), (13), and Fubini’s theorem we have the equalities

$$\textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} = \textsf{E}\sum\limits_{j = 0}^\infty {{{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}}^{2}}} = \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}}^{2}}} $$

for any m ∈ \({{\mathbb{Z}}^{ + }}\)\0.

Due to Proposition 1,

$$\mathop {\inf }\limits_{{{{\hat {f}}}_{m}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} = \mathop {\inf }\limits_{{{{\hat {c}}}_{{j,m}}} \in {{{\tilde {\mathbb{M}}}}_{{2,m}}}(\textsf{P})} \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}}^{2}}.} $$
(A.1)

Recall that \({{\tilde {\mathbb{M}}}_{{2,m}}}(\textsf{P})\) is the set of \(\mathcal{F}_{m}^{{{{y}^{j}}}}\)-measurable square-integrable random variables. Therefore, from (A.1) it follows that

$$\mathop {\inf }\limits_{{{{\hat {f}}}_{m}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} \; \geqslant \;\sum\limits_{j = 0}^\infty {\mathop {\inf }\limits_{{{{\hat {c}}}_{{j,m}}} \in {{{\tilde {\mathbb{M}}}}_{{2,m}}}(\textsf{P})} \textsf{E}{{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}}^{2}}.} $$

Hence, the estimator \(\hat {c}_{{j,m}}^{0}\) is optimal if and only if

$$\mathop {\inf }\limits_{{{{\hat {c}}}_{{j,m}}} \in {{{\tilde {\mathbb{M}}}}_{{2,m}}}(\textsf{P})} \textsf{E}{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}^{2}} = \textsf{E}{{\left[ {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right]}^{2}}.$$

Thus, given the existence of \(\hat {c}_{{j,m}}^{0}\),

$$\mathop {\inf }\limits_{{{{\hat {f}}}_{m}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} \; \geqslant \;\sum\limits_{j = 0}^\infty {\textsf{E}{{{\left[ {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right]}}^{2}}.} $$

In view of (A.1) and this inequality, we have

$$\mathop {\inf }\limits_{{{{\hat {f}}}_{m}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} \; \geqslant \;\sum\limits_{j = 0}^\infty {\textsf{E}{{{\left[ {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right]}}^{2}}} $$
$$ = \mathop {\inf }\limits_{{{{\hat {c}}}_{{j,m}}} \in {{{\tilde {\mathbb{M}}}}_{{2,m}}}(\textsf{P})} \textsf{E}\sum\limits_{j = 0}^\infty {{{{\left[ {{{c}_{j}} - {{{\hat {c}}}_{{j,m}}}} \right]}}^{2}}} = \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - \hat {f}_{m}^{0}(x)} \right]}}^{2}}dx,} $$

where \(\hat {f}_{m}^{0}(x) \triangleq \sum\nolimits_{j = 0}^\infty {\hat {c}_{{j,m}}^{0}{{\varphi }_{j}}} (x)\). The proof of this proposition is complete.
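The chain of equalities used above is Parseval’s identity for the complete orthonormal system \({{\{ {{\varphi }_{j}}(x)\} }_{{j\; \geqslant \;0}}}\): the integrated squared recovery error equals the sum of squared coefficient errors. The following is a minimal numerical sketch of this reduction, assuming a cosine basis on K = [0, 1] and a toy target function of our own choosing (none of these particulars are taken from the paper).

```python
import numpy as np

# Illustrative setup: K = [0, 1] with the orthonormal cosine basis
# phi_0(x) = 1, phi_j(x) = sqrt(2) cos(pi j x) for j >= 1.
x = np.linspace(0.0, 1.0, 20001)

def phi(j, x):
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * x)

f = lambda t: t * (1.0 - t)                       # toy square integrable target
J = 40                                            # number of coefficients in the check
c = np.array([(f(x) * phi(j, x)).mean() for j in range(J)])      # Fourier coefficients of f
c_hat = c + 0.01 * np.random.default_rng(0).standard_normal(J)   # perturbed "estimates"

f_J = sum(c[j] * phi(j, x) for j in range(J))     # truncated expansion of f
f_hat = sum(c_hat[j] * phi(j, x) for j in range(J))
print(((f_J - f_hat) ** 2).mean())                # squared error integrated over [0, 1] (grid approximation)
print(np.sum((c - c_hat) ** 2))                   # sum of squared coefficient errors
```

The two printed values agree up to quadrature error, which is exactly the reduction exploited in (A.1).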

A.2. The proof of Theorem 1. By Proposition 2, there exists an optimal projection estimator \(\hat {f}_{m}^{0}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})\) if and only if (19) holds. Therefore,

$$\mathop {\inf }\limits_{{{{\hat {f}}}_{m}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{{\hat {f}}}_{m}}(x)} \right]}}^{2}}dx} = \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - \hat {f}_{m}^{0}(x)} \right]}}^{2}}dx} .$$

The main content of Theorem 1 is Eqs. (20)–(22).

To prove them, we consider \(\textsf{E}\int\limits_{\text{K}} {{{{[f(x) - \hat {f}_{m}^{0}(x)]}}^{2}}dx} \). From Proposition 2 (see formulas (A.1) and (25)) it follows that

$$\textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - \hat {f}_{m}^{0}(x)} \right]}}^{2}}dx} = \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left| {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right|}}^{2}}.} $$
(A.2)

Hence, for each j ∈ \({{\bar {\mathbb{Z}}}^{ + }}\), it is required to construct a mean-square optimal estimate of the Fourier coefficient cj from the observations (\(y_{1}^{j}\), …, \(y_{m}^{j}\)). Note that due to (7), the random variable \(y_{m}^{j}\) has the Gaussian distribution: Law(\(y_{m}^{j}\)) = \(\mathcal{N}({{c}_{j}},\sigma _{j}^{2})\). As is well known [1, 2], in this case, the optimal estimate \(\hat {c}_{{j,m}}^{0}\) of the Fourier coefficient cj from the error-containing observations (\(y_{1}^{j}\), …, \(y_{m}^{j}\)) coincides with the maximum likelihood estimate. Thus, \(\hat {c}_{{j,m}}^{0}\) has the form (19). We multiply both sides of (19) by φj(x) and perform summation over all j to obtain (18).

Now, we find the value \(\textsf{E}\int_{\text{K}} {{{{[f(x) - \hat {f}_{m}^{0}(x)]}}^{2}}dx} \). Due to (A.2), (7), (19) and Proposition 1, we have

$$\textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - \hat {f}_{m}^{0}(x)} \right]}}^{2}}dx} = \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left| {\hat {c}_{{j,m}}^{0} - {{c}_{j}}} \right|}}^{2}}} $$
$$ = \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left| {\frac{1}{m}\sum\limits_{k = 1}^m {y_{k}^{j}} - {{c}_{j}}} \right|}}^{2}}} = \sum\limits_{j = 0}^\infty {\textsf{E}{{{\left( {\frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}} } \right)}}^{2}} = \sum\limits_{j = 0}^\infty {\frac{{\sigma _{j}^{2}}}{m}.} } $$

The proof of this theorem is complete.
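Since each \(\hat {c}_{{j,m}}^{0}\) is simply the sample mean of m independent Gaussian observations of cj, the value σj2/m of its mean-square error is easy to confirm by simulation. A small sketch with arbitrary illustrative values of cj, σj, and m (our choices, not the paper’s):

```python
import numpy as np

rng = np.random.default_rng(1)
c_j, sigma_j, m = 0.7, 0.3, 25                    # illustrative values only
trials = 200_000

# y_k^j = c_j + n_k^j with n_k^j ~ N(0, sigma_j^2); the optimal estimate is the sample mean.
y = c_j + sigma_j * rng.standard_normal((trials, m))
c_hat = y.mean(axis=1)

print(np.mean((c_hat - c_j) ** 2))                # empirical mean-square error
print(sigma_j ** 2 / m)                           # theoretical value sigma_j^2 / m
```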

A.3. The proof of Corollary 1. From (20)–(22) and Fubini’s theorem we obtain (23) since

$$\hat {f}_{m}^{0}(x) = \sum\limits_{j = 0}^\infty {\hat {c}_{{j,m}}^{0}{{\varphi }_{j}}(x)} = \sum\limits_{j = 0}^\infty {\frac{1}{m}\sum\limits_{k = 1}^m {y_{k}^{j}{{\varphi }_{j}}(x)} } = \frac{1}{m}\sum\limits_{k = 1}^m {\sum\limits_{j = 0}^\infty {y_{k}^{j}{{\varphi }_{j}}(x)} } = \frac{1}{m}\sum\limits_{k = 1}^m {{{y}_{k}}(x).} $$

The proof of this corollary is complete.
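Corollary 1 thus says that the optimal projection estimator requires no explicit Fourier expansion: it is the pointwise average of the m observed copies of the function. A sketch on a grid, with an illustrative target and with i.i.d. Gaussian errors at every grid point (a simplification of the paper’s noise model, where the errors are specified through their Fourier coefficients \(n_{k}^{j}\)):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 501)
f = np.sin(2.0 * np.pi * x)                        # illustrative target on K = [0, 1]
m, sigma = 200, 0.5

y = f + sigma * rng.standard_normal((m, x.size))   # observed copies y_k(x), k = 1, ..., m
f_hat = y.mean(axis=0)                             # \hat f_m^0(x) = (1/m) sum_k y_k(x)

print(np.sqrt(((f_hat - f) ** 2).mean()))          # L2 error; decreases like 1/sqrt(m)
```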

A.4. The proof of Theorem 2. From (7), (20), and (21), by Fubini’s theorem, we have

$$\textsf{E}{\kern 1pt} \hat {f}_{m}^{0}(x) = \textsf{E}\sum\limits_{j = 0}^\infty {\hat {c}_{{j,m}}^{0}{{\varphi }_{j}}(x)} = \sum\limits_{j = 0}^\infty {{{\varphi }_{j}}(x)\textsf{E}{\kern 1pt} \hat {c}_{{j,m}}^{0}} = \sum\limits_{j = 0}^\infty {{{\varphi }_{j}}(x)\frac{1}{m}\textsf{E}} \sum\limits_{k = 1}^m {y_{k}^{j}} $$
$$ = \sum\limits_{j = 0}^\infty {{{\varphi }_{j}}(x)\frac{1}{m}\sum\limits_{k = 1}^m {\left( {{{c}_{j}} + \textsf{E}{\kern 1pt} n_{k}^{j}} \right)} } = \sum\limits_{j = 0}^\infty {{{\varphi }_{j}}(x){{c}_{j}}} = f(x)$$

for any x ∈ K and m ∈ \({{\mathbb{Z}}^{ + }}\)\0. The proof of this theorem is complete.

A.5. The proof of Theorem 3. It is required to establish that

$$\hat {f}_{m}^{0}(x)\;\xrightarrow[{m \to \infty }]{\textsf{P}}\;f(x)$$

for almost all x ∈ K. It suffices to demonstrate that the variance of the estimator \(\hat {f}_{m}^{0}(x)\) vanishes as m → ∞.

For each x ∈ K, we calculate the variance \(\textsf{D}{\kern 1pt} \hat {f}_{m}^{0}(x)\) of the estimator \(\hat {f}_{m}^{0}(x)\). From (20), (7), and (21), by Fubini’s theorem, we have

$$\textsf{D}{\kern 1pt} \hat {f}_{m}^{0}(x) = \textsf{E}{{\left[ {\hat {f}_{m}^{0}(x) - f(x)} \right]}^{2}} = \textsf{E}{{\left[ {\sum\limits_{j = 0}^\infty {\left( {{{{\hat {c}}}_{{j,m}}} - {{c}_{j}}} \right){{\varphi }_{j}}(x)} } \right]}^{2}}$$
$$ = \textsf{E}{{\left[ {\sum\limits_{j = 0}^\infty {\left( {\frac{1}{m}\sum\limits_{k = 1}^m {y_{k}^{j}} - {{c}_{j}}} \right){{\varphi }_{j}}(x)} } \right]}^{2}} = \textsf{E}{{\left[ {\sum\limits_{j = 0}^\infty {\frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x)} } } \right]}^{2}} = \frac{1}{m}\sum\limits_{j = 0}^\infty {\varphi _{j}^{2}(x)\sigma _{j}^{2}.} $$

Since the series \(\sum\limits_{j = 0}^\infty {\varphi _{j}^{2}(x)\sigma _{j}^{2}} \) converges for almost all x ∈ K, the latter equality yields the desired result. The proof of this theorem is complete.

APPENDIX B

B.1. The proof of Theorem 4. We begin with the first assertion. According to the definition of Vm(N),

$${{V}_{m}}(N) = \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{f}_{N}}(x) + {{f}_{N}}(x) - \hat {f}_{{m,N}}^{0}(x)} \right]}}^{2}}dx} $$
$$ = \textsf{E}\int\limits_{\text{K}} {{{{\left[ {\sum\limits_{j = 0}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} - \sum\limits_{j = 0}^N {{{c}_{j}}{{\varphi }_{j}}(x)} + \sum\limits_{j = 0}^N {{{c}_{j}}{{\varphi }_{j}}(x)} - \sum\limits_{j = 0}^N {\hat {c}_{{j,m}}^{0}{{\varphi }_{j}}(x)} } \right]}}^{2}}dx} $$
$$ = \textsf{E}\int\limits_{\text{K}} {{{{\left[ {\sum\limits_{j = N + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} + \sum\limits_{j = 0}^N {\left( {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right){{\varphi }_{j}}(x)} } \right]}}^{2}}dx.} $$

Hence, for any m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and N ∈ \({{\mathbb{Z}}^{ + }}\), the standard deviation Vm(N) is given by

$${{V}_{m}}(N) = \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \textsf{E}\sum\limits_{j = 0}^N {{{{\left( {{{c}_{j}} - \hat {c}_{{j,m}}^{0}} \right)}}^{2}}.} $$
(B.1)

Since \(\hat {c}_{{j,m}}^{0} = \frac{1}{m}\sum\nolimits_{k = 1}^m {y_{k}^{j}} \), by (7), we have

$$\hat {c}_{{j,m}}^{0} = \frac{1}{m}\sum\limits_{k = 1}^m {\left( {{{c}_{j}} + n_{k}^{j}} \right)} = {{c}_{j}} + \frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}.} $$
(B.2)

In view of (B.2) and conditions (ni), i = \(\overline {1,3} \), by Fubini’s theorem, formula (B.1) reduces to (23). Indeed,

$$\begin{gathered} {{V}_{m}}(N) = \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \textsf{E}\sum\limits_{j = 0}^N {{{{\left( {\frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}} } \right)}}^{2}}} \\ = \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \sum\limits_{j = 0}^N {\frac{1}{{{{m}^{2}}}}\textsf{E}\sum\limits_{k = 1}^m {n_{k}^{j}} } \sum\limits_{k' = 1}^m {n_{{k'}}^{j}} \\ = \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} = \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \frac{1}{m}\sum\limits_{j = 0}^N {\sigma _{j}^{2}.} \\ \end{gathered} $$
(B.3)

Thus, the first assertion is true.
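Formula (B.3) is the familiar bias–variance split: the tail \(\sum\nolimits_{j = N + 1}^\infty {c_{j}^{2}} \) decreases with N, while the accumulated noise term \(\frac{1}{m}\sum\nolimits_{j = 0}^N {\sigma _{j}^{2}} \) increases with N. The sketch below evaluates both terms for illustrative, summable coefficient and variance sequences of our own choosing (they are not taken from the paper).

```python
import numpy as np

# Illustrative spectra: c_j^2 = 2^{-j}, sigma_j^2 = 0.5 (j + 1)^{-2}, both summable.
J = 400                                            # truncation of the infinite sums
j = np.arange(J)
c2 = 2.0 ** (-j)
sigma2 = 0.5 / (j + 1.0) ** 2

def V(N, m):
    """V_m(N) from (B.3): squared truncation bias plus averaged observation noise."""
    return np.sum(c2[N + 1:]) + np.sum(sigma2[: N + 1]) / m

m = 100
print([round(V(N, m), 5) for N in (0, 5, 10, 20, 40)])   # first falls, then slowly rises
```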

Now, we prove the second assertion of Theorem 4. According to the first assertion, for any N ∈ \({{\mathbb{Z}}^{ + }}\), the standard deviation Vm(N) of the estimator \(\hat {f}_{{m,N}}^{0}(x)\) ∈ \({{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})\) has the form (B.3).

For each m, it is required to derive a lower bound for Vm(N). Obviously, Vm(N) consists of two terms, namely, a monotonically decreasing sequence (the first term) and a monotonically increasing sequence (the second term). Therefore,

$$\begin{gathered} \mathop {\inf }\limits_{N \in {{{\bar {\mathbb{Z}}}}^{ + }}} {{V}_{m}}(N) = \mathop {\inf }\limits_{N \in {{{\bar {\mathbb{Z}}}}^{ + }}} \left( {\sum\limits_{j = N + 1}^\infty {c_{j}^{2}} + \sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} } \right) \\ = \max \left( {\mathop {\lim }\limits_{N \to \infty } \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} ,\mathop {\lim }\limits_{N \to 0} \sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} } \right) = \max \left( {0,\frac{{\sigma _{0}^{2}}}{m}} \right)\; \geqslant \;{{C}_{{10}}} > 0. \\ \end{gathered} $$

The proof of this theorem is complete.

B.2. The proof of Corollary 2. The desired result obviously follows from Theorem 4 (see formula (27)).

B.3. The proof of Theorem 5. Due to Theorem 4, Vm(N) can be represented as (23). Hence, it consists of two terms:

—The first term is the series \(\sum\nolimits_{j = N + 1}^\infty {c_{j}^{2}} \), which converges by the convergence of the series \(\sum\nolimits_{j = 0}^\infty {c_{j}^{2}} \) = \(\left\| f \right\|_{{{{L}_{2}}({\text{K}},\Lambda )}}^{2}\) < ∞. Obviously, the sequence \({{\left\{ {\sum\nolimits_{j = N}^\infty {c_{j}^{2}} } \right\}}_{{N\; \geqslant \;0}}}\) is nonincreasing with increasing N, i.e., \(\sum\nolimits_{j = N + 1}^\infty {c_{j}^{2}} \) \(\leqslant \) \(\sum\nolimits_{j = N}^\infty {c_{j}^{2}} \); as a result,

$$\mathop {\lim }\limits_{N \to \infty } \sum\limits_{j = N}^\infty {c_{j}^{2} = 0.} $$

—The second term is the convergent nondecreasing sequence \({{\left\{ {\sum\nolimits_{j = 0}^N {\sigma _{j}^{2}} } \right\}}_{{N\; \geqslant \;0}}}\left( {\sum\nolimits_{j = 0}^\infty {\sigma _{j}^{2}} = {{\sigma }^{2}} < \infty } \right)\). Therefore, for each m ∈ \({{\mathbb{Z}}^{ + }}\)\0, we have the sets

$$A_{m}^{1} \triangleq \left\{ {j \in {{{\bar {\mathbb{Z}}}}^{ + }}:\frac{{\sigma _{j}^{2}}}{m} - c_{j}^{2}\; \geqslant \;0} \right\} \ne \varnothing ,$$
$$A_{m}^{2} \triangleq \left\{ {j \in {{{\bar {\mathbb{Z}}}}^{ + }}:\frac{{\sigma _{j}^{2}}}{m} - c_{j}^{2}\;\leqslant \;0} \right\} \ne \varnothing .$$

If N ∈ \(A_{m}^{1}\) (N ∈ \(A_{m}^{2}\)), Corollary 2 implies the inequality

$${{V}_{m}}(N + 1)\; \geqslant \;{{V}_{m}}(N)$$
$$({{V}_{m}}(N - 1)\; \geqslant \;{{V}_{m}}(N),\;\;{\text{respectively)}}{\text{.}}$$

Obviously, \(A_{m}^{1} \cap A_{m}^{2} \ne \varnothing \) and there exists a number N0(m) ∈ \({{\mathbb{Z}}^{ + }}\) such that \(A_{m}^{1} \cap A_{m}^{2}\) = {N0(m)}. This result immediately leads to (31). The proof of this theorem is complete.
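Theorem 5 thus guarantees a finite truncation level N0(m) at which the two competing terms of Vm(N) balance. With the same illustrative sequences as in the previous sketch, the minimizer can be found by a direct scan, and it grows with the number of observations m, in line with item (ii) of Corollary 3.

```python
import numpy as np

J = 400                                            # truncation of the infinite sums
j = np.arange(J)
c2 = 2.0 ** (-j)                                   # hypothetical c_j^2
sigma2 = 0.5 / (j + 1.0) ** 2                      # hypothetical sigma_j^2

def N0(m):
    tail = np.sum(c2) - np.cumsum(c2)              # sum_{j > N} c_j^2 for N = 0, 1, ...
    noise = np.cumsum(sigma2) / m                  # (1/m) sum_{j <= N} sigma_j^2
    return int(np.argmin(tail + noise))            # minimizer of V_m(N)

print([(m, N0(m)) for m in (10, 100, 1000, 10000)])   # N0(m) increases with m
```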

B.4. The proof of Theorem 6. According to the proof of Theorem 4, for any m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and N ∈ \({{\bar {\mathbb{Z}}}^{ + }}\), the standard deviation of the estimator \(\hat {f}_{{m,N}}^{0}(x)\) ∈ \({{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})\) is given by (B.3). Due to Theorem 5, there exists a function N0(m) = N0: (\({{\mathbb{Z}}^{ + }}\)\0) → \({{\mathbb{Z}}^{ + }}\) such that

$${{V}_{m}}(N)\; \geqslant \;{{V}_{m}}({{N}^{0}}(m))$$

for each m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and any N ∈ \({{\mathbb{Z}}^{ + }}\).

Let us denote

$${{{\mathbf{1}}}_{{\left\{ {N\; \geqslant \;{{N}^{0}}(m)} \right\}}}} \triangleq \left\{ \begin{gathered} 1,\quad N\; \geqslant \;{{N}^{0}}(m) \hfill \\ 0,\quad N < {{N}^{0}}(m). \hfill \\ \end{gathered} \right.$$
(B.4)

According to the proof of Theorem 5, N \( \geqslant \) N0(m) if and only if  \(\sum\nolimits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} \; \geqslant \;\sum\nolimits_{j = N + 1}^\infty {c_{j}^{2}} \). Therefore,

$${{{\mathbf{1}}}_{{\left\{ {N\; \geqslant \;{{N}^{0}}(m)} \right\}}}} = {{{\mathbf{1}}}_{{\left\{ {N \in {{\mathbb{Z}}^{ + }}:\sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} \; \geqslant \;\sum\limits_{j = N + 1}^\infty {c_{j}^{2}} } \right\}}}}.$$

Let us denote

$${{\ell }_{m}}(N) \triangleq \left( {\sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} \,\, - \sum\limits_{j = N + 1}^\infty {c_{j}^{2}} } \right){{{\mathbf{1}}}_{{\left\{ {N \in {{\mathbb{Z}}^{ + }}:\sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} \; \geqslant \;\sum\limits_{j = N + 1}^\infty {c_{j}^{2}} } \right\}}}}.$$
(B.5)

Obviously, for each m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and any N ∈ \({{\mathbb{Z}}^{ + }}\),

$${{\ell }_{m}}(N)\; \geqslant \;0.$$
(B.6)

From (B.5) and (B.6) it follows that \({{\ell }_{m}}(N)\) can be represented as

$${{\ell }_{m}}(N) = \max \left( {\sum\limits_{j = 0}^N {\frac{{\sigma _{j}^{2}}}{m}} - \sum\limits_{j = N + 1}^\infty {c_{j}^{2},\,\,0} } \right).$$
(B.7)

The graphs of the functions Vm(N) and \({{\ell }_{m}}(N)\), for each m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and any N ∈ \({{\mathbb{Z}}^{ + }}\), demonstrate the following properties:

(i)

$${{V}_{m}}(N)\; \geqslant \;{{\ell }_{m}}(N),$$
(B.8)

(ii)

$${{N}^{0}}(m) = \mathop {\operatorname{argmin} }\limits_{N \in {{{\bar {\mathbb{Z}}}}^{ + }}} {{V}_{m}}(N) = \mathop {\operatorname{argmin} }\limits_{N \in {{{\bar {\mathbb{Z}}}}^{ + }}} {{\ell }_{m}}(N).$$
(B.9)

From (B.7) and (B.9) we finally arrive at the assertion of Theorem 6. The proof of this theorem is complete.

B.5. The proof of Theorem 7. First, Theorem 4, Corollary 2, and (41) imply the representation

$${{V}_{m}}({{N}^{0}}(m)) = {{\left. {{{V}_{m}}(N)} \right|}_{{N = {{N}^{0}}(m)}}} = \sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {c_{j}^{2}} + \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}.} $$
(B.10)

From (B.10) we obtain the inequality

$${{V}_{m}}({{N}^{0}}(m))\; \geqslant \;\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m},} $$
(B.11)

expressing a lower bound for Vm(N0(m)).

Due to Theorem 6,

$$\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}} \; \geqslant \;\sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {c_{j}^{2}} $$
(B.12)

for any m ∈ \({{\mathbb{Z}}^{ + }}\)\0.

Therefore, (B.10) and (B.12) lead to

$${{V}_{m}}({{N}^{0}}(m))\;\leqslant \;2\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}.} $$
(B.13)

The desired result finally follows from inequalities (B.11) and (B.13):

$${{V}_{m}}({{N}^{0}}(m)) \asymp \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}.} $$

The proof of this theorem is complete.

B.6. The proof of Corollary 3. It is immediate from conditions (i) and (ii) of the corollary and the proof of Theorem 7.

B.7. The proof of Theorem 9.

(1) From (40) it follows that

$$\tilde {f}_{m}^{0}(x) = \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\left[ {{{c}_{j}}{{\varphi }_{j}}(x) + \frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x)} } \right]} = {{f}_{{{{N}^{0}}(m)}}}(x) + \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x).} } $$

Taking the expectation of both sides of this equality yields

$$\textsf{E}{\kern 1pt} \tilde {f}_{m}^{0}(x) = {{f}_{{{{N}^{0}}(m)}}}(x) + \textsf{E}{\kern 1pt} \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{1}{m}\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x)} } = {{f}_{{{{N}^{0}}(m)}}}(x) \ne f(x).$$
(B.14)

Thus, the estimator (40) is biased.

(2) To prove the second assertion of this theorem, we must establish inequality (44). According to Theorem 4,

$${{V}_{m}}({{N}^{0}}(m))\;\leqslant \;{{V}_{m}}(N)$$

for any N \( \geqslant \) N0(m). Passing to the limit as N → ∞ gives

$${{V}_{m}}({{N}^{0}}(m))\;\leqslant \;\mathop {\lim }\limits_{N \to \infty } {{V}_{m}}(N) = {{V}_{m}}(\infty ).$$

(3) Next, we establish the third assertion of Theorem 9. Let us consider (40) and pass to the limit as m → ∞. For almost all x ∈ K, we obtain

$$\mathop {\lim }\limits_{m \to \infty } \textsf{E}{\kern 1pt} \tilde {f}_{m}^{0}(x) = \mathop {\lim }\limits_{m \to \infty } {{f}_{{{{N}^{0}}(m)}}}(x).$$

By item (ii) of Corollary 3, N0(m) \(\xrightarrow[{m \to \infty }]{}\) ∞. Hence,

$$\mathop {\lim }\limits_{m \to \infty } {{f}_{{{{N}^{0}}(m)}}}(x) = f(x).$$

This means that the estimator (40) is asymptotically unbiased.

(4) Finally, we demonstrate the consistency of the estimator (40). Due to the Chebyshev inequality, for any m ∈ \({{\mathbb{Z}}^{ + }}\)\0 and ε > 0,

$$\textsf{P}\left( {{{{\left| {\tilde {f}_{m}^{0}(x) - f(x)} \right|}}^{2}}\; \geqslant \;\varepsilon } \right)\;\leqslant \;\frac{1}{\varepsilon }\textsf{E}{\kern 1pt} {{\left| {\tilde {f}_{m}^{0}(x) - f(x)} \right|}^{2}}.$$
(B.15)

We analyze the right-hand side of inequality (43). From (40) it follows that

$$\tilde {f}_{m}^{0}(x) - f(x) = - \sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} + \frac{1}{m}\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x).} } $$

Therefore, \(\textsf{E}{\kern 1pt} {{\left| {\tilde {f}_{m}^{0}(x) - f(x)} \right|}^{2}}\) can be represented as

$$\begin{gathered} \textsf{E}{\kern 1pt} {{\left| {\tilde {f}_{m}^{0}(x) - f(x)} \right|}^{2}} = \textsf{E}{\kern 1pt} {{\left| { - \sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} + \frac{1}{m}\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\sum\limits_{k = 1}^m {n_{k}^{j}{{\varphi }_{j}}(x)} } } \right|}^{2}} \\ = {{\left| {\sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} } \right|}^{2}} + \frac{1}{m}\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\sigma _{j}^{2}\varphi _{j}^{2}(x).} \\ \end{gathered} $$
(B.16)

According to Corollary 3 and the conditions of this theorem, the series \(\left| {\sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} } \right|\) and \(\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\sigma _{j}^{2}\varphi _{j}^{2}(x)} \) are convergent for almost all x ∈ K. As a result, we have

$${{\left| {\sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {{{c}_{j}}{{\varphi }_{j}}(x)} } \right|}^{2}}\;\xrightarrow[{m \to \infty }]{}\;0,$$
$$\frac{1}{m}\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\sigma _{j}^{2}\varphi _{j}^{2}(x)} \;\xrightarrow[{m \to \infty }]{}\;0.$$

Consequently, for any ε > 0,

$$\mathop {\lim }\limits_{m \to \infty } \textsf{P}\left( {{{{\left| {\tilde {f}_{m}^{0}(x) - f(x)} \right|}}^{2}}\; \geqslant \;\varepsilon } \right) = 0.$$

The proof of this theorem is complete.
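The behaviour established in Theorem 9 is easy to observe in simulation: the truncated estimator (40) reproduces only the first N0(m) + 1 terms of f, so it is biased for any fixed m, but both the bias and the noise vanish as m grows. A sketch under an illustrative basis, target, and coefficient-level Gaussian noise model of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 2001)

def phi(j, x):                                     # cosine basis on K = [0, 1] (our choice)
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * x)

J = 200
jj = np.arange(J)
c = 2.0 ** (-jj / 2.0)                             # hypothetical Fourier coefficients of f
sigma2 = 0.5 / (jj + 1.0) ** 2                     # hypothetical noise variances
f = sum(c[j] * phi(j, x) for j in range(J))        # the target, truncated at J terms

def recover(m):
    tail = np.sum(c ** 2) - np.cumsum(c ** 2)
    N0 = int(np.argmin(tail + np.cumsum(sigma2) / m))            # near-optimal level
    y = c[:, None] + np.sqrt(sigma2)[:, None] * rng.standard_normal((J, m))  # y_k^j
    c_hat = y.mean(axis=1)                                       # \hat c_{j,m}^0
    f_tilde = sum(c_hat[j] * phi(j, x) for j in range(N0 + 1))   # truncated estimator
    return N0, float(((f_tilde - f) ** 2).mean())                # integrated squared error

print([recover(m) for m in (10, 100, 1000)])       # N0 grows, the error shrinks
```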

B.8. The proof of Theorem 10. According to the proof of Theorem 7, the standard deviation of the CPE satisfies the inequalities

$$2\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}} \; \geqslant \;{{V}_{m}}({{N}^{0}}(m))\; \geqslant \;\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}.} $$

From Theorem 4, Corollary 2, and Theorems 5 and 6 we have the inequality

$$\begin{gathered} {{V}_{m}}({{N}^{0}}(m)) = \mathop {\inf }\limits_{N \in {{\mathbb{Z}}^{ + }}} \mathop {\inf }\limits_{{{f}_{{m,N}}}(x) \in {{\mathbb{M}}_{{2,m}}}(\tilde {\textsf{P}})} \textsf{E}\int\limits_{\text{K}} {{{{\left[ {f(x) - {{f}_{{N,m}}}(x)} \right]}}^{2}}dx} \\ = \sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}} + \sum\limits_{j = {{N}^{0}}(m) + 1}^\infty {c_{j}^{2}} \;\leqslant \;2\sum\limits_{j = 0}^{{{N}^{0}}(m)} {\frac{{\sigma _{j}^{2}}}{m}} \;\leqslant \;2\sum\limits_{j = 0}^\infty {\frac{{\sigma _{j}^{2}}}{m}} = \frac{{2{{\sigma }^{2}}}}{m}. \\ \end{gathered} $$
(B.17)

Since the right-hand side of (B.17) is independent of f(x) ∈ L2(K, Λ), the desired conclusion follows. The proof of this theorem is complete.
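Hence the worst-case risk of the near-optimal procedure decays at least as fast as 2σ2/m, whatever square integrable function is being recovered. Below is a quick numerical check of the chain of bounds in (B.17), with N0(m) computed as the minimizer of Vm(N) and with the same illustrative sequences as in the earlier sketches.

```python
import numpy as np

J, m = 400, 100                                    # illustrative truncation and sample size
j = np.arange(J)
c2 = 2.0 ** (-j)                                   # hypothetical c_j^2
sigma2 = 0.5 / (j + 1.0) ** 2                      # hypothetical sigma_j^2

tail = np.sum(c2) - np.cumsum(c2)                  # sum_{j > N} c_j^2
noise = np.cumsum(sigma2) / m                      # (1/m) sum_{j <= N} sigma_j^2
N0 = int(np.argmin(tail + noise))                  # near-optimal truncation level
V_opt = tail[N0] + noise[N0]                       # V_m(N^0(m))

print(noise[N0] <= V_opt <= 2.0 * noise[N0])       # two-sided bound from (B.11) and (B.13)
print(V_opt <= 2.0 * np.sum(sigma2) / m)           # uniform bound 2 sigma^2 / m of Theorem 10
```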

About this article

Cite this article

Bulgakov, S.A., Khametov, V.M. Optimal Recovery of a Square Integrable Function from Its Observations with Gaussian Errors. Autom Remote Control 84, 369–388 (2023). https://doi.org/10.1134/S0005117923040033
