Comptes Rendus
Dynamical systems/Probability theory
Asymptotic description of stochastic neural networks. II. Characterization of the limit law
[Description asymptotique de réseaux de neurones stochastiques. II. Caractérisation de la loi limite]
Comptes Rendus. Mathématique, Volume 352 (2014) no. 10, pp. 847-852.

Nous prolongeons le développement, commencé en [8], de la description asymptotique de certains réseaux de neurones stochastiques. Nous utilisons le principe de grandes déviations (PGD) et la bonne fonction de taux H que nous y annoncions pour démontrer l'existence d'un unique minimum, μe, de H, une mesure stationnaire sur l'ensemble TZ des trajectoires. Nous caractérisons cette mesure par ses deux marginales, à l'instant 0, et du temps 1 au temps T. La seconde marginale est une mesure gaussienne stationnaire. Avec un œil sur les applications, nous montrons comment calculer de manière inductive sa moyenne et son opérateur de covariance. Nous montrons aussi comment utiliser le PGD pour établir des résultats de convergence en moyenne et presque sûrement.

We continue the development, started in [8], of the asymptotic description of certain stochastic neural networks. We use the Large Deviation Principle (LDP) and the good rate function H announced there to prove that H has a unique minimum μe, a stationary measure on the set of trajectories TZ. We characterize this measure by its two marginals, at time 0, and from time 1 to T. The second marginal is a stationary Gaussian measure. With an eye on applications, we show that its mean and covariance operator can be inductively computed. Finally, we use the LDP to establish various convergence results, both averaged and quenched.
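The inductive computation of the mean and covariance mentioned above is not detailed on this page. As a purely illustrative sketch, unrelated to the authors' actual equations, the recursion below shows how the mean and variance of a generic linear-Gaussian sequence X_{t+1} = a·X_t + w_t, with w_t ~ N(0, q), can be propagated step by step; the function name and parameters are hypothetical:

```python
def propagate_gaussian(mu0, var0, a, q, T):
    """Illustrative sketch (not the paper's equations): inductively
    propagate the mean and variance of X_{t+1} = a*X_t + w_t,
    w_t ~ N(0, q), starting from X_0 ~ N(mu0, var0)."""
    means, variances = [mu0], [var0]
    for _ in range(T):
        # Mean evolves linearly; variance picks up the noise term q.
        means.append(a * means[-1])
        variances.append(a * a * variances[-1] + q)
    return means, variances
```

For instance, with a = 0.5 and q = 1.0 the variance converges toward the stationary value q / (1 - a²); this fixed-point behaviour is the generic analogue of the stationary Gaussian marginal described in the abstract.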

DOI : 10.1016/j.crma.2014.08.017
Olivier Faugeras 1 ; James Maclaurin 1

1 Inria Sophia-Antipolis Méditerranée, NeuroMathComp Group, France
@article{CRMATH_2014__352_10_847_0,
     author = {Olivier Faugeras and James Maclaurin},
     title = {Asymptotic description of stochastic neural networks. {II.} {Characterization} of the limit law},
     journal = {Comptes Rendus. Math\'ematique},
     pages = {847--852},
     publisher = {Elsevier},
     volume = {352},
     number = {10},
     year = {2014},
     doi = {10.1016/j.crma.2014.08.017},
     language = {en},
}
TY  - JOUR
AU  - Olivier Faugeras
AU  - James Maclaurin
TI  - Asymptotic description of stochastic neural networks. II. Characterization of the limit law
JO  - Comptes Rendus. Mathématique
PY  - 2014
SP  - 847
EP  - 852
VL  - 352
IS  - 10
PB  - Elsevier
DO  - 10.1016/j.crma.2014.08.017
LA  - en
ID  - CRMATH_2014__352_10_847_0
ER  - 
%0 Journal Article
%A Olivier Faugeras
%A James Maclaurin
%T Asymptotic description of stochastic neural networks. II. Characterization of the limit law
%J Comptes Rendus. Mathématique
%D 2014
%P 847-852
%V 352
%N 10
%I Elsevier
%R 10.1016/j.crma.2014.08.017
%G en
%F CRMATH_2014__352_10_847_0
Olivier Faugeras; James Maclaurin. Asymptotic description of stochastic neural networks. II. Characterization of the limit law. Comptes Rendus. Mathématique, Volume 352 (2014) no. 10, pp. 847-852. doi : 10.1016/j.crma.2014.08.017. https://comptes-rendus.academie-sciences.fr/mathematique/articles/10.1016/j.crma.2014.08.017/

[1] P. Bressloff Stochastic neural field theory and the system-size expansion, SIAM J. Appl. Math., Volume 70 (2009), pp. 1488-1521

[2] M. Buice; J. Cowan Field-theoretic approach to fluctuation effects in neural networks, Phys. Rev. E, Volume 75 (2007)

[3] M. Buice; J. Cowan; C. Chow Systematic fluctuation expansion for neural network activity equations, Neural Comput., Volume 22 (2010), pp. 377-426

[4] T. Chiyonobu; S. Kusuoka The large deviation principle for hypermixing processes, Probab. Theory Relat. Fields, Volume 78 (1988), pp. 627-649

[5] S. ElBoustani; A. Destexhe A master equation formalism for macroscopic modeling of asynchronous irregular activity states, Neural Comput., Volume 21 (2009), pp. 46-100

[6] O. Faugeras; J. Maclaurin Large deviations of an ergodic synchronous neural network with learning, 2014 (arXiv preprint, INRIA Sophia Antipolis, France) | arXiv

[7] O. Faugeras; J. Maclaurin Asymptotic description of neural networks with correlated synaptic weights, INRIA, March 2014 (Rapport de recherche RR-8495)

[8] O. Faugeras; J. Maclaurin Asymptotic description of stochastic neural networks. I. Existence of a large deviation principle, C. R. Acad. Sci. Paris, Ser. I, Volume 352 (2014) no. 10, pp. 841-846

[9] I. Ginzburg; H. Sompolinsky Theory of correlations in stochastic neural networks, Phys. Rev. E, Volume 50 (1994)

[10] O. Moynot Étude mathématique de la dynamique des réseaux neuronaux aléatoires récurrents, Université Paul-Sabatier, Toulouse, 1999 (Ph.D. thesis)



