Gradient descent learning in and out of equilibrium

Nestor Caticha and Evaldo Araújo de Oliveira
Phys. Rev. E 63, 061905 – Published 24 May 2001

Abstract

Relations between the out-of-thermal-equilibrium dynamical process of on-line learning and thermally equilibrated off-line learning are studied for potential-based gradient-descent learning. Opper's approach to on-line Bayesian algorithms is applied to potential-based, or maximum-likelihood, learning. We look at the on-line learning algorithm that best approximates the off-line algorithm in the sense of least Kullback-Leibler information loss. The closest on-line algorithm updates the weights along the gradient of an effective potential, which differs from the parent off-line potential. A few examples are analyzed and the origin of the potential annealing is discussed.
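The on-line scheme described in the abstract can be illustrated with a minimal sketch (not taken from the paper): each example updates the weights along the gradient of a per-example potential. The quadratic potential V(w; x, y) = (y - w·x)²/2 and the toy teacher below are illustrative assumptions standing in for the paper's effective potential.

```python
import numpy as np

def potential_grad(w, x, y):
    """Gradient of the illustrative potential V(w; x, y) = 0.5*(y - w @ x)**2
    with respect to the weight vector w."""
    return -(y - w @ x) * x

def online_gradient_descent(examples, dim, eta=0.05):
    """On-line learning: weights are updated one example at a time,
    moving along -grad V (no revisiting of past examples)."""
    w = np.zeros(dim)
    for x, y in examples:
        w -= eta * potential_grad(w, x, y)
    return w

# Toy usage: recover a fixed teacher vector from noiseless linear examples.
rng = np.random.default_rng(0)
teacher = np.array([1.0, -2.0, 0.5])
examples = [(x, teacher @ x) for x in rng.standard_normal((500, 3))]
w = online_gradient_descent(examples, dim=3)
```

The contrast with off-line (batch) learning is that the sum over all examples is replaced by a single-example update at each step, which is what takes the dynamics away from thermal equilibrium.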

  • Received 16 June 2000

DOI:https://doi.org/10.1103/PhysRevE.63.061905

©2001 American Physical Society

Authors & Affiliations

Nestor Caticha and Evaldo Araújo de Oliveira

  • Instituto de Física, Universidade de São Paulo, CP 66318, São Paulo, SP, CEP 05389-970, Brazil

Issue

Vol. 63, Iss. 6 — June 2001
