Learning with regularizers in multilayer neural networks

David Saad and Magnus Rattray
Phys. Rev. E 57, 2170 – Published 1 February 1998

Abstract

We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network, also with an arbitrary number of hidden units, whose output may be corrupted by Gaussian noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in the various phases of the learning process, in both noiseless and noisy scenarios.
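As a rough illustration of this scenario (not the authors' code), the following NumPy sketch simulates on-line gradient descent with weight decay for a soft-committee-machine student learning from a noisy soft-committee-machine teacher, assuming the activation g(x) = erf(x/√2) standard in this framework; the network sizes and the values of eta, gamma, and sigma are illustrative choices. The function eps_g implements the known closed-form noiseless generalization error in terms of the order parameters Q = WWᵀ/N, R = WBᵀ/N, and T = BBᵀ/N.

    import numpy as np
    from scipy.special import erf

    rng = np.random.default_rng(0)

    N = 500        # input dimension (illustrative)
    M, K = 2, 2    # teacher / student hidden units (illustrative)
    eta = 0.5      # learning rate (illustrative)
    gamma = 1e-3   # weight-decay strength (illustrative)
    sigma = 0.1    # std of Gaussian output noise on the teacher (illustrative)

    def g(x):
        # erf activation, g(x) = erf(x / sqrt(2))
        return erf(x / np.sqrt(2.0))

    def eps_g(Q, R, T):
        # Closed-form noiseless generalization error of a soft committee
        # machine in terms of the order parameters Q, R, T.
        def block(C, a, b):
            den = np.sqrt((1.0 + np.diag(a))[:, None] * (1.0 + np.diag(b))[None, :])
            return np.arcsin(C / den).sum()
        return (block(Q, Q, Q) + block(T, T, T) - 2.0 * block(R, Q, T)) / np.pi

    B = rng.standard_normal((M, N))         # teacher weights, fixed
    T = B @ B.T / N
    W = 0.01 * rng.standard_normal((K, N))  # small random student initialization

    for mu in range(200 * N):               # ~200 examples per input dimension
        xi = rng.standard_normal(N)                              # random input
        tau = g(B @ xi / np.sqrt(N)).sum() + sigma * rng.standard_normal()
        x = W @ xi / np.sqrt(N)                                  # student fields
        delta = (tau - g(x).sum()) * np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)
        # Gradient step on the squared error plus a weight-decay term;
        # both change the order parameters by O(1/N) per example.
        W += (eta / np.sqrt(N)) * np.outer(delta, xi) - (eta * gamma / N) * W

        if mu % (40 * N) == 0:
            Q, R = W @ W.T / N, W @ B.T / N
            print(f"t = {mu / N:6.1f}  eps_g = {eps_g(Q, R, T):.5f}")

Both the gradient and the weight-decay terms are scaled so that each example changes the order parameters by O(1/N), matching the thermodynamic-limit convention in which Q and R evolve deterministically; setting gamma = 0 recovers the unregularized dynamics.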

  • Received 29 September 1997

DOI: https://doi.org/10.1103/PhysRevE.57.2170

©1998 American Physical Society

Authors & Affiliations

David Saad and Magnus Rattray

  • Department of Computer Science and Applied Mathematics, Aston University, Birmingham, B4 7ET, United Kingdom

Issue

Vol. 57, Iss. 2 — February 1998
