Abstract
A simple local learning rule is presented for neural-network models of N two-state neurons. The rule introduces a free parameter k∈(1,4]. We prove that the synaptic matrix converges exponentially to the projection matrix onto the subspace spanned by the prototypes. This learning mechanism allows one to embed without error sets of arbitrarily correlated patterns (linearly independent or not), provided that each pattern is presented at most n times, where n is the smallest integer greater than N.
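The paper's exact rule is not reproduced here, but the stated convergence target (the projection matrix onto the prototype subspace) can be illustrated with a minimal sketch, assuming a Diederich–Opper-style local update ΔJ = (k/N)(ξ^μ − Jξ^μ)(ξ^μ)ᵀ for ±1 neurons. Under this normalization (ξᵀξ = N) the illustrative choice k = 1 makes each presentation exactly enforce Jξ^μ = ξ^μ; the paper's own rule, neuron coding, and range k∈(1,4] may correspond to a different normalization. All names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 10          # neurons, prototype patterns
k = 1.0                 # learning parameter (illustrative; not the paper's range)

# Correlated +-1 prototypes: random flips of a common template
template = rng.choice([-1, 1], size=N)
xi = np.where(rng.random((p, N)) < 0.8, template, -template)

J = np.zeros((N, N))
for epoch in range(200):                 # repeated presentations of each pattern
    for mu in range(p):
        x = xi[mu]
        # Local update: uses only pre/postsynaptic activity and the local field J @ x
        J += (k / N) * np.outer(x - J @ x, x)

# Exact orthogonal projector onto span{xi} for comparison
P = xi.T @ np.linalg.pinv(xi.T)

print(np.max(np.abs(J - P)))            # J approaches the projection matrix
print(np.max(np.abs(J @ xi.T - xi.T)))  # each prototype is embedded without error
```

Starting from J = 0, this iteration is a cyclic Kaczmarz sweep over the consistent system Jξ^μ = ξ^μ, so it converges to the minimum-norm solution, which is the orthogonal projector onto span{ξ^μ}; this holds whether or not the patterns are linearly independent.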
- Received 14 June 1990
DOI: https://doi.org/10.1103/PhysRevLett.66.1793
©1991 American Physical Society