Generalization in the programed teaching of a perceptron

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

According to a widely used model of learning and generalization in neural networks, a single neuron (perceptron) can learn from examples to imitate another neuron, called the teacher perceptron. We introduce a variant of this model in which examples within a layer of thickness 2Y around the decision surface are excluded from teaching. That restriction transmits global information about the teacher's rule. Therefore, for a given number p = αN of presented examples (i.e., those outside of the layer), the generalization performance obtained by Boltzmannian learning is improved by setting Y to an optimum value Y0(α), which diverges for α → 0 and remains nonzero while α < αc ≈ 5.7. That suggests programed learning: easy examples should be taught first.
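The exclusion rule described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the teacher is a random weight vector, the "local field" h plays the role of the (signed) distance to the decision surface, and inputs with |h| < Y are discarded so that only examples outside the layer of thickness 2Y are taught. The dimension N, the load α, and the margin Y below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # input dimension (illustrative)
alpha = 2.0    # load: p = alpha * N presented examples
Y = 0.5        # half-thickness of the excluded layer (illustrative)

# Teacher perceptron: fixed random weights, scaled so the local field
# h = w . x / sqrt(N) has unit variance for Gaussian inputs x.
w_teacher = rng.standard_normal(N)
w_teacher *= np.sqrt(N) / np.linalg.norm(w_teacher)

def sample_presented_examples(p):
    """Draw random inputs, keeping only those outside the layer |h| < Y,
    labeled by the sign of the teacher's local field."""
    xs, labels = [], []
    while len(xs) < p:
        x = rng.standard_normal(N)
        h = w_teacher @ x / np.sqrt(N)   # teacher's local field
        if abs(h) >= Y:                  # exclude the layer of thickness 2Y
            xs.append(x)
            labels.append(np.sign(h))    # teacher's classification rule
    return np.array(xs), np.array(labels)

X, y = sample_presented_examples(int(alpha * N))
```

A student perceptron trained only on (X, y) never sees the hardest examples, yet the very absence of data near the surface carries global information about the teacher; the paper's result is that an optimal nonzero Y improves generalization for α below a critical value.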

Original language: English
Pages (from-to): 3192-3200
Number of pages: 9
Journal: Physical Review E
Volume: 50
Issue number: 4
DOIs
Publication status: Published - Jan 1 1994

ASJC Scopus subject areas

  • Statistical and Nonlinear Physics
  • Statistics and Probability
  • Condensed Matter Physics
