Generalization in the programed teaching of a perceptron

I. Derényi, Tamás Geszti, G. Györgyi

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

According to a widely used model of learning and generalization in neural networks, a single neuron (perceptron) can learn from examples to imitate another neuron, called the teacher perceptron. We introduce a variant of this model in which examples within a layer of thickness 2Y around the decision surface are excluded from teaching. That restriction transmits global information about the teacher's rule. Therefore, for a given number p=αN of presented examples (i.e., those outside of the layer), the generalization performance obtained by Boltzmannian learning is improved by setting Y to an optimum value Y0(α), which diverges for α→0 and remains nonzero for α→∞.
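The example-selection scheme described in the abstract can be sketched in a few lines of NumPy. The dimensions, the layer half-width Y, and the random teacher vector below are illustrative assumptions for the sketch, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # input dimension (illustrative)
p = 500   # number of examples to present, i.e., those outside the layer
Y = 0.5   # half-thickness of the excluded layer (illustrative)

# Teacher perceptron: a fixed weight vector, normalized so that the
# overlap teacher @ x measures (signed) distance from the decision surface.
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)

examples, labels = [], []
while len(examples) < p:
    x = rng.standard_normal(N)
    margin = teacher @ x          # signed distance from the decision surface
    if abs(margin) > Y:           # keep only examples outside the 2Y layer
        examples.append(x)
        labels.append(np.sign(margin))

X = np.asarray(examples)          # p examples, each of dimension N
y = np.asarray(labels)            # teacher's classification of each example
```

Because every retained example lies at least Y away from the teacher's hyperplane, the mere fact of being presented carries global information about the teacher's rule, which is what the paper exploits.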

Original language: English
Pages (from-to): 3192-3200
Number of pages: 9
Journal: Physical Review E - Statistical, Nonlinear, and Soft Matter Physics
Volume: 50
Issue number: 4
DOIs: 10.1103/PhysRevE.50.3192
Publication status: Published - 1994


ASJC Scopus subject areas

  • Mathematical Physics
  • Physics and Astronomy (all)
  • Condensed Matter Physics
  • Statistical and Nonlinear Physics

Cite this

Generalization in the programed teaching of a perceptron. / Derényi, I.; Geszti, Tamás; Györgyi, G.

In: Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, Vol. 50, No. 4, 1994, p. 3192-3200.

Research output: Contribution to journal › Article

@article{8c5d66bdb79d4fb09fee368cdb0a91ec,
title = "Generalization in the programed teaching of a perceptron",
abstract = "According to a widely used model of learning and generalization in neural networks, a single neuron (perceptron) can learn from examples to imitate another neuron, called the teacher perceptron. We introduce a variant of this model in which examples within a layer of thickness 2Y around the decision surface are excluded from teaching. That restriction transmits global information about the teacher's rule. Therefore, for a given number p=αN of presented examples (i.e., those outside of the layer), the generalization performance obtained by Boltzmannian learning is improved by setting Y to an optimum value Y0(α), which diverges for α→0 and remains nonzero for α→∞.",
author = "I. Der{\'e}nyi and Tam{\'a}s Geszti and G. Gy{\"o}rgyi",
year = "1994",
doi = "10.1103/PhysRevE.50.3192",
language = "English",
volume = "50",
pages = "3192--3200",
journal = "Physical review. E",
issn = "2470-0045",
publisher = "American Physical Society",
number = "4",

}

TY - JOUR

T1 - Generalization in the programed teaching of a perceptron

AU - Derényi, I.

AU - Geszti, Tamás

AU - Györgyi, G.

PY - 1994

Y1 - 1994

N2 - According to a widely used model of learning and generalization in neural networks, a single neuron (perceptron) can learn from examples to imitate another neuron, called the teacher perceptron. We introduce a variant of this model in which examples within a layer of thickness 2Y around the decision surface are excluded from teaching. That restriction transmits global information about the teacher's rule. Therefore, for a given number p=αN of presented examples (i.e., those outside of the layer), the generalization performance obtained by Boltzmannian learning is improved by setting Y to an optimum value Y0(α), which diverges for α→0 and remains nonzero for α→∞.

AB - According to a widely used model of learning and generalization in neural networks, a single neuron (perceptron) can learn from examples to imitate another neuron, called the teacher perceptron. We introduce a variant of this model in which examples within a layer of thickness 2Y around the decision surface are excluded from teaching. That restriction transmits global information about the teacher's rule. Therefore, for a given number p=αN of presented examples (i.e., those outside of the layer), the generalization performance obtained by Boltzmannian learning is improved by setting Y to an optimum value Y0(α), which diverges for α→0 and remains nonzero for α→∞.

UR - http://www.scopus.com/inward/record.url?scp=4243075127&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=4243075127&partnerID=8YFLogxK

U2 - 10.1103/PhysRevE.50.3192

DO - 10.1103/PhysRevE.50.3192

M3 - Article

AN - SCOPUS:4243075127

VL - 50

SP - 3192

EP - 3200

JO - Physical review. E

JF - Physical review. E

SN - 2470-0045

IS - 4

ER -