Implementing a retinal visual language in CNN: A neuromorphic study

F. Werblin, B. Roska, D. Bálya, C. Rekeczky, T. Roska

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Citations (Scopus)

Abstract

The retina sends to the brain a parallel set of about a dozen different space-time representations of the visual world. Each of these representations is generated by a distinct set of "feature detecting" transformations. These features most likely contain all the information we need and use to analyze and interpret the visual world. They constitute a fundamental visual language that is elaborated upon at higher centers in the brain. A multi-layer CNN (Cellular Nonlinear Network) is presented for mimicking this new retinal model. The model is composed of several prototype three-layer CNN units, called Complex R-units. Surfaces of activity are represented by CNN layers, and various parameter sets represent the different parts of the multi-layer retinal model. The whole model can be described by a visual language built from elementary instructions of a CNN Universal Machine containing the programmable Complex R-units. Decomposition methods in time and space are discussed.
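As a rough illustration of the kind of dynamics the abstract refers to, the sketch below simulates a single three-layer CNN (Cellular Nonlinear Network) unit with forward-Euler integration: each layer is a surface of activity, lateral spread of activity comes from a 3x3 feedback template, and the layers are coupled by simple excitatory and inhibitory gains. This is a minimal sketch under stated assumptions, not the authors' implementation; all template values, time constants, gains, and the layer naming are illustrative, not the parameter sets of the published Complex R-unit model.

```python
# Minimal sketch of a three-layer CNN (Cellular Nonlinear Network) unit.
# All numeric values and layer roles below are illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d


def saturate(x):
    """Standard CNN output nonlinearity y = 0.5*(|x+1| - |x-1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))


def diffusion_template(strength):
    """Symmetric 3x3 feedback template producing lateral spread of activity."""
    return strength * np.array([[0.05, 0.10, 0.05],
                                [0.10, -0.60, 0.10],
                                [0.05, 0.10, 0.05]])


def run_r_unit(u, steps=200, dt=0.05):
    """Integrate a hypothetical three-layer unit driven by input image u.

    u : 2-D input (photoreceptor drive), values roughly in [-1, 1].
    Returns the output activity surface of the third layer.
    """
    x1 = np.zeros_like(u)   # layer 1 state (excitatory surface)
    x2 = np.zeros_like(u)   # layer 2 state (inhibitory surround surface)
    x3 = np.zeros_like(u)   # layer 3 state (output surface)

    A1 = diffusion_template(1.0)          # lateral coupling within layer 1
    A2 = diffusion_template(2.0)          # wider spread in layer 2
    tau1, tau2, tau3 = 1.0, 4.0, 0.5      # per-layer time constants (assumed)

    for _ in range(steps):
        y1, y2 = saturate(x1), saturate(x2)
        # Layer 1: driven by the input, laterally coupled, inhibited by layer 2.
        dx1 = -x1 + convolve2d(y1, A1, mode='same', boundary='symm') + u - 0.8 * y2
        # Layer 2: slower surround layer excited by layer 1.
        dx2 = -x2 + convolve2d(y2, A2, mode='same', boundary='symm') + 0.6 * y1
        # Layer 3: fast output layer reading the difference of the two surfaces.
        dx3 = -x3 + 1.5 * y1 - 1.5 * y2
        x1 += dt * dx1 / tau1
        x2 += dt * dx2 / tau2
        x3 += dt * dx3 / tau3

    return saturate(x3)


if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0              # bright square on a dark background
    out = run_r_unit(img)
    print(out.shape, out.min(), out.max())
```

In the spirit of the paper, the different retinal channels would correspond to re-running the same programmable unit with different template and parameter sets, which is what the CNN Universal Machine "visual language" described in the abstract would select.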

Original language: English
Title of host publication: Proceedings - IEEE International Symposium on Circuits and Systems
Pages: 333-336
Number of pages: 4
Volume: 3
Publication status: Published - 2001
Event: IEEE International Symposium on Circuits and Systems (ISCAS 2001) - Sydney, NSW, Australia
Duration: May 6, 2001 - May 9, 2001

Other

Other: IEEE International Symposium on Circuits and Systems (ISCAS 2001)
Country: Australia
City: Sydney, NSW
Period: 5/6/01 - 5/9/01

Fingerprint

  • Visual languages
  • Brain
  • Decomposition
  • Complex R-units

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
  • Hardware and Architecture

Cite this

Werblin, F., Roska, B., Bálya, D., Rekeczky, C., & Roska, T. (2001). Implementing a retinal visual language in CNN: A neuromorphic study. In Proceedings - IEEE International Symposium on Circuits and Systems (Vol. 3, pp. 333-336).
