Modeling stimulus-driven attentional selection in dynamic natural scenes

Anna Lázár, Z. Vidnyánszky, T. Roska

Research output: Contribution to journal › Article

Abstract

We developed a neuromorphic model of bottom-up (BU) visual attentional selection. The input to our model is the output of a recently developed neuromorphic multi-channel retina model. As a first step, a saliency map is computed for each retinal channel; these channel maps are then integrated into a master saliency map. Model parameters were optimized on human eye-movement data recorded while subjects viewed dynamic natural scenes. We tested two strategies for weighting the channel-specific saliency maps during integration into the master map: in the first, channel weights were kept constant throughout the verification measurements; in the second, they were updated on every frame according to the properties of the current visual input. Surprisingly, the constant weighting strategy outperformed the continually updated one. We measured the model's accuracy as the hit ratio (concurrence) between the first few predicted locations (the most salient locations) and the measured fixation locations. Constant weighting achieved a hit ratio of ∼74% with four predictions, whereas chance level in this case was below 20%. This purely BU approach thus performs surprisingly well on dynamic natural input, and practical applications using task-dependent simplifications have already been built.
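The pipeline the abstract describes, channel-specific saliency maps merged into a master map with fixed weights and evaluated by a top-k hit ratio against measured fixations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the normalization step, and the Chebyshev-distance hit criterion are all assumptions.

```python
import numpy as np

def master_saliency(channel_maps, weights):
    """Combine channel-specific saliency maps into a master map
    using fixed (constant) channel weights, then normalize."""
    master = sum(w * m for w, m in zip(weights, channel_maps))
    total = master.sum()
    return master / total if total > 0 else master

def top_k_locations(saliency, k=4):
    """Return (row, col) coordinates of the k most salient pixels,
    most salient first."""
    flat_idx = np.argsort(saliency, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, saliency.shape)) for i in flat_idx]

def hit_ratio(predictions, fixations, radius=1):
    """Fraction of fixations falling within `radius` pixels
    (Chebyshev distance) of any predicted location."""
    hits = sum(
        any(max(abs(fr - pr), abs(fc - pc)) <= radius
            for pr, pc in predictions)
        for fr, fc in fixations
    )
    return hits / len(fixations)
```

In the frame-adaptive variant the paper compares against, the `weights` argument would be recomputed per frame from the input statistics rather than held constant across the sequence.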

Original language: English
Pages (from-to): 3-30
Number of pages: 28
Journal: International Journal of Circuit Theory and Applications
Volume: 37
Issue number: 1
DOIs: 10.1002/cta.469
Publication status: Published - Feb 2009


Keywords

  • Eye movements
  • Neuromorphic modeling
  • Receptive fields
  • Retina channels
  • Saliency
  • Visual attention

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
  • Computer Science Applications
  • Applied Mathematics

Cite this

Modeling stimulus-driven attentional selection in dynamic natural scenes. / Lázár, Anna; Vidnyánszky, Z.; Roska, T.

In: International Journal of Circuit Theory and Applications, Vol. 37, No. 1, 02.2009, p. 3-30.


@article{9519a8a3850d442cae30cf675f466666,
title = "Modeling stimulus-driven attentional selection in dynamic natural scenes",
abstract = "In this paper we have developed a neuromorphic model of bottom-up (BU) visual attentional selection. The output of a recently developed neuromorphic multi-channel retina model has represented the input of our model. As a first step, a saliency map has been calculated for each retinal channel which, next, has been integrated into a master saliency map. Model parameters have been optimized based on human eye movement data measured during viewing dynamic natural scenes. We have tested two different strategies for weighting the channel-specific saliency maps during integration into a master map. In the first case, channel weights have been kept constant throughout the verification measurements, whereas, in the other case, they have been updated on each frame, according to the specific properties of the visual input. Surprisingly, the constant channel weighting strategies have performed better than the continually updated ones. We have measured the model's accuracy by defining the hit ratio (concurrence) between the first few predicted locations (the most salient locations) and the measured fixation locations. Constant weighting methods have achieved ∼74{\%} hit ratio on four predictions. For a comparison, the accidental chance for this case has been less than 20{\%}. This pure BU approach has performed surprisingly well on dynamic natural input. Some practical applications have already been made with task-dependent simplifications.",
keywords = "Eye movements, Neuromorphic modeling, Receptive fields, Retina channels, Saliency, Visual attention",
author = "Anna L{\'a}z{\'a}r and Z. Vidny{\'a}nszky and T. Roska",
year = "2009",
month = "2",
doi = "10.1002/cta.469",
language = "English",
volume = "37",
pages = "3--30",
journal = "International Journal of Circuit Theory and Applications",
issn = "0098-9886",
publisher = "John Wiley and Sons Ltd",
number = "1",

}

TY - JOUR

T1 - Modeling stimulus-driven attentional selection in dynamic natural scenes

AU - Lázár, Anna

AU - Vidnyánszky, Z.

AU - Roska, T.

PY - 2009/2

Y1 - 2009/2

N2 - In this paper we have developed a neuromorphic model of bottom-up (BU) visual attentional selection. The output of a recently developed neuromorphic multi-channel retina model has represented the input of our model. As a first step, a saliency map has been calculated for each retinal channel which, next, has been integrated into a master saliency map. Model parameters have been optimized based on human eye movement data measured during viewing dynamic natural scenes. We have tested two different strategies for weighting the channel-specific saliency maps during integration into a master map. In the first case, channel weights have been kept constant throughout the verification measurements, whereas, in the other case, they have been updated on each frame, according to the specific properties of the visual input. Surprisingly, the constant channel weighting strategies have performed better than the continually updated ones. We have measured the model's accuracy by defining the hit ratio (concurrence) between the first few predicted locations (the most salient locations) and the measured fixation locations. Constant weighting methods have achieved ∼74% hit ratio on four predictions. For a comparison, the accidental chance for this case has been less than 20%. This pure BU approach has performed surprisingly well on dynamic natural input. Some practical applications have already been made with task-dependent simplifications.

AB - In this paper we have developed a neuromorphic model of bottom-up (BU) visual attentional selection. The output of a recently developed neuromorphic multi-channel retina model has represented the input of our model. As a first step, a saliency map has been calculated for each retinal channel which, next, has been integrated into a master saliency map. Model parameters have been optimized based on human eye movement data measured during viewing dynamic natural scenes. We have tested two different strategies for weighting the channel-specific saliency maps during integration into a master map. In the first case, channel weights have been kept constant throughout the verification measurements, whereas, in the other case, they have been updated on each frame, according to the specific properties of the visual input. Surprisingly, the constant channel weighting strategies have performed better than the continually updated ones. We have measured the model's accuracy by defining the hit ratio (concurrence) between the first few predicted locations (the most salient locations) and the measured fixation locations. Constant weighting methods have achieved ∼74% hit ratio on four predictions. For a comparison, the accidental chance for this case has been less than 20%. This pure BU approach has performed surprisingly well on dynamic natural input. Some practical applications have already been made with task-dependent simplifications.

KW - Eye movements

KW - Neuromorphic modeling

KW - Receptive fields

KW - Retina channels

KW - Saliency

KW - Visual attention

UR - http://www.scopus.com/inward/record.url?scp=59749096580&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=59749096580&partnerID=8YFLogxK

U2 - 10.1002/cta.469

DO - 10.1002/cta.469

M3 - Article

AN - SCOPUS:59749096580

VL - 37

SP - 3

EP - 30

JO - International Journal of Circuit Theory and Applications

JF - International Journal of Circuit Theory and Applications

SN - 0098-9886

IS - 1

ER -