Sparse and silent coding in neural circuits

A. Lőrincz, Zsolt Palotai, Gábor Szirtes

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Sparse coding algorithms find a linear basis in which signals can be represented by a small number of non-zero coefficients. Such coding may play an important role in neural information processing, and metabolically efficient natural solutions serve as inspiration for algorithms employed in various areas of computer science. In particular, finding the non-zero coefficients in overcomplete sparse coding is a computationally hard problem, for which different approximate solutions have been proposed. Methods that minimize the magnitude of the coefficients ('ℓ1-norm') instead of minimizing the size of the active subset of features ('ℓ0-norm') may find the optimal solutions, but they do not scale well with the problem size and use centralized algorithms. Iterative, greedy methods, on the other hand, are fast, but they require a priori knowledge of the number of non-zero features, often find suboptimal solutions, and converge to the final sparse form through a series of non-sparse representations. In this article we propose a neurally plausible algorithm that efficiently integrates an ℓ0-norm-based probabilistic sparse coding model with ideas inspired by novel iterative solutions. Furthermore, the resulting algorithm does not require an exactly defined sparseness level and is thus suitable for representing natural stimuli with a varying number of features. We demonstrate that our combined method can find optimal solutions in cases where ℓ1-norm-based algorithms fail.
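To make the abstract's contrast concrete: with x an input signal, D a (possibly overcomplete) dictionary of features, and a the coefficient vector (our notation for illustration, not taken from the article), the two objectives can be written as

    \min_a \; \|x - Da\|_2^2 + \lambda \|a\|_0    (ℓ0: penalize the number of active features; combinatorial)

    \min_a \; \|x - Da\|_2^2 + \lambda \|a\|_1    (ℓ1: penalize coefficient magnitudes; convex relaxation)

where \|a\|_0 counts the non-zero entries of a and \|a\|_1 sums their absolute values.

Since one of the paper's keywords is the cross-entropy method, the sketch below shows how a generic cross-entropy search over supports could be applied to the ℓ0 objective. It is an illustrative sketch under our own assumptions (the function name, parameters, and the least-squares inner step are ours), not the algorithm proposed in the article. Note that the support size is not fixed in advance; it is shaped by the penalty lam, which echoes the abstract's point about stimuli with a varying number of features.

import numpy as np

def ce_sparse_support(x, D, n_iter=50, n_samples=200, elite_frac=0.1,
                      lam=0.1, smoothing=0.7, seed=0):
    # Generic cross-entropy method for l0-style sparse coding (illustrative
    # sketch, NOT the paper's algorithm): sample candidate supports from
    # per-atom Bernoulli inclusion probabilities, score each support by
    # reconstruction error plus an l0 penalty, and move the probabilities
    # toward the elite (lowest-scoring) samples.
    rng = np.random.default_rng(seed)
    n = D.shape[1]
    p = np.full(n, 0.5)                        # Bernoulli inclusion probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    best_mask, best_score = None, np.inf

    for _ in range(n_iter):
        masks = rng.random((n_samples, n)) < p           # candidate supports
        scores = np.empty(n_samples)
        for i, mask in enumerate(masks):
            k = int(mask.sum())
            if k == 0:
                scores[i] = x @ x                        # empty support: full signal energy
                continue
            a, *_ = np.linalg.lstsq(D[:, mask], x, rcond=None)
            r = x - D[:, mask] @ a
            scores[i] = r @ r + lam * k                  # error + l0 penalty
        elite = masks[np.argsort(scores)[:n_elite]]
        p = smoothing * p + (1.0 - smoothing) * elite.mean(axis=0)
        p = np.clip(p, 0.01, 0.99)                       # keep exploring
        i_best = int(np.argmin(scores))
        if scores[i_best] < best_score:
            best_score, best_mask = scores[i_best], masks[i_best].copy()

    # Final coefficients: least squares restricted to the best support found.
    coeffs = np.zeros(n)
    if best_mask is not None and best_mask.any():
        coeffs[best_mask], *_ = np.linalg.lstsq(D[:, best_mask], x, rcond=None)
    return coeffs, best_mask

# Toy usage: recover a 3-sparse code in a random overcomplete dictionary.
rng = np.random.default_rng(1)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
coeffs, mask = ce_sparse_support(D @ a_true, D)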

Original language: English
Pages (from-to): 115-124
Number of pages: 10
Journal: Neurocomputing
Volume: 79
DOI: 10.1016/j.neucom.2011.10.017
Publication status: Published - Mar 1 2012

Keywords

  • ℓ0-Norm
  • Cross-entropy method
  • Sparse coding

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Cognitive Neuroscience

Cite this

Lőrincz, A., Palotai, Z., & Szirtes, G. (2012). Sparse and silent coding in neural circuits. Neurocomputing, 79, 115-124. https://doi.org/10.1016/j.neucom.2011.10.017
