An input data set compression method for improving the training ability of neural networks

Balázs Tusor, A. Várkonyi-Kóczy, I. Rudas, Gábor Klie, Gábor Kocsis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Artificial Neural Networks (ANNs) can learn complex functions from input data and are relatively easy to implement in any application. A significant disadvantage of their use, however, is their usually high training time, which scales with the structural parameters of the network and the quantity of input data. Although the training can be done offline, it has a non-negligible cost and can further cause a delay in operation. To speed up the training of ANNs used for classification, we have developed a new training procedure: instead of using the training data directly in the training phase, the data is first clustered and the ANNs are trained using only the centers of the obtained clusters, which are essentially compressed versions of the original input data.
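
To illustrate the general idea, the sketch below compresses a training set by clustering each class and keeping only the cluster centers, then trains a small classifier network on those centers. This is a minimal sketch, not the authors' exact procedure: the use of scikit-learn's KMeans and MLPClassifier, the per-class clustering, and the clusters_per_class parameter are assumptions made for this example.

```python
# Sketch: compress the training set into cluster centers before ANN training.
# Assumption: clustering is done per class so each center inherits its class label.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def compress_training_set(X, y, clusters_per_class=10, random_state=0):
    """Cluster each class separately and return the cluster centers
    (the 'compressed' training set) together with their class labels."""
    centers, labels = [], []
    for cls in np.unique(y):
        X_cls = X[y == cls]
        k = min(clusters_per_class, len(X_cls))
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X_cls)
        centers.append(km.cluster_centers_)
        labels.append(np.full(k, cls))
    return np.vstack(centers), np.concatenate(labels)

# Usage: train the network on the cluster centers instead of all raw samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                  # 5000 raw training samples
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # synthetic binary labels
X_small, y_small = compress_training_set(X, y, clusters_per_class=20)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_small, y_small)                       # far fewer samples => faster training
print(net.score(X, y))                          # evaluate on the original data
```
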

Original language: English
Title of host publication: 2012 IEEE I2MTC - International Instrumentation and Measurement Technology Conference, Proceedings
Pages: 1774-1779
Number of pages: 6
DOIs: 10.1109/I2MTC.2012.6229471
Publication status: Published - 2012
Event: 2012 IEEE International Instrumentation and Measurement Technology Conference, I2MTC 2012 - Graz, Austria
Duration: May 13, 2012 - May 16, 2012

Other

Other: 2012 IEEE International Instrumentation and Measurement Technology Conference, I2MTC 2012
Country: Austria
City: Graz
Period: 5/13/12 - 5/16/12


Keywords

  • artificial neural networks
  • class number reductions
  • classification
  • clustering
  • fuzzy neural networks
  • input data compression
  • reinforced learning
  • supervised learning

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Tusor, B., Várkonyi-Kóczy, A., Rudas, I., Klie, G., & Kocsis, G. (2012). An input data set compression method for improving the training ability of neural networks. In 2012 IEEE I2MTC - International Instrumentation and Measurement Technology Conference, Proceedings (pp. 1774-1779). [6229471] https://doi.org/10.1109/I2MTC.2012.6229471

@inproceedings{a91ed66b482c4d488c05f1748d6acc22,
title = "An input data set compression method for improving the training ability of neural networks",
abstract = "Artificial Neural Networks (ANNs) can learn complex functions from input data and are relatively easy to implement in any application. A significant disadvantage of their use, however, is their usually high training time, which scales with the structural parameters of the network and the quantity of input data. Although the training can be done offline, it has a non-negligible cost and can further cause a delay in operation. To speed up the training of ANNs used for classification, we have developed a new training procedure: instead of using the training data directly in the training phase, the data is first clustered and the ANNs are trained using only the centers of the obtained clusters, which are essentially compressed versions of the original input data.",
keywords = "artificial neural networks, class number reductions, classification, clustering, fuzzy neural networks, input data compression, reinforced learning, supervised learning",
author = "Bal{\'a}zs Tusor and A. V{\'a}rkonyi-K{\'o}czy and I. Rudas and G{\'a}bor Klie and G{\'a}bor Kocsis",
year = "2012",
doi = "10.1109/I2MTC.2012.6229471",
language = "English",
isbn = "9781457717710",
pages = "1774--1779",
booktitle = "2012 IEEE I2MTC - International Instrumentation and Measurement Technology Conference, Proceedings",

}
