Adversarial robustness of linear models: Regularization and dimensionality

István Megyeri, István Hegedűs, M. Jelasity

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many machine learning models are sensitive to adversarial input, meaning that very small but carefully designed noise added to correctly classified examples may lead to misclassification. The reasons for this are still poorly understood, even in the simple case of linear models. Here, we study linear models and offer a number of novel insights. We focus on the effect of regularization and dimensionality. We show that in very high dimensions adversarial robustness is inherently very low due to some mathematical properties of high-dimensional spaces that have received little attention so far. We also demonstrate that—although regularization may help—adversarial robustness is harder to achieve than high accuracy during the learning process. This is typically overlooked when researchers set optimization meta-parameters.
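The geometry underlying these observations can be illustrated with a short sketch (this is not the paper's code; the function name and random data are illustrative). For a linear classifier f(x) = w·x + b, the smallest L2 perturbation that moves x onto the decision boundary is delta = -((w·x + b)/||w||^2)·w, whose length is the margin |w·x + b|/||w||; for typical high-dimensional inputs this margin tends to be small relative to ||x||:

```python
import numpy as np

def min_l2_perturbation(w, b, x):
    """Smallest L2 perturbation moving x onto the hyperplane w.x + b = 0."""
    w = np.asarray(w, dtype=float)
    margin = w @ x + b
    return -(margin / (w @ w)) * w

# In high dimensions, independent unit-norm w and x are nearly orthogonal,
# so the margin |w.x| shrinks roughly like 1/sqrt(d): a perturbation that is
# tiny compared to ||x|| = 1 already reaches the decision boundary.
rng = np.random.default_rng(0)
d = 10_000
w = rng.standard_normal(d); w /= np.linalg.norm(w)
x = rng.standard_normal(d); x /= np.linalg.norm(x)

delta = min_l2_perturbation(w, 0.0, x)
print(np.linalg.norm(delta))  # small compared to ||x|| = 1
```

The sketch only shows the textbook closed-form distance to a linear decision boundary; the paper's analysis of regularization and of how meta-parameters affect robustness during training goes beyond this.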

Original language: English
Title of host publication: ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Publisher: ESANN (i6doc.com)
Pages: 61-66
Number of pages: 6
ISBN (Electronic): 9782875870650
Publication status: Published - Jan 1 2019
Event: 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019 - Bruges, Belgium
Duration: Apr 24 2019 - Apr 26 2019

Publication series

Name: ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

Conference

Conference: 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019
Country: Belgium
City: Bruges
Period: 4/24/19 - 4/26/19

ASJC Scopus subject areas

  • Artificial Intelligence
  • Information Systems

Cite this

Megyeri, I., Hegedűs, I., & Jelasity, M. (2019). Adversarial robustness of linear models: Regularization and dimensionality. In ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 61-66). (ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning). ESANN (i6doc.com).

Adversarial robustness of linear models: Regularization and dimensionality. / Megyeri, István; Hegedűs, István; Jelasity, M.

ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN (i6doc.com), 2019. p. 61-66 (ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning).

Megyeri, I, Hegedűs, I & Jelasity, M 2019, Adversarial robustness of linear models: Regularization and dimensionality. in ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN (i6doc.com), pp. 61-66, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019, Bruges, Belgium, 4/24/19.
Megyeri I, Hegedűs I, Jelasity M. Adversarial robustness of linear models: Regularization and dimensionality. In ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN (i6doc.com). 2019. p. 61-66. (ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning).
Megyeri, István ; Hegedűs, István ; Jelasity, M. / Adversarial robustness of linear models: Regularization and dimensionality. ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN (i6doc.com), 2019. pp. 61-66 (ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning).
@inproceedings{63d8205a448449debbe1af99e7f319cf,
title = "Adversarial robustness of linear models: Regularization and dimensionality",
abstract = "Many machine learning models are sensitive to adversarial input, meaning that very small but carefully designed noise added to correctly classified examples may lead to misclassification. The reasons for this are still poorly understood, even in the simple case of linear models. Here, we study linear models and offer a number of novel insights. We focus on the effect of regularization and dimensionality. We show that in very high dimensions adversarial robustness is inherently very low due to some mathematical properties of high-dimensional spaces that have received little attention so far. We also demonstrate that—although regularization may help—adversarial robustness is harder to achieve than high accuracy during the learning process. This is typically overlooked when researchers set optimization meta-parameters.",
author = "Istv{\'a}n Megyeri and Istv{\'a}n Heged{\H u}s and M. Jelasity",
year = "2019",
month = "1",
day = "1",
language = "English",
series = "ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning",
publisher = "ESANN (i6doc.com)",
isbn = "9782875870650",
pages = "61--66",
booktitle = "ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning",

}

TY - GEN

T1 - Adversarial robustness of linear models

T2 - Regularization and dimensionality

AU - Megyeri, István

AU - Hegedűs, István

AU - Jelasity, M.

PY - 2019/1/1

Y1 - 2019/1/1

N2 - Many machine learning models are sensitive to adversarial input, meaning that very small but carefully designed noise added to correctly classified examples may lead to misclassification. The reasons for this are still poorly understood, even in the simple case of linear models. Here, we study linear models and offer a number of novel insights. We focus on the effect of regularization and dimensionality. We show that in very high dimensions adversarial robustness is inherently very low due to some mathematical properties of high-dimensional spaces that have received little attention so far. We also demonstrate that—although regularization may help—adversarial robustness is harder to achieve than high accuracy during the learning process. This is typically overlooked when researchers set optimization meta-parameters.

AB - Many machine learning models are sensitive to adversarial input, meaning that very small but carefully designed noise added to correctly classified examples may lead to misclassification. The reasons for this are still poorly understood, even in the simple case of linear models. Here, we study linear models and offer a number of novel insights. We focus on the effect of regularization and dimensionality. We show that in very high dimensions adversarial robustness is inherently very low due to some mathematical properties of high-dimensional spaces that have received little attention so far. We also demonstrate that—although regularization may help—adversarial robustness is harder to achieve than high accuracy during the learning process. This is typically overlooked when researchers set optimization meta-parameters.

UR - http://www.scopus.com/inward/record.url?scp=85071309601&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071309601&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85071309601

SN - 9782875870650

T3 - ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

SP - 61

EP - 66

BT - ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

PB - ESANN (i6doc.com)

ER -