Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters

A. Rácz, D. Bajusz, K. Héberger

Research output: Contribution to journal › Article

45 Citations (Scopus)

Abstract

Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
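For context on the sum of ranking differences (SRD) procedure referenced above, the following is a minimal, illustrative sketch of the core calculation: each column (e.g. a performance parameter or model) ranks the objects, the ranking is compared with a reference ranking, and the sum of absolute rank differences is the SRD value (smaller = closer to the reference). The function name, the use of the row average as reference, and the tie-free ranking are simplifying assumptions for illustration; the published method additionally normalizes SRD values and validates them against random rankings (CRRN), which is omitted here.

```python
import numpy as np

def sum_of_ranking_differences(data, reference=None):
    """Illustrative sketch of the SRD calculation (not the authors' implementation).

    data      : 2-D array, rows = objects to be ranked (e.g. molecules or models),
                columns = methods / performance parameters being compared.
    reference : 1-D benchmark used to define the reference ranking; if None,
                the row-wise average is used (a common choice when no gold
                standard is available).

    Returns one SRD value per column: the sum of absolute differences between
    that column's ranking of the objects and the reference ranking.
    """
    data = np.asarray(data, dtype=float)
    if reference is None:
        reference = data.mean(axis=1)  # consensus reference: row average

    # 1-based ranks for the reference and for each column (ties not handled here)
    ref_rank = np.argsort(np.argsort(reference)) + 1
    col_ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1

    # SRD per column: Manhattan distance between its ranking and the reference ranking
    return np.abs(col_ranks - ref_rank[:, None]).sum(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical example: 10 models scored by 5 performance parameters
    scores = rng.random((10, 5))
    print(sum_of_ranking_differences(scores))
```

In practice the resulting SRD values would then be normalized and compared against the distribution obtained from random rankings before ranking the models or performance parameters, as done in the paper.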

Original language: English
Pages (from-to): 683-700
Number of pages: 18
Journal: SAR and QSAR in Environmental Research
Volume: 26
Issue number: 7-9
DOIs
Publication status: Published - Sep 2 2015

Keywords

  • cross-validation
  • model selection
  • performance parameters
  • ranking
  • sum of ranking differences

ASJC Scopus subject areas

  • Bioengineering
  • Molecular Medicine
  • Drug Discovery

