Processing models efficiently is an important productivity factor in Model-Driven Engineering (MDE) processes. In order to optimize a toolchain to meet the scalability requirements of complex MDE scenarios, reliable performance measures of different tools are key enablers that can help select the best tool for a given workload. To enable systematic and reproducible benchmarking across different domains, scenarios, and workloads, we propose MONDO-SAM, an extensible MDE benchmarking framework. Beyond providing easily reusable features for common benchmarking tasks based on best practices, our framework puts special emphasis on metrics, which enable scalability analysis along different problem characteristics. To illustrate the practical applicability of our proposal, we demonstrate how different variants of a model validation benchmark, featuring several MDE tools from various technological domains, have been integrated into the system.
Number of pages: 4
Journal: CEUR Workshop Proceedings
Publication status: Published - Jan 1 2014
Event: 2nd Workshop on Scalability in Model Driven Engineering, BigMDE 2014, co-located with the Software Technologies: Applications and Foundations Conference, STAF 2014 - York, United Kingdom
Duration: Jul 24 2014 → Jul 24 2014
ASJC Scopus subject areas
- Computer Science (all)