EMMA-V @ ESEC/FSE 2017
The International Workshop on Empirical Methods for MAintainability Validation (EMMA-V) focuses on empirical case studies and empirical methods for software maintainability. Across many application domains, maintainability is economically one of the most relevant quality attributes of software. Unlike other software quality attributes, e.g., performance, it is hard both to predict and to measure. This workshop therefore asks for papers on empirical case studies and methods to model, analyse, predict and measure maintainability. The organisers see great value in community-accepted benchmark example systems, which could be used to systematically evaluate and compare novel methods from research on software maintainability. In addition, real-world examples of specific maintainability challenges experienced in industrial projects are of high value and interest. Hence, the workshop aims to bring together practitioners and researchers dealing with real-world software maintainability projects, empirical case studies, or maintainability community examples.
Location and Date
EMMA-V is held in conjunction with ESEC/FSE 2017 (11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering) in Paderborn, Germany, on September 5, 2017.
- At the request of the general chair of ESEC/FSE, the page limit for submissions has been changed to 8 pages.
- Paper Submission: May 19, 2017
- Notification: June 23, 2017
- Camera-Ready Submission: July 3, 2017
- Workshop: September 5, 2017
Software evolution is an established field in software engineering; dedicated conferences and workshops have existed for decades. However, means for the empirical validation of methods to improve or to measure maintainability are still lacking. Among the consequences are a lack of validated metrics for maintainability, highly inaccurate estimation methods for evolution costs, and the unclear impact and applicability of methods to improve software evolution. For a quality attribute such as performance, comparing a model against a related benchmark yields the corrections needed to arrive at a better design solution. No such models and benchmarks are available for the quality attribute evolvability. Clearly, research suffers from a serious lack of established empirical methods to validate such metrics, models and methods. Neither controlled experiments involving humans nor software performance engineering is investigated in the domain of evolving systems at large.
Our main intention for this workshop is to share and discuss concepts, tools, methods, and techniques specifically dedicated for the empirical validation of software evolution research. Secondly, we are interested in experiences and studies of empirical validations of research results.
Relevance for Software Engineering
Studies from several domains in software engineering show that considerably more than 50 per cent of IT budgets is spent on software evolution. However, research in software engineering still focuses mainly on the early phases of software development. Nevertheless, software evolution is an established field of research in software engineering, with dedicated conferences running for decades. Interestingly, it is related to many other fields in software engineering, such as software design, software quality and software processes. A closer look reveals, however, that research in software evolution poses quite a challenge when it comes to the empirical validation of results.
Results of this research, whether methods to support evolution, metrics to measure evolvability, or tools for developers, share the challenge of demonstrating their validity, appropriateness and applicability. This is due to the difficulty of simulating software ageing in advance. A posteriori research on existing software repositories often fails due to missing scientifically usable documentation. In addition, design decisions are rarely documented comprehensively over the life-cycle of larger software projects. For research, this means that the field of software evolution suffers from a lack of validated metrics for maintainability, from highly inaccurate estimation methods for evolution costs, and from the unclear impact and applicability of methods to improve software evolution. Compared to other quality attributes of software, in particular performance, where improvements of metrics, models and methods can be identified quickly, well-founded empirical methods for the validation of evolving systems become a pressing need.
This workshop targets exactly this obstacle to research in software evolution. It is therefore dedicated to collecting empirical methods for validating research on software maintainability, as well as successful empirical research. By empirical research we understand, following Runeson et al. [P. Runeson, M. Höst, A. Rainer, and B. Regnell. Case Study Research in Software Engineering: Guidelines and Examples. Wiley, 2012. (Link)], not only experiments with humans, but also case study research and surveys. Thus, to learn from each other, contributions on both software performance engineering and controlled experiments involving humans are welcome. We are particularly interested in appropriate ways to empirically evaluate approaches for managed software evolution. By managed software evolution we understand the integration of the operation phase and the different artefacts of software engineering into a novel methodology for long-living software systems. This includes ongoing development that continuously adapts software to new requirements and to changes in usage and platforms.
We welcome submissions of two kinds: (i) methods of empirical validation for research in software maintainability, and (ii) reports on the application of empirical research methods to research in software maintainability. We ask for original contributions from research and industry including, but not limited to:
- Empirical methods specifically tailored for the empirical validation of research in software maintainability
- Case study proposals for community accepted example systems
- Discussion of the feasibility of controlled experiments
- Validations of research with empirical methods
- Experiences in the application of empirical methods, in particular planned or executed case studies, for example from the following fields:
- Co-Evolution / life cycle management
- Run-time models
- Knowledge carrying software/models/code
- Security aspects
- Industry experience reports from real-world projects
We intend to have fruitful discussions following each presentation. To stimulate this, each paper is assigned a devil's advocate, who is expected to prepare up to three controversial questions and to step into the discussion when appropriate. In addition, we allocate an extra 30-minute discussion slot at the end of the workshop to address common issues raised during the presentations or related research questions.
The workshop solicits novel research and industry papers with a maximum length of 8 pages (including figures, tables, appendices and references), in conformance with the ESEC/FSE 2017 Format and Submission Guidelines.
The international program committee of the workshop will perform a peer review of submitted papers, with two reviews per paper. The most suitable and relevant papers will be invited for presentation at the workshop. At least one author of each accepted paper must register for and present the paper at the workshop in order for the paper to be published in the workshop proceedings in the ACM Digital Library.
Paper submission will be managed with the Easychair submission system under: https://easychair.org/conferences/?conf=emmav2017
Please indicate which type of paper you submit: research or industry.
- Wilhelm Hasselbring, Kiel University, DE
- Bara Buhnova, Masaryk University Brno, CZ
- Lutz Prechelt, FU Berlin, DE
- Björn Regnell, Lund University, SE
- Walter Tichy, KIT, DE
- Birgit Vogel-Heuser, TU Munich, DE
EMMA-V will be located at the Heinz Nixdorf MuseumsForum, Paderborn.
It will run jointly with ESEC/FSE 2017.