

The Workshop on Empirical Methods for MAintainability Validation (EMMA-V) is about empirical case studies and empirical methods on software maintainability. Across many application domains, maintainability is economically one of the most relevant quality attributes of software. Unlike other software quality attributes, e.g., performance, it is hard both to predict and to measure. This workshop therefore asks for papers on empirical case studies and methods to model, analyse, predict and measure maintainability. The organisers see great value in community-accepted benchmark example systems. Such example systems could be used to systematically evaluate and compare novel methods from research on software maintainability. In addition, real-world examples of specific maintainability challenges experienced in real-world projects are of high value and interest. Hence, the workshop tries to bring together practitioners and researchers dealing with real-world software maintainability projects, empirical case studies or maintainability community examples.

Location and Date

EMMA-V is held in conjunction with ESEC/FSE 2017 (11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering) in Paderborn, Germany. The workshop date is not fixed yet, but it will fall within 4-8 September 2017.

Important Dates

  • Abstract Submission: tba.
  • Paper Submission: tba.
  • Notification: tba.
  • Camera-Ready Submission: tba.


Software evolution is an established field in software engineering; various conferences and workshops have existed for decades. However, means for the empirical validation of methods to improve or measure maintainability are still lacking. The consequences are, among others, a lack of validated metrics for maintainability, highly inaccurate estimation methods for evolution costs, and the unclear impact and applicability of methods to improve software evolution. In the case of, e.g., the quality attribute performance, a comparison of the model and the related benchmark provides the desired corrections to arrive at a better design solution. Such models and benchmarks are not available for the quality attribute evolvability. It becomes clear that a serious lack of established empirical methods to validate such metrics, models and methods exists in research. Neither controlled experiments involving humans nor software performance engineering are investigated in the domain of evolving systems at large.


Our main intention for this workshop is to share and discuss concepts, tools, methods, and techniques dedicated to the empirical validation of software evolution research. Secondly, we are interested in experiences and studies of empirical validations of research results.

Relevance for Software Engineering

Studies from several domains in software engineering show that considerably more than 50 per cent of IT budgets are spent on software evolution. However, research in software engineering is still mainly focused on the early phases of software development. Nevertheless, software evolution is an established field of research in software engineering, with dedicated conferences running over decades. Interestingly, it is related to many other fields in software engineering, such as software design, software quality and software processes. But a closer look reveals that research in software evolution poses quite a challenge when it comes to the empirical validation of results.

The results of this research, whether methods to support evolution, metrics to measure evolvability or tools for developers, share the challenge of showing their validity, appropriateness and applicability. This is due to the difficulty of simulating software ageing in advance. A posteriori research on existing software repositories frequently fails due to missing scientifically usable documentation. In addition, in larger software projects, design decisions are rarely documented comprehensively over their life-cycle. For research this means that the field of software evolution suffers from a lack of validated metrics for maintainability, from highly inaccurate estimation methods for evolution costs, and from the unclear impact and applicability of methods to improve software evolution. If one compares this situation to that of other quality attributes of software, in particular performance, where improvements in metrics, models and methods can be identified quickly, well-founded empirical methods for the validation of evolving systems become very necessary.

This workshop targets exactly this obstacle to research in software evolution. The workshop is therefore dedicated to collecting empirical methods to validate research on software maintainability, as well as successful empirical research. By empirical research we understand, following C. Robson [Colin Robson. Real World Research. Blackwell, 2nd edition, 2002], not only experiments with humans, but also case study research and surveys. Thus, to learn from each other, both contributions on software performance engineering and on controlled experiments involving humans are welcome. We are particularly interested in appropriate ways to empirically evaluate approaches for managed software evolution. By managed software evolution we mean the integration of phases of operation and the different artefacts of software engineering into a novel methodology for long-living software engineering. This includes ongoing development that adapts software continuously to new requirements and to changes in usage and platforms.


We welcome submissions of two kinds: (i) methods of empirical validation for research in software maintainability, and (ii) reports on the application of empirical research methods to research in software maintainability. We ask for original papers including, but not limited to:

  • Empirical methods specifically tailored for the empirical validation of research in software maintainability
  • Case study proposals for community accepted example systems
  • Discussion of the feasibility of controlled experiments
  • Validations of research with empirical methods
  • Experiences in the application of empirical methods, in particular planned or executed case studies, for example from the following fields:
    • Co-Evolution / life cycle management
    • Run-time models
    • Knowledge carrying software/models/code
    • Variability-awareness
    • Security aspects

Invited Speaker

to be announced.

Workshop Programme

We start with an invited talk in the morning, followed by three sessions with presentations and discussions of accepted research papers.

Details to be announced.

Discussion Stimulation

We intend to have fruitful discussions after each presentation. To stimulate this, each paper is assigned a devil's advocate, who prepares a set of up to three controversial questions and steps into the discussion when appropriate. In addition, we allocate one additional discussion slot of 30 minutes at the end of the workshop to address common issues raised during the presentations or related research questions.


The workshop solicits research and industry papers with a length of 8-10 pages (including figures, tables, appendices and references), in conformance with the ESEC/FSE 2017 Format and Submission Guidelines.

The international programme committee of the workshop will perform a peer review of submitted papers, with two reviews per paper. As a result, the most suitable and relevant papers will be invited for presentation at the workshop. At least one author of each accepted paper must register and present the paper at the workshop in order for the paper to be published in the workshop proceedings.

Paper submission will be managed with the EasyChair submission system: URL tba.


Programme Committee

To be announced.


EMMA-V will be located at the Heinz Nixdorf MuseumsForum, Paderborn.
It will run jointly with ESEC/FSE 2017.
