
Publications of Anne Koziolek (Martens)

Refereed Journal Articles

[1] Anne Koziolek, Alberto Avritzer, Sindhu Suresh, Daniel S. Menasché, Morganna Diniz, Edmundo de Souza e Silva, Rosa M. Leão, Kishor Trivedi, and Lucia Happe. Assessing survivability to support power grid investment decisions. Reliability Engineering & System Safety, 155:30-43, 2016. [ bib | DOI | http | .pdf | Abstract ]
The reliability of power grids has been a subject of study for the past few decades. Traditionally, detailed models are used to assess how the system behaves after failures. Such models, based on power flow analysis and detailed simulations, yield accurate characterizations of the system under study. However, they fall short on scalability. In this paper, we propose an efficient and scalable approach to assess the survivability of power systems. Our approach takes into account the phased recovery of the system after a failure occurs. The proposed phased-recovery model yields metrics such as the expected accumulated energy not supplied between failure and full recovery. Leveraging the predictive power of the model, we use it as part of an optimization framework to assist in investment decisions. Given a budget and an initial circuit to be upgraded, we propose heuristics to sample the solution space in a principled way, accounting for survivability-related metrics. We have evaluated the feasibility of this approach by applying it to the design of a benchmark distribution automation circuit. Our empirical results indicate that the combination of survivability and power flow analysis can provide meaningful investment decision support for power systems engineers.
[2] Eya Ben Charrada, Anne Koziolek, and Martin Glinz. Supporting requirements update during software evolution. Journal of Software: Evolution and Process, 27(3):166-194, 2015. [ bib | DOI | http | Abstract ]
Updating the requirements specification when software systems evolve is a manual task that is expensive and time-consuming. Therefore, maintainers usually apply the changes to the code directly and leave the requirements unchanged. This results in the requirements rapidly becoming obsolete and useless. In this paper, we propose an approach that supports the maintainer in keeping the requirements specification consistent with the implementation, by identifying the requirements that are impacted whenever the code is changed. Our approach works as follows. First, we analyse the changes that have been applied to the source code and detect whether or not they are likely to impact the requirements. Second, we trace the requirements-impacting changes back to the requirements specification to identify the parts that might need to be modified. The output of the tracing is a list of requirements that are sorted according to their likelihood of being impacted. Automatically identifying the parts of the requirements specification that are likely to need maintenance reduces the effort needed for keeping the requirements up-to-date and thus makes the task of the maintainer easier. When applying our approach in three case studies, 70% to 100% of the impacted requirements were identified within a list that includes less than 20% of the total number of requirements in the specification.
[3] Fabian Brosig, Philipp Meier, Steffen Becker, Anne Koziolek, Heiko Koziolek, and Samuel Kounev. Quantitative evaluation of model-driven performance analysis and simulation of component-based architectures. IEEE Transactions on Software Engineering, 41(2):157-175, February 2015. [ bib | DOI | Abstract ]
During the last decade, researchers have proposed a number of model transformations enabling performance predictions. These transformations map performance-annotated software architecture models into stochastic models solved by analytical means or by simulation. However, so far, a detailed quantitative evaluation of the accuracy and efficiency of different transformations is missing, making it hard to select an adequate transformation for a given context. This paper provides an in-depth comparison and quantitative evaluation of representative model transformations to, e.g., Queueing Petri Nets and Layered Queueing Networks. The semantic gaps between typical source model abstractions and the different analysis techniques are revealed. The accuracy and efficiency of each transformation are evaluated by considering four case studies representing systems of different size and complexity. The presented results and insights gained from the evaluation help software architects and performance engineers to select the appropriate transformation for a given context, thus significantly improving the usability of model transformations for performance prediction.
[4] Daniel Sadoc Menasché, Alberto Avritzer, Sindhu Suresh, Rosa M. Leão, Edmundo de Souza e Silva, Morganna Diniz, Kishor Trivedi, Lucia Happe, and Anne Koziolek. Assessing survivability of smart grid distribution network designs accounting for multiple failures. Concurrency and Computation: Practice and Experience, 26(12):1949-1974, 2014. [ bib | DOI | http | .pdf | Abstract ]
Smart grids are fostering a paradigm shift in the realm of power distribution systems. Whereas traditionally different components of the power distribution system have been provided and analyzed by different teams through different lenses, smart grids require a unified and holistic approach that takes into consideration the interplay of communication reliability, energy backup, distribution automation topology, energy storage, and intelligent features such as automated fault detection, isolation, and restoration (FDIR) and demand response. In this paper, we present an analytical model and metrics for the survivability assessment of the distribution power grid network. The proposed metrics extend the system average interruption duration index, accounting for the fact that after a failure, the energy demand and supply will vary over time during a multi-step recovery process. The analytical model used to compute the proposed metrics is built on top of three design principles: state space factorization, state aggregation, and initial state conditioning. Using these principles, we reduce a Markov chain model with large state space cardinality to a set of much simpler models that are amenable to analytical treatment and efficient numerical solution. In the case where demand response is not integrated with FDIR, we provide closed-form solutions for the metrics of interest, such as the mean time to repair a given set of sections. Under specific independence assumptions, we show how the proposed methodology can be adapted to account for multiple failures. We have evaluated the presented model using data from a real power distribution grid, and we have found that the survivability of distribution power grids can be improved by the integration of the demand response feature with automated FDIR approaches. Our empirical results indicate the importance of quantifying survivability to support investment decisions at different parts of the power grid distribution network.
[5] Catia Trubiani, Anne Koziolek, Vittorio Cortellessa, and Ralf Reussner. Guilt-based handling of software performance antipatterns in Palladio architectural models. Journal of Systems and Software, 95:141-165, 2014. [ bib | DOI | http | .pdf | Abstract ]
Antipatterns are conceptually similar to patterns in that they document recurring solutions to common design problems. Software Performance Antipatterns document common performance problems in the design as well as their solutions. The definition of performance antipatterns concerns software properties that can include static, dynamic, and deployment aspects. To make use of such knowledge, we propose an approach that helps software architects to identify and solve performance antipatterns. Our approach provides software performance feedback to architects, since it suggests the design alternatives that allow overcoming the detected performance problems. The feedback process may be quite complex, since architects may have to assess several design options before achieving the architectural model that best fits the end-user expectations. In order to optimise this process, we introduce a ranking methodology that identifies, among a set of detected antipatterns, the “guilty” ones, i.e. the antipatterns that most likely contribute to the violation of specific performance requirements. The introduction of our ranking process leads the system to converge towards the desired performance improvement by discarding a considerable portion of the design alternatives. Four case studies in different application domains have been used to assess the validity of the approach.
[6] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. Modeling Run-Time Adaptation at the System Architecture Level in Dynamic Service-Oriented Environments. Service Oriented Computing and Applications Journal (SOCA), 8(1):73-89, 2014, Springer London. [ bib | DOI | .pdf ]
[7] Aldeida Aleti, Barbora Buhnova, Lars Grunske, Anne Koziolek, and Indika Meedeniya. Software architecture optimization methods: A systematic literature review. IEEE Transactions on Software Engineering, 39(5):658-683, 2013, IEEE. [ bib | DOI | .pdf | Abstract ]
Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.
[8] Anne Koziolek, Danilo Ardagna, and Raffaela Mirandola. Hybrid multi-attribute QoS optimization in component based software systems. Journal of Systems and Software, 86(10):2542-2558, 2013, Elsevier. Special Issue on Quality Optimization of Software Architecture and Design Specifications. [ bib | DOI | http | .pdf | Abstract ]
Design decisions for complex, component-based systems impact multiple quality of service (QoS) properties. Often, means to improve one quality property deteriorate another one. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization where the objectives represent different quality attributes. The aim of these techniques is to devise a set of solutions, each of which assures an optimal trade-off between the conflicting qualities. Our previous work proposed a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify an optimal set of design alternatives with respect to performance and costs. This paper extends this approach to more QoS properties by providing analytical algorithms for availability-cost optimization and three-dimensional availability-performance-cost optimization. We demonstrate the use of this approach on a case study, showing that the analytical step provides a better-than-random starting population for the evolutionary optimization, which led to a speed-up of 28% in the availability-cost case.
[9] Daniel Dominguez Gouvêa, Cyro Muniz, Gilson Pinto, Alberto Avritzer, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Luca Berardinelli, Julius C. B. Leite, Daniel Mossé, Yuanfang Cai, Michael Dalton, Lucia Happe, and Anne Koziolek. Experience with model-based performance, reliability and adaptability assessment of a complex industrial architecture. Software and Systems Modeling, pages 1-23, 2012, Springer-Verlag. Special Issue on Performance Modeling. [ bib | DOI | .pdf | Abstract ]
In this paper, we report on our experience with the application of validated models to assess performance, reliability, and adaptability of a complex mission critical system that is being developed to dynamically monitor and control the position of an oil-drilling platform. We present real-time modeling results that show that all tasks are schedulable. We performed stochastic analysis of the distribution of task execution time as a function of the number of system interfaces. We report on the variability of task execution times for the expected system configurations. In addition, we have executed a system library for an important task inside the performance model simulator. We report on the measured algorithm convergence as a function of the number of vessel thrusters. We have also studied the system architecture adaptability by comparing the documented system architecture and the implemented source code. We report on the adaptability findings and the recommendations we were able to provide to the system's architect. Finally, we have developed models of hardware and software reliability. We report on hardware and software reliability results based on the evaluation of the system architecture.
[10] Anne Martens, Heiko Koziolek, Lutz Prechelt, and Ralf Reussner. From monolithic to component-based performance evaluation of software architectures. Empirical Software Engineering, 16(5):587-622, 2011, Springer Netherlands. [ bib | DOI | http | .pdf | Abstract ]
Background: Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the needed effort for modelling are heavily influenced by human factors, which are so far hardly understood empirically.

Objective: Do component-based methods allow making performance predictions with comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users.

Methods: We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance.

Results: For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because effort to reuse can be explained by a model that is independent of the inner complexity of a component.

Limitations: The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process.

Conclusions: Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.

Refereed Conference/Workshop Papers

[1] Erik Burger, Victoria Mittelbach, and Anne Koziolek. Model-driven consistency preservation in cyber-physical systems. In Proceedings of the 11th Workshop on Models@run.time co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016), CEUR Workshop Proceedings, October 2016. [ bib | http | .pdf ]
[2] Jörg Kienzle, Anne Koziolek, Axel Busch, and Ralf Reussner. Towards concern-oriented design of component-based systems. In 3rd International Workshop on Interplay of Model-Driven and Component-Based Software Engineering, CEUR, October 2016. [ bib | Abstract ]
Component-based software engineering (CBSE) is a modern way of developing software that is based on defining, implementing and composing loosely coupled, independent components, thus increasing modularity, analysability, separation of concerns and reuse. However, separation of concerns is sometimes difficult to achieve in CBSE, as some concerns might crosscut several components. Furthermore, reuse of components is sometimes limited, in particular because component developers make certain implementation choices that are incompatible with the non-functional requirements of the application that is being built. In this paper we outline how to integrate CBSE and concern-oriented reuse (CORE), a novel reuse paradigm that extends Model-Driven Engineering (MDE) with best practices from aspect-oriented software composition and Software Product Lines (SPL). Concretely, we outline how to combine the Palladio Component Model (PCM) capable of expressing complex software architectures with CORE class and sequence diagrams for low-level design. As a result, multiple solutions for addressing concerns that might even crosscut component boundaries can be modularized in a reusable way, and integrated with applications that reuse them using aspect-oriented techniques. Additionally, thanks to CORE, component developers can avoid premature decision making when reusing existing libraries during implementation.
[3] Michele Ciavotta, Danilo Ardagna, and Anne Koziolek. Palladio optimization suite: QoS optimization for component-based cloud applications. In Proceedings of the 9th EAI International Conference on Performance Evaluation Methodologies and Tools, Berlin, Germany, 2016, VALUETOOLS'15, pages 170-171. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), Brussels, Belgium. 2016. [ bib | DOI | http | .pdf ]
[4] Axel Busch and Anne Koziolek. Considering Not-quantified Quality Attributes in an Automated Design Space Exploration. In Proceedings of the 12th International ACM SIGSOFT Conference on the Quality of Software Architectures, Venice, Italy, 2016, QoSA'16, pages 50-59. IEEE. 2016. [ bib | DOI | .pdf | Abstract ]
In a software design process, the quality of the resulting software system is highly driven by the quality of its software architecture. In such a process, trade-off decisions must be made between multiple quality attributes, such as performance or security, that are often competing. Several approaches exist to improve software architectures either quantitatively or qualitatively. The first group of approaches requires quantifying each quality attribute to be considered in the design process, while the latter group often consists of fully manual processes. However, time and cost constraints often make it impossible to either quantify all relevant quality attributes or manually evaluate candidate architectures. Our approach to the problem is to quantify the most important quality requirements, combine them with several not-quantified quality attributes, and use them together in an automated design space exploration process. As our basis, we used the PerOpteryx design space exploration approach, which requires quantified measures for its optimization engine, and extended it in order to combine them with not-quantified quality attributes. By this, our approach allows optimizing the design space while considering even quality attributes that cannot be quantified due to cost constraints or a lack of quantification methodologies. We applied our approach to two case studies to demonstrate its benefits. We showed how performance can be balanced against not-quantified quality attributes, such as security, using an example derived from an industry case study.
[5] Christian Stier and Anne Koziolek. Considering Transient Effects of Self-Adaptations in Model-Driven Performance Analyses. In 2016 12th International ACM SIGSOFT Conference on Quality of Software Architectures (QoSA), Venice, Italy, 2016, QoSA'16. ACM. 2016. [ bib | DOI | Abstract ]
Model-driven performance engineering allows software architects to reason on performance characteristics of a software system in early design phases. In recent years, model-driven analysis techniques have been developed to evaluate performance characteristics of self-adaptive software systems. These techniques aim to reason on the ability of a self-adaptive software system to fulfill performance requirements in transient phases. A transient phase is the interval in which the behavior of the system changes, e.g., due to a burst in user requests. However, the effectiveness and efficiency with which a system is able to adapt depends not only on the time when it triggers adaptation actions but also on the time at which they are completed. Executing an adaptation action can cause additional stress on the adapted system. This can further impede the performance of the system in the transient phase. Model-driven analyses of self-adaptive software do not consider these transient effects. This paper outlines an approach for evaluating transient effects in model-driven analyses of self-adaptive software systems. The evaluation applied our approach to a horizontally scaling media hosting application in three experiments. By considering the delay in booting new Virtual Machines (VMs), we were able to improve the accuracy of predicted response times. The second and third experiment demonstrated that the increased accuracy enables an early detection and resolution of design deficiencies of self-adaptive software systems.
[6] Axel Busch, Qais Noorshams, Samuel Kounev, Anne Koziolek, Ralf Reussner, and Erich Amrehn. Automated workload characterization for I/O performance analysis in virtualized environments. In Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik, 2016, pages 27-28. [ bib | .html | .pdf ]
[7] Axel Busch, Yves Schneider, Anne Koziolek, Kiana Rostami, and Jörg Kienzle. Modelling the Structure of Reusable Solutions for Architecture-based Quality Evaluation. In Proceedings of the 2nd Workshop on Cloud Security and Data Privacy by Design co-located with the 8th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2016), Luxembourg, 2016, CloudSPD'16, pages 521-526. IEEE. 2016. [ bib | DOI | http | .pdf | Abstract ]
When designing cloud applications, many decisions must be made, such as the selection of the right set of software components. Often, there are several third-party implementations on the market, so software architects have to choose between solutions that are functionally very similar. Even though they are comparable in functionality, the solutions differ in their quality attributes and in their software architecture. This diversity hinders automated decision support in model-driven engineering approaches, since current state-of-the-art approaches for automated quality estimation often rely on similar architectures to compare several solutions. In this paper, we address this problem by contributing a metamodel that unifies the architecture of several functionally similar solutions and describes the different solutions' architectural degrees of freedom. Such a model can later be used to extend the process of reuse from reusing libraries to reusing the corresponding models of these libraries, with the lasting benefit of automated design-time decision support for deploying applications into the cloud. Finally, we apply our approach to two intrusion detection systems.
[8] Lukas Märtin, Hauke Baller, Anne Koziolek, and Ralf H. Reussner. Fault-aware Pareto frontier exploration for dependable system architectures. In Proceedings of the 3rd International Workshop on Interplay of Model-Driven and Component-Based Software Engineering (ModComp) @ MoDELS 2016, 2016. [ bib ]
[9] Lukas Märtin, Anne Koziolek, and Ralf H. Reussner. Quality-oriented decision support for maintaining architectures of fault-tolerant space systems. In Proceedings of the 2015 European Conference on Software Architecture Workshops, Dubrovnik/Cavtat, Croatia, September 7-11, 2015, Ivica Crnkovic, editor, 2015, pages 49:1-49:5. ACM. 2015. [ bib | DOI | http ]
[10] Catia Trubiani, Anne Koziolek, and Lucia Happe. Exploiting software performance engineering techniques to optimise the quality of smart grid environments. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 199-202. ACM, New York, NY, USA. 2015. [ bib | DOI | http | .pdf ]
[11] Axel Busch, Qais Noorshams, Samuel Kounev, Anne Koziolek, Ralf Reussner, and Erich Amrehn. Automated Workload Characterization for I/O Performance Analysis in Virtualized Environments. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 265-276. ACM, New York, NY, USA. 2015, Acceptance Rate (Full Paper): 15/56 = 27%. [ bib | DOI | http | .pdf | Abstract ]
Next-generation IT infrastructures are highly driven by virtualization technology, which enables flexible and efficient resource sharing that improves system agility and reduces costs for IT services. Due to the sharing of resources and the increasing requirements of modern applications on I/O processing, the performance of storage systems is becoming a crucial factor. In particular, when migrating or consolidating different applications, the impact on their performance behavior is often an open question. Performance modeling approaches help to answer such questions; a prerequisite, however, is to find an appropriate workload characterization that is both easy to obtain from applications and sufficient to capture their important characteristics. In this paper, we present an automated workload characterization approach that extracts a workload model representing the main aspects of I/O-intensive applications in virtualized environments using relevant workload parameters, e.g., request size and read-write ratio. Once extracted, workload models can be used to emulate the workload performance behavior in real-world scenarios such as migration and consolidation. We demonstrate our approach in the context of two case studies of representative system environments. We present an in-depth evaluation of our workload characterization approach, showing its effectiveness in workload migration and consolidation scenarios. We use an IBM System z equipped with an IBM DS8700 and a Sun Fire system as state-of-the-art virtualized environments. Overall, the evaluation of our workload characterization approach shows promising results for capturing the relevant factors of I/O-intensive applications.
[12] Alberto Avritzer, Laura Carnevali, Hamed Ghasemieh, Lucia Happe, Boudewijn R. Haverkort, Anne Koziolek, Daniel Menasche, Anne Remke, Sahra Sedigh Sarvestani, and Enrico Vicario. Survivability evaluation of gas, water and electricity infrastructures. In Proceedings of the Seventh International Workshop on the Practical Application of Stochastic Modelling (PASM), 2015, volume 310 of Electronic Notes in Theoretical Computer Science, pages 5-25. 2015. [ bib | DOI | http | Abstract ]
The infrastructures used in cities to supply power, water and gas are consistently becoming more automated. As society depends critically on these cyber-physical infrastructures, their survivability assessment deserves more attention. In this overview, we first touch upon a taxonomy on survivability of cyber-physical infrastructures, before we focus on three classes of infrastructures (gas, water and electricity) and discuss recent modelling and evaluation approaches and challenges.
[13] Axel Busch, Misha Strittmatter, and Anne Koziolek. Assessing Security to Compare Architecture Alternatives of Component-Based Systems. In Proceedings of the IEEE International Conference on Software Quality, Reliability & Security, Vancouver, British Columbia, Canada, 2015, QRS '15, pages 99-108. IEEE Computer Society. 2015, Acceptance Rate (Full Paper): 20/91 = 22%. [ bib | DOI | .pdf | Abstract ]
Modern software development is typically performed by composing a software system from building blocks. The component-based paradigm has many advantages. However, security quality attributes of the overall architecture often remain unspecified and therefore cannot be considered when comparing several architecture alternatives. In this paper, we propose an approach for assessing the security of component-based software architectures. Our hierarchical model uses stochastic modeling techniques and includes several security-related factors, such as attackers, their goals, the security attributes of a component, and the mutual security interferences between them. Applied to a component-based architecture, our approach yields its mean time to security failure, which assesses its degree of security. We extended the Palladio Component Model (PCM) with the necessary information to be able to use it as input for the security assessment. We use the PCM representation to show the applicability of our approach on an industry-related example.
[14] Christian Stier, Anne Koziolek, Henning Groenda, and Ralf Reussner. Model-Based Energy Efficiency Analysis of Software Architectures. In Proceedings of the 9th European Conference on Software Architecture (ECSA '15), Dubrovnik/Cavtat, Croatia, 2015, Lecture Notes in Computer Science. Springer. 2015, Acceptance Rate (Full Paper): 15/80 = 18.8%. [ bib | DOI | http | .pdf | Abstract ]
Design-time quality analysis of software architectures evaluates the impact of design decisions in quality dimensions such as performance. Architectural design decisions decisively impact the energy efficiency (EE) of software systems. Low EE not only results in higher operational cost due to power consumption; it also indirectly necessitates additional capacity in the power distribution infrastructure of the target deployment environment. Methodologies that analyze the EE of software systems have yet to reach an abstraction suited for architecture-level reasoning. This paper outlines a model-based approach for evaluating the EE of software architectures. First, we present a model that describes the central power consumption characteristics of a software system. We couple the model with an existing model-based performance prediction approach to evaluate the consumption characteristics of a software architecture in varying usage contexts. Several experiments show the accuracy of our architecture-level consumption predictions. Energy consumption predictions reach an error of less than 5.5% for stable and 3.7% for varying workloads. Finally, we present a round-trip design scenario that illustrates how the explicit consideration of EE supports software architects in making informed trade-off decisions between performance and EE.
[15] Alberto Avritzer, Laura Carnevali, Lucia Happe, Anne Koziolek, Daniel Sadoc Menasche, Marco Paolieri, and Sindhu Suresh. A scalable approach to the assessment of storm impact in distributed automation power grids. In Quantitative Evaluation of Systems, 11th International Conference, QEST 2014, Florence, Italy, September 8-10, 2014, Proceedings, Gethin Norman and William Sanders, editors, 2014, volume 8657 of Lecture Notes in Computer Science, pages 345-367. Springer-Verlag Berlin Heidelberg. 2014. [ bib | http | Abstract ]
We present models and metrics for the survivability assessment of distribution power grid networks accounting for the impact of multiple failures due to large storms. The analytical models used to compute the proposed metrics are built on top of three design principles: state space factorization, state aggregation, and initial state conditioning. Using these principles, we build scalable models that are amenable to analytical treatment and efficient numerical solution. Our models capture the impact of using reclosers and tie switches to enable faster service restoration after large storms. We have evaluated the presented models using data from a real power distribution grid impacted by a large storm: Hurricane Sandy. Our empirical results demonstrate that our models are able to efficiently evaluate the impact of storm-hardening investment alternatives on customer-affecting metrics such as the expected energy not supplied until complete system recovery.
[16] Rebekka Wohlrab, Thijmen de Gooijer, Anne Koziolek, and Steffen Becker. Experience of pragmatically combining RE methods for performance requirements in industry. In Proceedings of the 22nd IEEE International Requirements Engineering Conference (RE), Aug 2014, pages 344-353. [ bib | DOI | .pdf | Abstract ]
To meet end-user performance expectations, precise performance requirements are needed during development and testing, e.g., to conduct detailed performance and load tests. However, in practice, several factors complicate performance requirements elicitation: a lack of skills in performance requirements engineering, outdated or unavailable functional specifications and architecture models, the difficulty of specifying the system's context, a lack of experience in collecting good performance requirements in an industrial setting with very limited time, etc. Among the small set of available non-functional requirements engineering methods, no method exists that alone leads to precise and complete performance requirements with feasible effort and that has been reported to work in an industrial setting. In this paper, we present our experiences in combining existing requirements engineering methods into a performance requirements method called PROPRE. It has been designed to require no up-to-date system documentation and to be applicable with limited time and effort. We have successfully applied PROPRE in an industrial case study from the process automation domain. Our lessons learned show that the stakeholders gathered good performance requirements which now improve performance testing.
[17] Zoya Durdik, Anne Koziolek, and Ralf Reussner. How the Understanding of the Effects of Design Decisions Informs Requirements Engineering. In Proceedings of the 2nd International Workshop on the Twin Peaks of Requirements and Architecture (TwinPeaks), May 2013, pages 14-18. [ bib | DOI ]
[18] Alberto Avritzer, Sindhu Suresh, Daniel Sadoc Menasché, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Kishor Trivedi, Lucia Happe, and Anne Koziolek. Survivability models for the assessment of smart grid distribution automation network designs. In Proceedings of the fourth ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, 2013, ICPE '13, pages 241-252. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[19] Anne Koziolek, Alberto Avritzer, Sindhu Suresh, Daniel Sadoc Menasche, Kishor Trivedi, and Lucia Happe. Design of distribution automation networks using survivability modeling and power flow equations. In 2013 IEEE 24th International Symposium on Software Reliability Engineering (ISSRE), 2013, pages 41-50. [ bib | DOI | .pdf ]
[20] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System Architectures. In Proceedings of the 9th IEEE International Conference on e-Business Engineering (ICEBE 2012), Hangzhou, China, September 9-11, 2012, pages 70-77. IEEE Computer Society, Los Alamitos, CA, USA. September 2012, Acceptance Rate (Full Paper): 19.7% (26/132). [ bib | DOI | http | .pdf | Abstract ]
Modern virtualized system environments usually host diverse applications of different parties and aim at utilizing resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system specific, it is a challenge to abstract from system details enabling the reuse of adaptation strategies. In this paper, we propose a novel modeling language (meta-model) providing means to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to three different realistic contexts (dynamic resource allocation, software architecture optimization, and run-time adaptation planning) showing how the gap between complex manual adaptations and their autonomous execution can be closed by using a holistic model-based approach.
[21] Eya Ben Charrada, Anne Koziolek, and Martin Glinz. Identifying outdated requirements based on source code changes. In Proceedings of the 20th IEEE International Requirements Engineering Conference (RE 2012), 2012, pages 61-70. [ bib | DOI | http | Abstract ]
Keeping requirements specifications up-to-date when systems evolve is a manual and expensive task. Software engineers have to go through the whole requirements document and look for the requirements that are affected by a change. Consequently, engineers usually apply changes to the implementation directly and leave requirements unchanged. In this paper, we propose an approach for automatically detecting outdated requirements based on changes in the code. Our approach first identifies the changes in the code that are likely to affect requirements. Then it extracts a set of keywords describing the changes. These keywords are traced to the requirements specification, using an existing automated traceability tool, to identify affected requirements. Automatically identifying outdated requirements reduces the effort and time needed for the maintenance of requirements specifications significantly and thus helps preserve the knowledge contained in them. We evaluated our approach in a case study where we analyzed two consecutive source code versions and were able to detect 12 requirements-related changes out of 14 with a precision of 79%. Then we traced a set of keywords we extracted from these changes to the requirements specification. In comparison to simply tracing changed classes to requirements, we got better results in most cases.
[22] Thijmen de Gooijer, Anton Jansen, Heiko Koziolek, and Anne Koziolek. An industrial case study of performance and cost design space exploration. In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering, Lizy Kurian John and Diwakar Krishnamurthy, editors, Boston, Massachusetts, USA, 2012, ICPE '12, pages 205-216. ACM, New York, NY, USA. 2012, ICPE Best Industry-Related Paper Award. [ bib | DOI | http | .pdf ]
[23] Anne Koziolek. Research preview: Prioritizing quality requirements based on software architecture evaluation feedback. In Requirements Engineering: Foundation for Software Quality, Björn Regnell and Daniela Damian, editors, 2012, volume 7195 of Lecture Notes in Computer Science, pages 52-58. Springer Verlag Berlin Heidelberg. 2012. [ bib | DOI | http | .pdf | Abstract ]
[Context and motivation] Quality requirements are a main driver for architectural decisions of software systems. Although the need for iterative handling of requirements and architecture has been identified, current architecture design processes do not provide systematic, quantitative feedback for the prioritization and cost/benefit considerations for quality requirements. [Question/problem] Thus, in practice stakeholders still often state and prioritize quality requirements before knowing the software architecture, i.e. without knowledge about the quality dependencies, conflicts, incurred costs, and technical feasibility. However, as quality properties usually are cross-cutting architecture concerns, estimating the effects of design decisions is difficult. Thus, stakeholders cannot reliably know the appropriate required level of quality. [Principal ideas/results] In this research proposal, we suggest an approach to generate feedback from quantitative architecture evaluation to requirements engineering, in particular to requirements prioritization. We propose to use automated design space exploration techniques to generate information about available trade-offs. Final quality requirement prioritization is deferred until first feedback from architecture evaluation is available. [Contribution] In this paper, we present the process model of our approach enabling feedback to requirement prioritization and describe application scenarios and an example.
[24] Anne Koziolek. Architecture-driven quality requirements prioritization. In First International Workshop on the Twin Peaks of Requirements and Architecture (TwinPeaks 2012), 2012, pages 15-19. IEEE Computer Society. 2012. [ bib | DOI | http | .pdf | Abstract ]
Quality requirements are main drivers for architectural decisions of software systems. However, in practice they are often dismissed during development, because of initially unknown dependencies and consequences that complicate implementation. To decide on meaningful, feasible quality requirements and trade them off with functional requirements, tighter integration of software architecture evaluation and requirements prioritization is necessary. In this position paper, we propose a tool-supported method for architecture-driven feedback into requirements prioritization. Our method uses automated design space exploration based on quantitative quality evaluation of software architecture models. It helps requirements analysts and software architects to study the quality trade-offs of a software architecture, and use this information for requirements prioritization.
[25] Anne Koziolek, Lucia Happe, Alberto Avritzer, and Sindhu Suresh. A common analysis framework for smart distribution networks applied to survivability analysis of distribution automation. In Proceedings of the First International Workshop on Software Engineering Challenges for the Smart Grid (SE-SmartGrids 2012), 2012, pages 23-29. IEEE. 2012. [ bib | DOI | http | .pdf | Abstract ]
Smart distribution networks shall improve the efficiency and reliability of power distribution by intelligently managing the available power and requested load. Such intelligent power networks pose challenges for information and communication technology (ICT). Their design requires a holistic assessment of traditional power system topology and ICT architecture. Existing analysis approaches focus on analyzing the power network's components separately. For example, communication simulation provides failure data for communication links, while power analysis makes predictions about the stability of the traditional power grid. However, these insights are not combined to provide a basis for design decisions for future smart distribution networks. In this paper, we describe a common model-driven analysis framework for smart distribution networks based on the Common Information Model (CIM). This framework provides scalable analysis of large smart distribution networks by supporting analyses on different levels of abstraction. Furthermore, we apply our framework to holistic survivability analysis. We map the CIM on a survivability model to enable assessing design options with respect to the achieved survivability improvement. We demonstrate our approach by applying the mapping transformation in a case study based on a real distribution circuit. We conclude by evaluating the survivability impact of three investment options.
[26] Daniel S. Menasché, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Alberto Avritzer, Sindhu Suresh, Kishor Trivedi, Raymond A. Marie, Lucia Happe, and Anne Koziolek. Survivability analysis of power distribution in smart grids with active and reactive power modeling. In SIGMETRICS Performance Evaluation Review, Martin Arlitt, Niklas Carlsson, and Nidhi Hegde, editors, 2012, volume 40, pages 53-57. ACM, New York, NY, USA. 2012, Special issue on the 2012 GreenMetrics workshop. [ bib | DOI | http | .pdf ]
[27] Daniel Dominguez Gouvêa, Cyro Muniz, Gilson Pinto, Alberto Avritzer, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Luca Berardinelli, Julius C. B. Leite, Daniel Mossé, Yuanfang Cai, Mike Dalton, Lucia Kapova, and Anne Koziolek. Experience building non-functional requirement models of a complex industrial architecture. In Proceedings of the Second Joint WOSP/SIPEW International Conference on Performance Engineering (ICPE 2011), Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors, Karlsruhe, Germany, 2011, pages 43-54. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf ]
[28] Anne Koziolek, Heiko Koziolek, and Ralf Reussner. PerOpteryx: automated application of tactics in multi-objective software architecture optimization. In Joint proceedings of the Seventh International ACM SIGSOFT Conference on the Quality of Software Architectures and the 2nd ACM SIGSOFT International Symposium on Architecting Critical Systems (QoSA-ISARCS 2011), Ivica Crnkovic, Judith A. Stafford, Dorina C. Petriu, Jens Happe, and Paola Inverardi, editors, Boulder, Colorado, USA, 2011, pages 33-42. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Designing software architectures that exhibit a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. In current practice, software architects try to find good solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach guided by architectural tactics to search the design space for good solutions. Our approach applies multi-objective evolutionary optimization to software architectures modelled with the Palladio Component Model. Software architects can then make well-informed trade-off decisions and choose the best architecture for their situation. To validate our approach, we applied it to the architecture models of two systems, a business reporting system and an industrial control system from ABB. The approach was able to find meaningful trade-offs leading to significant performance improvements or costs savings. The novel use of tactics decreased the time needed to find good solutions by up to 80%.
[29] Anne Koziolek, Qais Noorshams, and Ralf Reussner. Focussing multi-objective software architecture optimization using quality of service bounds. In Models in Software Engineering, Workshops and Symposia at MODELS 2010, Oslo, Norway, October 3-8, 2010, Reports and Revised Selected Papers, J. Dingel and A. Solberg, editors, 2011, volume 6627 of Lecture Notes in Computer Science, pages 384-399. Springer-Verlag Berlin Heidelberg. 2011. [ bib | DOI | http | .pdf | Abstract ]
Quantitative prediction of non-functional properties, such as performance, reliability, and costs, of software architectures supports systematic software engineering. Even though there usually is a rough idea of bounds for quality of service, the exact required values may be unclear and subject to trade-offs. Designing architectures that exhibit such a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. Automated approaches search the design space with multi-objective metaheuristics such as evolutionary algorithms. However, as quality prediction for a single architecture is computationally expensive, these approaches are time-consuming. In this work, we enhance an automated improvement approach to take into account bounds for quality of service in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search. We compare two different constraint handling techniques to consider the bounds. To validate our approach, we applied both techniques to an architecture model of a component-based business information system. We compared both techniques to an unbounded search in 4 scenarios. Every scenario was examined with 10 optimization runs, each investigating around 1600 architectural candidates. The results indicate that the integration of quality of service bounds during the optimization process can improve the quality of the solutions found; however, the effect depends on the scenario, i.e. the problem and the quality requirements. The best results were achieved for cost requirements: the approach was able to decrease the time needed to find good solutions in the interesting regions of the objective space by 25% on average.
[30] Anne Koziolek and Ralf Reussner. Towards a generic quality optimisation framework for component-based system models. In Proceedings of the 14th International ACM SIGSOFT Symposium on Component-Based Software Engineering, Ivica Crnkovic, Judith A. Stafford, Antonia Bertolino, and Kendra M. L. Cooper, editors, Boulder, Colorado, USA, 2011, CBSE '11, pages 103-108. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Designing component-based systems (CBS) that exhibit a good trade-off between multiple quality criteria is hard. Even after functional design, many remaining degrees of freedom of different types (e.g. component allocation, component selection, server configuration) in the CBS span a large, discontinuous design space. Automated approaches have been proposed to optimise CBS models, but they only consider a limited set of degrees of freedom, e.g. they only optimise the selection of components without considering the allocation, or vice versa. We propose a flexible and extensible formulation of the design space for optimising any CBS model for a number of quality properties and an arbitrary number of degrees of freedom. With this design space formulation, a generic quality optimisation framework that is independent of the used CBS metamodel can apply multi-objective metaheuristic optimisation such as evolutionary algorithms.
[31] Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek. An industrial case study on quality impact prediction for evolving service-oriented software. In Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011), Software Engineering in Practice Track, Richard N. Taylor, Harald Gall, and Nenad Medvidovic, editors, Waikiki, Honolulu, HI, USA, 2011, pages 776-785. ACM, New York, NY, USA. 2011, Acceptance Rate: 18% (18/100). [ bib | DOI | http | Abstract ]
Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.
[32] Catia Trubiani and Anne Koziolek. Detection and solution of software performance antipatterns in Palladio architectural models. In Proceedings of the Second Joint WOSP/SIPEW International Conference on Performance Engineering, Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors, Karlsruhe, Germany, 2011, ICPE '11, pages 19-30. ACM, New York, NY, USA. 2011, ICPE Best Paper Award. [ bib | DOI | http | .pdf | Abstract ]
Antipatterns are conceptually similar to patterns in that they document recurring solutions to common design problems. Performance antipatterns document, from a performance perspective, common mistakes made during software development as well as their solutions. The definition of performance antipatterns concerns software properties that can include static, dynamic, and deployment aspects. Currently, such knowledge is only used by domain experts; the problem of automatically detecting and solving antipatterns within an architectural model has not yet been investigated. In this paper we present an approach to automatically detect and solve software performance antipatterns within Palladio architectural models: the detection of an antipattern provides software performance feedback to designers, since it suggests the architectural alternatives that actually allow overcoming specific performance problems. We implemented the approach, and a case study is presented to demonstrate its validity. The performance of the system under study was improved by 50% by applying the antipatterns' solutions.
[33] Vittorio Cortellessa, Anne Martens, Ralf Reussner, and Catia Trubiani. A process to effectively identify guilty performance antipatterns. In Fundamental Approaches to Software Engineering, 13th International Conference, FASE 2010, David Rosenblum and Gabriele Taentzer, editors, Paphos, Cyprus, 2010, pages 368-382. Springer-Verlag Berlin Heidelberg. 2010. [ bib | DOI | http | .pdf | Abstract ]
The problem of interpreting the results of software performance analysis is very critical. Software developers expect feedback in terms of architectural design alternatives (e.g., split a software component in two components and re-deploy one of them), whereas the results of performance analysis are either pure numbers (e.g. mean values) or functions (e.g. probability distributions). Support for the interpretation of such results, which would help to fill the gap between numbers/functions and software alternatives, is still lacking. Performance antipatterns can play a key role in the search for performance problems and in the formulation of their solutions. In this paper we tackle the problem of identifying, among a set of detected performance antipatterns, the ones that are the real causes of problems (i.e. the guilty ones). To this goal, we introduce a process to elaborate the performance analysis results and to score performance requirements, model entities and performance antipatterns. The cross-observation of these scores allows us to classify the level of guiltiness of each antipattern. An example modeled in Palladio is provided to demonstrate the validity of our approach by comparing the performance improvements obtained after the removal of differently scored antipatterns.
[34] Lucia Kapova, Barbora Zimmerova, Anne Martens, Jens Happe, and Ralf H. Reussner. State dependence in performance evaluation of component-based software systems. In Proceedings of the 1st Joint WOSP/SIPEW International Conference on Performance Engineering (WOSP/SIPEW '10), San Jose, California, USA, 2010, pages 37-48. ACM, New York, NY, USA. 2010. [ bib | DOI | .pdf | Abstract ]
Performance prediction and measurement approaches for component-based software systems help software architects to evaluate their systems based on component performance specifications created by component developers. Integrating classical performance models such as queueing networks, stochastic Petri nets, or stochastic process algebras, these approaches additionally exploit the benefits of component-based software engineering, such as reuse and division of work. Although researchers have proposed many approaches in this direction during the last decade, none of them has attained widespread industrial use. On this basis, we have conducted a comprehensive state-of-the-art survey of more than 20 of these approaches assessing their applicability. We classified the approaches according to the expressiveness of their component performance modelling languages. Our survey helps practitioners to select an appropriate approach and scientists to identify interesting topics for future research.
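For readers unfamiliar with the classical models mentioned above, the simplest of them, an M/M/1 queue, already yields closed-form predictions: with arrival rate λ and service rate μ, the mean response time is R = 1/(μ − λ). The helper below is our own minimal illustration, not code from the paper.

    # Minimal M/M/1 example of the classical queueing models referenced above.
    def mm1_response_time(arrival_rate, service_rate):
        # Mean response time R = 1 / (mu - lambda); valid only for lambda < mu.
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrival rate must be below service rate")
        return 1.0 / (service_rate - arrival_rate)

    print(mm1_response_time(arrival_rate=8.0, service_rate=10.0))  # 0.5 time units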
[35] Anne Martens, Danilo Ardagna, Heiko Koziolek, Raffaela Mirandola, and Ralf Reussner. A hybrid approach for multi-attribute QoS optimisation in component based software systems. In Research into Practice - Reality and Gaps (Proceedings of the 6th International Conference on the Quality of Software Architectures, QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, 2010, volume 6093 of Lecture Notes in Computer Science, pages 84-101. Springer-Verlag Berlin Heidelberg. 2010. [ bib | DOI | .pdf | .pdf | Abstract ]
Multiple, often conflicting quality of service (QoS) requirements arise when evaluating design decisions and selecting design alternatives of complex component-based software systems. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization where the objectives represent different quality attributes. The aim of these techniques is to devise a set of solutions, each of which assures a trade-off between the conflicting qualities. To automate this task, this paper proposes a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify a significant set of design alternatives, from which an architecture that best fits the different quality objectives can be selected. The proposed approach can lead both to a reduction of development costs and to an improvement of the quality of the final system. We demonstrate the use of this approach on a simple case study.
[36] Anne Martens, Heiko Koziolek, Steffen Becker, and Ralf H. Reussner. Automatically improve software models for performance, reliability and cost using genetic algorithms. In Proceedings of the first joint WOSP/SIPEW international conference on Performance engineering, Alan Adamson, Andre B. Bondi, Carlos Juiz, and Mark S. Squillante, editors, San Jose, California, USA, 2010, WOSP/SIPEW '10, pages 105-116. ACM, New York, NY, USA. 2010. [ bib | DOI | slides | http | .pdf | Abstract ]
Quantitative prediction of quality properties (i.e. extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.
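The core mechanism of such an approach, a multi-criteria evolutionary search over design candidates, can be sketched roughly as follows. The candidate encoding (per-server speed levels) and the evaluation functions are placeholders for illustration; the paper's actual implementation operates on Palladio Component Model instances and real prediction techniques.

    # Rough sketch of a multi-objective evolutionary search; encoding and
    # evaluators are stand-ins, not the paper's PCM-based implementation.
    import random

    def evaluate(candidate):
        response_time = sum(10.0 / level for level in candidate)  # faster servers help...
        cost = sum(5.0 * level for level in candidate)            # ...but cost more
        return (response_time, cost)  # both objectives are minimized

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(evaluated):
        return [(c, f) for c, f in evaluated
                if not any(dominates(g, f) for _, g in evaluated if g is not f)]

    population = [[random.randint(1, 4) for _ in range(3)] for _ in range(20)]
    for _ in range(30):  # generations: mutate, evaluate, keep non-dominated first
        offspring = [[min(4, max(1, g + random.choice([-1, 0, 1]))) for g in p]
                     for p in population]
        evaluated = [(c, evaluate(c)) for c in population + offspring]
        front = pareto_front(evaluated)
        population = ([c for c, _ in front] + [c for c, _ in evaluated])[:20]
    print(len(front), "non-dominated design candidates")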
[37] Qais Noorshams, Anne Martens, and Ralf Reussner. Using quality of service bounds for effective multi-objective software architecture optimization. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS '10), Oslo, Norway, October 4, 2010, 2010, pages 1:1-1:6. ACM, New York, NY, USA. 2010. [ bib | DOI | http | .pdf ]
[38] Klaus Krogmann, Christian M. Schweda, Sabine Buckl, Michael Kuperberg, Anne Martens, and Florian Matthes. Improved Feedback for Architectural Performance Prediction using Software Cartography Visualizations. In Architectures for Adaptive Systems (Proceedings of QoSA 2009), Raffaela Mirandola, Ian Gorton, and Christine Hofmeister, editors, 2009, volume 5581 of Lecture Notes in Computer Science, pages 52-69. Springer. 2009, Best Paper Award. [ bib | DOI | http | Abstract ]
Software performance engineering provides techniques to analyze and predict the performance (e.g., response time or resource utilization) of software systems to avoid implementations with insufficient performance. These techniques operate on models of software, often at an architectural level, to enable early, design-time predictions for evaluating design alternatives. Current software performance engineering approaches allow the prediction of performance at design time, but often provide cryptic results (e.g., lengths of queues). These prediction results can hardly be mapped back to the software architecture by humans, making it hard to derive the right design decisions. In this paper, we integrate software cartography (a map technique) with software performance engineering to overcome the limited interpretability of raw performance prediction results. Our approach is based on model transformations and a general software visualization approach. It provides an intuitive mapping of prediction results to the software architecture, which simplifies design decisions. We successfully evaluated our approach in a quasi-experiment involving 41 participants by comparing the correctness of performance-improving design decisions and participants' time effort using our novel approach to an existing software performance visualization.
[39] Anne Martens, Franz Brosch, and Ralf Reussner. Optimising multiple quality criteria of service-oriented software architectures. In Proceedings of the 1st international workshop on Quality of service-oriented software systems (QUASOSS), 2009, pages 25-32. ACM, New York, NY, USA. 2009. [ bib | DOI | .pdf | Abstract ]
Quantitative prediction of quality criteria (i.e. extra-functional properties such as performance, reliability, and cost) of service-oriented architectures supports a systematic software engineering approach. However, various degrees of freedom in building a software architecture span a large, discontinuous design space. Currently, solutions with a good trade-off between multiple quality criteria have to be found manually. We propose an automated approach to search the design space by modifying the architectural models, to improve the architecture with respect to multiple quality criteria, and to find optimal architectural models. The optimal architectural models found can be used as input for trade-off analyses and thus allow systematic engineering of high-quality software architectures. This approach eases the design of high-quality component-based software systems for the software architect and thus saves cost and effort. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model (PCM). Currently, the method supports quantitative performance and reliability prediction, but it can be extended to other quality properties such as cost as well.
[40] Anne Martens and Heiko Koziolek. Automatic, model-based software performance improvement for component-based software designs. In Proceedings of the Sixth International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA 2009), 2009, volume 253(1) of Electronic Notes in Theoretical Computer Science, pages 77 - 93. Elsevier. 2009. [ bib | DOI | .pdf | Abstract ]
Formal performance prediction methods, based on queueing network models, allow evaluating software architectural designs for performance. Existing methods provide prediction results such as response times and throughputs, but do not guide the software architect on how to improve the design. We propose a novel approach to optimise the expected performance of component-based software designs by automatically generating and evaluating design alternatives. The design space spanned by different design options (e.g. available components and configuration options) is systematically explored using metaheuristic search techniques and performance-domain heuristics. The gap between applying formal performance predictions and actually improving the design of a system can thus be closed. This paper presents a formal description and a prototypical implementation of our approach with a proof-of-concept case study.
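As a toy illustration of combining metaheuristic search with a performance-domain heuristic, consider this hypothetical hill climber that preferentially speeds up the predicted bottleneck resource. The resource names, demand values, and cost model are all invented; the paper's approach explores a much richer design space.

    # Hypothetical hill-climbing sketch: candidate designs are generated by a
    # performance-domain heuristic (speed up the predicted bottleneck first).
    DEMANDS = {"DB": 4.0, "AppServer": 2.0, "Web": 1.0}  # stand-in resource demands

    def response_time(design):
        # design maps each resource to its speedup factor
        return sum(d / design[r] for r, d in DEMANDS.items())

    design = {r: 1.0 for r in DEMANDS}
    for _ in range(5):
        bottleneck = max(DEMANDS, key=lambda r: DEMANDS[r] / design[r])
        candidate = dict(design, **{bottleneck: design[bottleneck] * 2.0})
        if response_time(candidate) < response_time(design):
            design = candidate  # accept the improving neighbour
    print(design, round(response_time(design), 2))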
[41] Anne Martens, Steffen Becker, Heiko Koziolek, and Ralf Reussner. An empirical investigation of the applicability of a component-based performance prediction method. In Proceedings of the 5th European Performance Engineering Workshop (EPEW'08), Palma de Mallorca, Spain, Nigel Thomas and Carlos Juiz, editors, 2008, volume 5261 of Lecture Notes in Computer Science, pages 17-31. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Component-based software performance engineering (CBSPE) methods shall enable software architects to assess the expected response times, throughputs, and resource utilization of their systems already during design. This avoids the violation of performance requirements. Existing approaches for CBSPE either lack tool support or rely on prototypical tools, which have only been applied by their authors. Therefore, the industrial applicability of these methods is unknown. To this end, we have conducted a controlled experiment involving 19 computer science students, who analysed the performance of two component-based designs using our Palladio performance prediction approach, as an example of a CBSPE method. Our study is the first of its type in this area and shall help mature CBSPE towards industrial applicability. In this paper, we report on results concerning the prediction accuracy achieved by the students and list several lessons learned, which are also relevant for methods other than Palladio.
[42] Anne Martens, Steffen Becker, Heiko Koziolek, and Ralf Reussner. An empirical investigation of the effort of creating reusable models for performance prediction. In Proceedings of the 11th International Symposium on Component-Based Software Engineering (CBSE'08), Karlsruhe, Germany, 2008, volume 5282 of Lecture Notes in Computer Science, pages 16-31. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that the development of reusable models requires more effort, this additional effort has not yet been quantified or analysed systematically. To study the effort, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems applying an established, monolithic method (Software Performance Engineering) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified if they are reused at least once.
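The reuse argument can be made concrete with a small break-even calculation. It uses the 1.25 effort ratio reported above, but the assumption that a reused Palladio model needs no further modelling effort (while SPE needs a fresh throw-away model per prediction) is our simplification for illustration.

    # Break-even illustration for the reported 1.25x effort ratio.
    SPE_EFFORT = 1.0        # effort per prediction with throw-away models
    PALLADIO_EFFORT = 1.25  # one-time effort for a reusable model (reuse assumed free)

    for uses in range(1, 4):
        spe_total = SPE_EFFORT * uses
        verdict = "pays off" if PALLADIO_EFFORT <= spe_total else "costs more"
        print(f"{uses} prediction(s): Palladio {PALLADIO_EFFORT:.2f}"
              f" vs SPE {spe_total:.2f} -> reusable model {verdict}")
    # With two uses, i.e. a single reuse, the reusable model already pays off.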
[43] Anne Martens and Heiko Koziolek. Performance-oriented design space exploration. In Proceedings of the Thirteenth International Workshop on Component-Oriented Programming (WCOP'08), Karlsruhe, Germany, 2008, Interner Bericht 2008-12, Universität Karlsruhe, Fakultät für Informatik, pages 25-32. [ bib | .pdf | Abstract ]
Architectural models of component-based software systems are evaluated for functional properties and/or extra-functional properties (e.g. by doing performance predictions). However, after getting the results of the evaluations and recognising that requirements are not met, most existing approaches leave the software architect alone in finding new alternatives to her current design (e.g. by changing the selection of components, the configuration of components and containers, the sizing). We propose a novel approach to automatically generate and assess performance-improving design alternatives for component-based software systems based on performance analyses of the software architecture. First, the design space spanned by different design options (e.g. available components, configuration options) is systematically explored using metaheuristic search techniques. Second, new architecture candidates are generated based on detecting anti-patterns in the initial architecture. Using this approach, the design of a high-quality component-based software system is eased for the software architect. First, she needs less manual effort to find good design alternatives. Second, good design alternatives can be uncovered that the software architect herself would have overlooked.

Books, Edited Proceedings, Book Chapters and Invited Talk Abstracts (not peer-reviewed)

[1] Lukas Esterle, Kirstie L. Bellman, Steffen Becker, Anne Koziolek, Christopher Landauer, and Peter Lewis. Assessing Self-awareness, pages 465-481. Springer International Publishing, Cham, 2017. [ bib | DOI | http ]
[2] Ralf H. Reussner, Steffen Becker, Jens Happe, Robert Heinrich, Anne Koziolek, Heiko Koziolek, Max Kramer, and Klaus Krogmann. Modeling and Simulating Software Architectures - The Palladio Approach. MIT Press, Cambridge, MA, October 2016. [ bib | http | Abstract ]
Too often, software designers lack an understanding of the effect of design decisions on such quality attributes as performance and reliability. This necessitates costly trial-and-error testing cycles, delaying or complicating rollout. This book presents a new, quantitative architecture simulation approach to software design, which allows software engineers to model quality of service in early design stages. It presents the first simulator for software architectures, Palladio, and shows students and professionals how to model reusable, parametrized components and configured, deployed systems in order to analyze service attributes. The text details the key concepts of Palladio's domain-specific modeling language for software architecture quality and presents the corresponding development stage. It describes how quality information can be used to calibrate architecture models from which detailed simulation models are automatically derived for quality predictions. Readers will learn how to approach systematically questions about scalability, hardware resources, and efficiency. The text features a running example to illustrate tasks and methods as well as three case studies from industry. Each chapter ends with exercises, suggestions for further reading, and "takeaways" that summarize the key points of the chapter. The simulator can be downloaded from a companion website, which offers additional material. The book can be used in graduate courses on software architecture, quality engineering, or performance engineering. It will also be an essential resource for software architects and software engineers and for practitioners who want to apply Palladio in industrial settings.
[3] Diwakar Krishnamurthy, Anne Koziolek, and Nidhi Hegde, editors. SIGMETRICS Performance Evaluation Review, Special Issue on Challenges in Software Performance, volume 43(4). ACM, New York, NY, USA, March 2016. [ bib | http ]
[4] Alberto Avritzer, Lucia Happe, Anne Koziolek, Daniel Sadoc Menasche, Sindhu Suresh, and Jose Yallouz. Scalable Assessment and Optimization of Power Distribution Automation Networks, pages 321-340. Springer International Publishing, Cham, Switzerland, 2016. [ bib | DOI | http ]
[5] Anne Koziolek. Interplay of design time optimization and run time optimization (talk abstract). In Model-driven Algorithms and Architectures for Self-Aware Computing Systems (Dagstuhl Seminar 15041), Dagstuhl Reports, Samuel Kounev, Xiaoyun Zhu, Jeffrey O. Kephart, and Marta Kwiatkowska, editors, volume 5, issue 1, page 183. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl, Germany, 2015. [ bib | DOI | http ]
[6] Lucia Happe and Anne Koziolek. A common analysis framework for smart distribution networks applied to security and survivability analysis (talk abstract). In Randomized Timed and Hybrid Models for Critical Infrastructures (Dagstuhl Seminar 14031), Dagstuhl Reports, Erika Ábrahám, Alberto Avritzer, Anne Remke, and William H. Sanders, editors, volume 4, issue 1, pages 45-46. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl, Germany, 2014. [ bib | DOI | http ]
[7] Anne Koziolek, Robert L. Nord, and Philippe Kruchten, editors. QoSA '13: Proceedings of the 9th International ACM Sigsoft Conference on Quality of Software Architectures. ACM, New York, NY, USA, 2013. [ bib | http ]
[8] Anne Koziolek. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes, volume 7 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, Karlsruhe, 2013. [ bib | DOI | http | http | Abstract ]
Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.
[9] Ralf Reussner, Steffen Becker, Anne Koziolek, and Heiko Koziolek. Perspectives on the Future of Software Engineering, chapter An Empirical Investigation of the Component-Based Performance Prediction Method Palladio, pages 191-207. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[10] Norbert Seyff and Anne Koziolek, editors. Modelling and Quality in Requirements Engineering: Essays Dedicated to Martin Glinz on the Occasion of His 60th Birthday. Monsenstein and Vannerdat, Münster, Germany, 2012. [ bib | http | Abstract ]
“Modeling and Quality in Requirements Engineering” is the Festschrift dedicated to Martin Glinz on the occasion of his 60th birthday. Colleagues and friends have sent contributions to honor his achievements in the field of Software and Requirements Engineering. The contributions address specific topics in Martin's main research areas of modeling and quality in requirements engineering. Examples include risk-driven requirements engineering, non-functional requirements and lightweight requirements modeling. Furthermore, they cover related topics such as quality of business processes, SOA, process modeling and testing. Reminiscences and congratulations from fellow researchers and friends conclude the Festschrift.

Other, not peer-reviewed publications

[1] Matthias Galster, Mehdi Mirakhorli, and Anne Koziolek. Twin peaks goes agile. SIGSOFT Softw. Eng. Notes, 40(5):47-49, 2015, ACM, New York, NY, USA. [ bib | DOI | http | .pdf ]

Technical Reports

[1] Andreas Brunnert, André van Hoorn, Felix Willnecker, Alexandru Danciu, Wilhelm Hasselbring, Christoph Heger, Nikolas Roman Herbst, Pooyan Jamshidi, Reiner Jung, Jóakim von Kistowski, Anne Koziolek, Johannes Kroß, Simon Spinner, Christian Vögele, Jürgen Walter, and Alexander Wert. Performance-oriented DevOps: A research agenda. Technical Report SPEC-RG-2015-01, SPEC Research Group - DevOps Performance Working Group, Standard Performance Evaluation Corporation (SPEC), 2015. [ bib | http ]
[2] Christian Stier, Henning Groenda, and Anne Koziolek. Towards Modeling and Analysis of Power Consumption of Self-Adaptive Software Systems in Palladio. Technical report, University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, November 2014. [ bib | slides | .pdf ]
[3] Ralf Reussner, Steffen Becker, Erik Burger, Jens Happe, Michael Hauck, Anne Koziolek, Heiko Koziolek, Klaus Krogmann, and Michael Kuperberg. The Palladio Component Model. Technical report, KIT, Fakultät für Informatik, Karlsruhe, 2011. [ bib | http | Abstract ]
This report introduces the Palladio Component Model (PCM), a novel software component model for business information systems, which is specifically tuned to enable model-driven quality-of-service (QoS, i.e., performance and reliability) predictions. The PCM's goal is to assess the expected response times, throughput, and resource utilization of component-based software architectures during early development stages. This shall avoid costly redesigns, which might occur after a poorly designed architecture has been implemented. Software architects should be enabled to analyse different architectural design alternatives and to support their design decisions with quantitative results from performance or reliability analysis tools.
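To give a flavour of the kind of prediction such a component model enables, here is a heavily simplified, hypothetical sketch in which components declare per-request resource demands and the architecture derives CPU utilization. This is not PCM's actual metamodel or API.

    # Heavily simplified, hypothetical sketch of component-based performance
    # prediction; not PCM's actual metamodel or tooling.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        cpu_demand_ms: float  # CPU demand per request

    def cpu_utilization(components, requests_per_sec, cpu_speed_factor=1.0):
        demand_s = sum(c.cpu_demand_ms for c in components) / 1000.0 / cpu_speed_factor
        return requests_per_sec * demand_s

    arch = [Component("Frontend", 2.0), Component("Business", 5.0), Component("DB", 8.0)]
    print(f"CPU utilization at 40 req/s: {cpu_utilization(arch, 40.0):.0%}")  # 60%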
[4] Franz Brosch, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Pierre Parrend, Ralf Reussner, Johannes Stammel, and Emre Taspolatoglu. Software-Industrialisierung. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2009. Interner Bericht. [ bib | http | Abstract ]
The industrialization of software development is currently a hotly debated topic. It is mainly about increasing efficiency by raising the degree of standardization and automation and by increasing the division of labour. This affects both the architectures underlying software systems and the development processes; service-oriented architectures, for instance, are an example of increased standardization within software systems. One must bear in mind that the software industry differs from classical manufacturing industries in that software is an immaterial product that can be duplicated arbitrarily often without high production costs. Nevertheless, many insights from the classical industries carry over to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalization of software development and software design. While classical software development is poorly structured and meets heightened requirements neither for reproducibility nor for quality assurance, software development is undergoing a transformation in the course of industrialization. This includes division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, as a consequence, the creation of products with guaranteeable properties. The topics of the seminar included, among others: * component-based software architectures * model-driven software development: concepts and technologies * industrial software development processes and their assessment. The seminar was organized like a scientific conference: submissions were evaluated in a two-stage peer-review process, first by fellow students and then by the supervisors. The articles were presented in several sessions, as at a conference. The best contributions received two best paper awards, which went to Tom Beyer for his paper "Realoptionen für Entscheidungen in der Software-Entwicklung" and to Philipp Meier for his paper "Assessment Methods for Software Product Lines". The participants' talks were complemented by two invited talks: Collin Rogowski of 1&1 Internet AG presented the agile software development process of the mail product GMX.COM, and Heiko Koziolek, Wolfgang Mahnke, and Michaela Saeftel of ABB spoke about software product line engineering using the robotics applications developed at ABB.
[5] Franz Brosch, Thomas Goldschmidt, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Christoph Rathfelder, Ralf Reussner, and Johannes Stammel. Software-Industrialisierung. Interner Bericht, Universität Karlsruhe, Fakultät für Informatik, Institut für Programmstrukturen und Datenorganisation, Karlsruhe, 2008. [ bib | http | Abstract ]
The industrialization of software development is currently a hotly debated topic. It is mainly about increasing efficiency by raising the degree of standardization and automation and by increasing the division of labour. This affects both the architectures underlying software systems and the development processes; service-oriented architectures, for instance, are an example of increased standardization within software systems. One must bear in mind that the software industry differs from classical manufacturing industries in that software is an immaterial product that can be duplicated arbitrarily often without high production costs. Nevertheless, many insights from the classical industries carry over to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalization of software development and software design. While classical software development is poorly structured and meets heightened requirements neither for reproducibility nor for quality assurance, software development is undergoing a transformation in the course of industrialization. This includes division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, as a consequence, the creation of products with guaranteeable properties. The topics of the seminar included, among others: * software architectures * component-based software development * model-driven development * consideration of quality properties in development processes. The seminar was organized like a scientific conference: submissions were evaluated in a two-stage peer-review process, first by fellow students and then by the supervisors. The articles were presented in several sessions on two conference days. The best contribution received a best paper award, which went to Benjamin Klatt for his paper "Software Extension Mechanisms"; we warmly congratulate him once again on this outstanding achievement. In addition to the participants' talks, an invited talk was given: Florian Kaltner and Tobias Pohl from the IBM development laboratory kindly provided insights into the development of plugins for Eclipse and into the build environment of the firmware for the zSeries mainframe servers.

Theses

[1] Anne Koziolek. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes. PhD thesis, Institut für Programmstrukturen und Datenorganisation (IPD), Karlsruher Institut für Technologie, Karlsruhe, Germany, 2011. [ bib | http | .pdf ]
[2] Anne Martens. Empirical Validation of the Model-driven Performance Prediction Approach Palladio. Master's thesis, Carl-von-Ossietzky Universität Oldenburg, 2007. Two versions are available via the '.pdf' links: one with appendix and one short version. [ bib | .pdf | .pdf | Abstract ]
To estimate the consequences of design decisions is a crucial element of an engineering discipline. Model-based performance prediction approaches target the estimation of a system's performance at design time. Next to accuracy, the approaches also need to be validated for their applicability to be usable in practice. The applicability of the model-based performance prediction approach Palladio had never been validated before. Previous case studies validating Palladio were concerned with the accuracy of the predictions in comparison to measurements. In this thesis, I empirically validated the applicability of Palladio and, for comparison, of the well-known performance prediction approach SPE. While Palladio has the notion of a component, which leads to reusable prediction models, SPE makes no use of any componentisation of the system to be analysed. For the empirical validation, I conducted an empirical study with 19 computer science students. The study was designed as a controlled experiment to achieve a high validity of the results. The results showed that both approaches were applicable, although both have specific problems. Furthermore, it was found that the duration of conducting a prediction using Palladio was significantly higher than the duration using SPE; however, the influence of potential reuse of the Palladio models was excluded by the experiment design.
[3] Anne Martens. Empirical Validation and Comparison of the Model-Driven Performance Prediction Techniques of CB-SPE and Palladio. Carl-von-Ossietzky Universität Oldenburg, 2005. Individual project thesis (similar to a Bachelor's thesis). [ bib | .pdf | Abstract ]
For the design of component-based systems, it is important to guarantee non-functional attributes before actually composing a system. Performance is usually a crucial property of a software system, for safety or usability reasons. Several approaches to predict performance characteristics of a component-based system in an early stage of development have been introduced in recent times. However, for an engineering discipline, not only the proposal of techniques is needed, but also empirical studies of their applicability. This work empirically compares and evaluates two approaches to early prediction of the performance of component-based software systems. The empirical study is conducted in the form of a case study, although attempts are made to achieve good generalizability. The results attest to the good applicability of the CB-SPE technique, although some problems occurred. The Palladio technique achieved less favourable results; here, there were problems with the specification of the distribution functions.