
Publications of Michael Hauck

Books/Book Chapters and edited Proceedings

Refereed Journal Articles

[1] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Deriving performance-relevant infrastructure properties through model-based experiments with Ginpex. Software & Systems Modeling, pages 1-21, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
[2] Heiko Koziolek, Bastian Schlich, Steffen Becker, and Michael Hauck. Performance and reliability prediction for evolving service-oriented software systems. Empirical Software Engineering, pages 1-45, 2012, Springer US. [ bib | DOI | http ]
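
To make the experiment-based idea behind [1] above more concrete, the following minimal Java sketch shows what a goal-oriented infrastructure experiment could look like in principle: a template states which property it quantifies and how to measure it, and a runner executes it repeatedly and aggregates the samples. This is purely illustrative and uses invented names (ExperimentTemplate, CpuDemandExperiment, ExperimentRunner); it is not the actual Ginpex metamodel or tooling.

// Illustrative sketch only: a tiny, hypothetical experiment template in the
// spirit of goal-oriented infrastructure experiments. All names are invented
// for illustration and do not come from Ginpex.
import java.util.ArrayList;
import java.util.List;

interface ExperimentTemplate {
    String goal();                      // the infrastructure property to quantify
    List<Double> run(int repetitions);  // measured samples, one per repetition
}

/** Measures how long a fixed amount of CPU work takes on the target machine. */
class CpuDemandExperiment implements ExperimentTemplate {
    private final long iterations;

    CpuDemandExperiment(long iterations) { this.iterations = iterations; }

    @Override public String goal() { return "CPU processing rate"; }

    @Override public List<Double> run(int repetitions) {
        List<Double> samplesMs = new ArrayList<>();
        for (int r = 0; r < repetitions; r++) {
            long start = System.nanoTime();
            double x = 0;
            for (long i = 0; i < iterations; i++) {
                x += Math.sqrt(i);               // fixed, deterministic CPU demand
            }
            long end = System.nanoTime();
            if (x < 0) System.out.println(x);    // keep the loop from being optimised away
            samplesMs.add((end - start) / 1e6);
        }
        return samplesMs;
    }
}

public class ExperimentRunner {
    public static void main(String[] args) {
        ExperimentTemplate exp = new CpuDemandExperiment(10_000_000L);
        List<Double> samples = exp.run(5);
        double mean = samples.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        System.out.printf("%s: mean %.2f ms over %d runs%n", exp.goal(), mean, samples.size());
    }
}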

Refereed Conference/Workshop Papers

[1] Nikolaus Huber, Marcel von Quast, Fabian Brosig, Michael Hauck, and Samuel Kounev. A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead. In Cloud Computing and Services Science, Ivan Ivanov, Marten van Sinderen, and Boris Shishkov, editors, Service Science: Research and Innovations in the Service Economy, pages 353-370. Springer, New York, 2012. [ bib | DOI | http | .pdf ]
[2] Max E. Kramer, Zoya Durdik, Michael Hauck, Jörg Henss, Martin Küster, Philipp Merkle, and Andreas Rentschler. Extending the Palladio Component Model using Profiles and Stereotypes. In Palladio Days 2012 Proceedings (appeared as technical report), Steffen Becker, Jens Happe, Anne Koziolek, and Ralf Reussner, editors, 2012, Karlsruhe Reports in Informatics; 2012,21, pages 7-15. KIT, Faculty of Informatics, Karlsruhe. 2012. [ bib | http | http | Abstract ]
Extending metamodels to account for new concerns has a major influence on existing instances, transformations and tools. To minimize the impact on existing artefacts, various techniques for extending a metamodel are available, for example, decorators and annotations. The Palladio Component Model (PCM) is a metamodel for predicting quality of component-based software architectures. It is continuously extended in order to be applicable in originally unexpected domains and settings. Nevertheless, a common extension approach for the PCM and for the tools built on top of it is still missing. In this paper, we propose a lightweight extension approach for the PCM based on profiles and stereotypes to close this gap. Our approach is going to reduce the development effort for new PCM extensions by handling both the definition and use of extensions in a generic way. Due to a strict separation of the PCM, its extension domains, and the connections in between, the approach also increases the interoperability of PCM extensions.
[3] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Ginpex: Deriving Performance-relevant Infrastructure Properties Through Goal-oriented Experiments. In Proceedings of the 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24, 2011, pages 53-62. ACM, New York, NY, USA. June 2011. [ bib | DOI | www: | .pdf ]
[4] Nikolaus Huber, Marcel von Quast, Michael Hauck, and Samuel Kounev. Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), Noordwijkerhout, The Netherlands, May 7-9, 2011, pages 563-573. SciTePress. May 2011, Acceptance Rate: 18/164 = 10.9%, Best Paper Award. [ bib | http | .pdf | Abstract ]
Due to trends like Cloud Computing and Green IT, virtualization technologies are gaining increasing importance. They promise energy and cost savings by sharing physical resources, thus making resource usage more efficient. However, resource sharing and other factors have direct effects on system performance, which are not yet well-understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private Clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two different types of hypervisor architectures. The target is to predict the performance overhead for executing services on virtualized platforms.
[5] Michael Hauck, Jens Happe, and Ralf Reussner. Towards Performance Prediction for Cloud Computing Environments Based on Goal-oriented Measurements. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), 2011, pages 616-622. SciTePress. 2011. [ bib | http | .pdf ]
[6] Steffen Becker, Michael Hauck, Mircea Trifu, Klaus Krogmann, and Jan Kofron. Reverse Engineering Component Models for Quality Predictions. In Proceedings of the 14th European Conference on Software Maintenance and Reengineering, European Projects Track, 2010, pages 199-202. IEEE. 2010. [ bib | .pdf | Abstract ]
Legacy applications are still widespread. If a need arises to change their deployment or update their functionality, it becomes difficult to estimate the performance impact of such modifications due to the absence of corresponding models. In this paper, we present an extendable integrated environment based on Eclipse, developed in the scope of the Q-ImPrESS project, for reverse engineering of legacy applications (in C/C++/Java). The Q-ImPrESS project aims at modeling quality attributes at an architectural level and allows for choosing the most suitable variant for implementation of a desired modification. The main contributions of the project include i) a high integration of all steps of the entire process into a single tool, a beta version of which has already been successfully tested on a case study, ii) integration of multiple research approaches to performance modeling, and iii) an extendable underlying meta-model for different quality dimensions.
[7] Jens Happe, Henning Groenda, Michael Hauck, and Ralf H. Reussner. A Prediction Model for Software Performance in Symmetric Multiprocessing Environments. In Proceedings of the 2010 7th International Conference on the Quantitative Evaluation of Systems, 2010, QEST '10, pages 59-68. IEEE Computer Society, Washington, DC, USA. 2010. [ bib | DOI | http | .pdf | Abstract ]
The broad introduction of multi-core processors made symmetric multiprocessing (SMP) environments mainstream. The additional cores can significantly increase software performance. However, their actual benefit depends on the operating system scheduler's capabilities, the system's workload, and the software's degree of concurrency. The load distribution on the available processors (or cores) strongly influences response times and throughput of software applications. Hence, understanding the operating system scheduler's influence on performance and scalability is essential for the accurate prediction of software performance (response time, throughput, and resource utilisation). Existing prediction approaches tend to approximate the influence of operating system schedulers by abstract policies such as processor sharing and its more sophisticated extensions. However, these abstractions often fail to accurately capture software performance in SMP environments. In this paper, we present a performance Model for general-purpose Operating System Schedulers (MOSS). It allows analyses of software performance taking the influences of schedulers in SMP environments into account. The model is defined in terms of timed Coloured Petri Nets and predicts the effect of different operating system schedulers (e.g., Windows 7, Vista, Server 2003, and Linux 2.6) on software performance. We validated the prediction accuracy of MOSS in a case study using a business information system. In our experiments, the deviation of predictions and measurements was below 10% in most cases and did not exceed 30%.
[8] Michael Hauck, Jens Happe, and Ralf H. Reussner. Automatic Derivation of Performance Prediction Models for Load-balancing Properties Based on Goal-oriented Measurements. In Proceedings of the 18th IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'10), 2010, pages 361-369. IEEE Computer Society. 2010. [ bib | DOI | http | Abstract ]
In symmetric multiprocessing environments, the performance of a software system heavily depends on the application's parallelism, the scheduling and load-balancing policies of the operating system, and the infrastructure it is running on. The scheduling of tasks can influence the response time of an application by several orders of magnitude. Thus, detailed models of the operating system scheduler are essential for accurate performance predictions. However, building such models for schedulers and including them into performance prediction models involves a lot of effort. For this reason, simplified scheduler models are used for the performance evaluation of business information systems in general. In this work, we present an approach to derive load-balancing properties of general-purpose operating system (GPOS) schedulers automatically. Our approach uses goal-oriented measurements to derive performance models based on observations. Furthermore, the derived performance model is plugged into the Palladio Component Model (PCM), a model-based performance prediction approach. We validated the applicability of the approach and its prediction accuracy in a case study on different operating systems.
[9] Dennis Westermann, Jens Happe, Michael Hauck, and Christian Heupel. The performance cockpit approach: A framework for systematic performance evaluations. In Proceedings of the 36th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2010), 2010, pages 31-38. IEEE Computer Society. 2010. [ bib | .pdf | Abstract ]
Evaluating the performance (timing behavior, throughput, and resource utilization) of a software system becomes increasingly challenging as today's enterprise applications are built on a large basis of existing software (e.g. middleware, legacy applications, and third party services). As the performance of a system is affected by multiple factors on each layer of the system, performance analysts require detailed knowledge about the system under test and have to deal with a huge number of tools for benchmarking, monitoring, and analysis. In practice, performance analysts try to handle the complexity by focusing on certain aspects, tools, or technologies. However, these isolated solutions are inefficient due to limited reuse and knowledge sharing. The Performance Cockpit presented in this paper is a framework that encapsulates knowledge about performance engineering, the system under test, and analyses in a single application by providing a flexible, plug-in based architecture. We demonstrate the value of the framework by means of two different case studies.
[10] Michael Hauck, Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Modelling Layered Component Execution Environments for Performance Prediction. In Proceedings of the 12th International Symposium on Component Based Software Engineering (CBSE 2009), 2009, number 5582 in LNCS, pages 191-208. Springer. 2009. [ bib | DOI | .html | .pdf | Abstract ]
Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly popular in practice and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools, which allow modelling and predicting state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.
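
As a rough illustration of the layered-execution-environment idea in [10] above, the toy Java sketch below models each layer (JVM, hypervisor, hardware) as a separate element that transforms a resource demand before passing it down, so a single layer can be swapped without rebuilding a monolithic model. All names and overhead factors are invented for illustration and do not come from the paper or the PCM tooling.

// Toy sketch of layered execution environments (not the PCM tooling; all
// names and numbers are invented for illustration).
import java.util.List;

interface ExecutionLayer {
    String name();
    double transformDemand(double demandMs);   // resource demand in milliseconds
}

class ScalingLayer implements ExecutionLayer {
    private final String name;
    private final double overheadFactor;

    ScalingLayer(String name, double overheadFactor) {
        this.name = name;
        this.overheadFactor = overheadFactor;
    }

    @Override public String name() { return name; }

    // A layer such as a JVM or hypervisor inflates the demand it receives.
    @Override public double transformDemand(double demandMs) {
        return demandMs * overheadFactor;
    }
}

public class LayeredEnvironmentDemo {
    // Pushes a component's demand through the layer stack, top to bottom.
    static double effectiveDemand(double demandMs, List<ExecutionLayer> stack) {
        double d = demandMs;
        for (ExecutionLayer layer : stack) {
            d = layer.transformDemand(d);
            System.out.printf("after %-11s %8.2f ms%n", layer.name() + ":", d);
        }
        return d;
    }

    public static void main(String[] args) {
        // Swapping one layer (e.g. a different JVM) changes one entry only,
        // instead of requiring a new monolithic environment model.
        List<ExecutionLayer> stack = List.of(
                new ScalingLayer("JVM", 1.15),
                new ScalingLayer("Hypervisor", 1.30),
                new ScalingLayer("Hardware", 1.00));
        effectiveDemand(100.0, stack);
    }
}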

Technical Reports

[1] Ralf Reussner, Steffen Becker, Erik Burger, Jens Happe, Michael Hauck, Anne Koziolek, Heiko Koziolek, Klaus Krogmann, and Michael Kuperberg. The Palladio Component Model. Technical report, KIT, Faculty of Informatics, Karlsruhe, 2011. [ bib | http | Abstract ]
This report introduces the Palladio Component Model (PCM), a novel software component model for business information systems, which is specifically tuned to enable model-driven quality-of-service (QoS, i.e., performance and reliability) predictions. The PCM's goal is to assess the expected response times, throughput, and resource utilization of component-based software architectures during early development stages. This shall avoid costly redesigns, which might occur after a poorly designed architecture has been implemented. Software architects should be enabled to analyse different architectural design alternatives and to support their design decisions with quantitative results from performance or reliability analysis tools.
[2] Michael Hauck, Matthias Huber, Markus Klems, Samuel Kounev, Jörn Müller-Quade, Alexander Pretschner, Ralf Reussner, and Stefan Tai. Challenges and Opportunities of Cloud Computing - Trade-off Decisions in Cloud Computing Architecture. Technical Report 2010-19, Karlsruhe Institute of Technology, Faculty of Informatics, 2010. [ bib | http ]

Theses

[1] Michael Hauck. Automated Experiments for Deriving Performance-relevant Properties of Software Execution Environments. PhD thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2013. [ bib | http | http | Abstract ]
The software execution environment can play a crucial role when analyzing the performance of a software system. However, detecting execution environment properties and integrating such properties into performance analyses is a manual, error-prone task that requires expert knowledge on the execution environment. In this thesis, a novel approach for detecting performance-relevant properties of the software execution environment is presented. These properties are automatically detected using predefined experiments and integrated into performance prediction tools. Based on a metamodel for experiment specification, the approach is used to design experiments for detecting different CPU, OS scheduling, and virtualization properties. This thesis also includes different case studies which demonstrate the applicability of the approach.
[2] Michael Hauck. Extending Performance-Oriented Resource Modelling in the Palladio Component Model. Diploma thesis, University of Karlsruhe (TH), Germany, February 2009. [ bib | .pdf | Abstract ]
The performance of a software system is strongly influenced by the execution environment the software runs in. In the Palladio Component Model (PCM), a domain-specific language for modelling component-based software systems, the execution environment must be modelled explicitly as it is needed for performance predictions. However, the current version of the PCM offers only rudimentary support for hardware resource modelling: For instance, it is not possible to distinguish between read and write accesses to a hard disk resource. This thesis develops an enhancement of the PCM meta-model that allows for better predictions based on more sophisticated resource models. The enhancement includes the support for accessing resources through explicit interfaces with distinct services and the integration of resource controllers in the meta-model. To support modelling of infrastructure components such as application servers, this thesis introduces the separation of business interfaces and interfaces for accessing resources or the execution environment. Existing PCM tools have been adapted to support the simulation of PCM instances based on the enhanced meta-model. Additionally, the adapted meta-model has been successfully evaluated in two case studies to show that the extended meta-model has no side effects on preexisting predictions and also enables scenarios not supported before, such as the modelling of a Java Virtual Machine which processes higher-level resource demands.