
Publications of Dennis Westermann

Refereed Conference/Workshop Papers

[1] Christian Weiss, Dennis Westermann, Christoph Heger, and Martin Moser. Systematic performance evaluation based on tailored benchmark applications. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE '13), Prague, Czech Republic, pages 411-420. ACM, New York, NY, USA, 2013. Industrial Track.
[2] Dennis Westermann, Jens Happe, and Roozbeh Farahbod. An experiment specification language for goal-driven, automated performance evaluations. In Proceedings of the ACM Symposium on Applied Computing (SAC 2013), 2013. To appear.
[3] Dennis Westermann, Jens Happe, Rouven Krebs, and Roozbeh Farahbod. Automated inference of goal-oriented performance prediction functions. In Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), Essen, Germany, September 3-7, 2012.
[4] Alexander Wert, Jens Happe, and Dennis Westermann. Integrating software performance curves with the Palladio Component Model. In Proceedings of the Third Joint WOSP/SIPEW International Conference on Performance Engineering, pages 283-286. ACM, 2012.
[5] Dennis Westermann. A generic methodology to derive domain-specific performance feedback for developers. In Proceedings of the 34th International Conference on Software Engineering (ICSE 2012), Doctoral Symposium, Zurich, Switzerland. ACM, New York, NY, USA, 2012.
[6] Dennis Westermann, Rouven Krebs, and Jens Happe. Efficient Experiment Selection in Automated Software Performance Evaluations. In Computer Performance Engineering: Proceedings of the 8th European Performance Engineering Workshop (EPEW 2011), Borrowdale, UK, October 12-13, 2011, pages 325-339. Springer, 2011.
[7] Dennis Westermann and Jens Happe. Performance Cockpit: Systematic Measurements and Analyses. In Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering (ICPE '11), Karlsruhe, Germany. ACM, New York, NY, USA, 2011.
[8] Dennis Westermann and Jens Happe. Towards performance prediction of large enterprise applications based on systematic measurements. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP 2010), Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, volume 2010-14 of Interne Berichte, pages 71-78. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany, June 2010.
Abstract: Understanding the performance characteristics of enterprise applications, such as response time, throughput, and resource utilization, is crucial for satisfying customer expectations and minimizing the costs of application hosting. Enterprise applications are usually based on a large set of existing software (e.g., middleware, legacy applications, and third-party services). Furthermore, they evolve continuously due to changing market requirements and short innovation cycles. Software performance engineering, in its essence, is not directly applicable to such scenarios. Many approaches focus on early lifecycle phases, assuming that a software system is built from scratch and all its details are known. These approaches neglect the influence of already existing middleware, legacy applications, and third-party services. For performance prediction, detailed information about the internal structure of these systems is necessary. However, such information may not be available or accessible due to the complexity of the existing software. In this paper, we propose a combined approach of model-based and measurement-based performance evaluation techniques to handle the complexity of large enterprise applications. We outline open research questions that have to be answered in order to put performance engineering into industrial practice. For validation, we plan to apply our approach to different real-world scenarios that involve current SAP enterprise solutions such as SAP Business ByDesign and the SAP Business Suite.
[9] Jens Happe, Dennis Westermann, Kai Sachs, and Lucia Kapova. Statistical Inference of Software Performance Models for Parametric Performance Completions. In Research into Practice - Reality and Gaps (Proceedings of QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, volume 6093 of Lecture Notes in Computer Science (LNCS), pages 20-35. Springer, 2010.
Abstract: Software performance engineering (SPE) enables software architects to ensure high performance standards for their applications. However, applying SPE in practice is still challenging. Most enterprise applications include a large software basis, such as middleware and legacy systems. In many cases, the software basis is the determining factor of the system's overall timing behavior, throughput, and resource utilization. To capture these influences on the overall system's performance, established performance prediction methods (model-based and analytical) rely on models that describe the performance-relevant aspects of the system under study. Creating such models requires detailed knowledge of the system's structure and behavior that, in most cases, is not available. In this paper, we abstract from the internal structure of the system under study. We focus our efforts on message-oriented middleware (MOM) and analyze the dependency between the MOM's usage and its performance. We use statistical inference to derive these dependencies from observations. For ActiveMQ 5.3, the resulting functions predict the performance with a relative mean square error of 0.1.
[10] Dennis Westermann, Jens Happe, Michael Hauck, and Christian Heupel. The performance cockpit approach: A framework for systematic performance evaluations. In Proceedings of the 36th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2010), pages 31-38. IEEE Computer Society, 2010.
Abstract: Evaluating the performance (timing behavior, throughput, and resource utilization) of a software system becomes more and more challenging as today's enterprise applications are built on a large basis of existing software (e.g., middleware, legacy applications, and third-party services). As the performance of a system is affected by multiple factors on each layer of the system, performance analysts require detailed knowledge about the system under test and have to deal with a huge number of tools for benchmarking, monitoring, and analysis. In practice, performance analysts try to handle this complexity by focusing on certain aspects, tools, or technologies. However, these isolated solutions are inefficient due to limited reuse and knowledge sharing. The Performance Cockpit presented in this paper is a framework that encapsulates knowledge about performance engineering, the system under test, and analyses in a single application by providing a flexible, plug-in based architecture. We demonstrate the value of the framework by means of two different case studies.
[11] Dennis Westermann and Christof Momm. Using software performance curves for dependable and cost-efficient service hosting. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS '10), Oslo, Norway, pages 3:1-3:6. ACM, New York, NY, USA, 2010.
Abstract: The emerging business model of providing software as a service (SaaS) poses a number of challenges for service providers. On the one hand, service providers have to guarantee a certain quality of service (QoS) and ensure that they adhere to these guarantees at runtime. On the other hand, they have to minimize the total cost of ownership (TCO) of their IT landscape in order to offer competitive prices. The performance of a system is a critical attribute that affects QoS as well as TCO. However, the evaluation of performance characteristics is a complex task. Many existing solutions do not provide the accuracy required for offering dependable guarantees. One major reason for this is that the dependencies between the usage profile (provided by the service consumers) and the performance of the actual system are rarely described sufficiently. Software Performance Curves are performance models that are derived through goal-oriented, systematic measurements of the actual software service. In this paper, we describe how Software Performance Curves can be derived by a service provider that hosts a multi-tenant system. Moreover, we illustrate how Software Performance Curves can be used to derive feasible performance guarantees, develop pricing functions, and minimize hardware resources.