
Recent Publications

Handbook of Research on Service-Oriented Systems and Non-Functional Properties: Future Directions, IGI Global, 2012

Further publications can be found in the Karlsruhe Series on Software Design and Quality.

Publications 2016

[1] Erik Burger and Oliver Schneider. Translatability and Translation of Updated Views in ModelJoin. In Theory and Practice of Model Transformations - 9th International Conference, ICMT 2016, Held as Part of STAF 2016, Vienna, Austria, July 2016, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg. July 2016, to appear. [ bib ]
[2] Georg Hinkel and Thomas Goldschmidt. Tool Support for Model Transformations: On Solutions using Internal Languages. In Modellierung 2016, Karlsruhe, Germany, March 2-4, 2016. [ bib | slides | .pdf ]
[3] Georg Hinkel, Max Kramer, Erik Burger, Misha Strittmatter, and Lucia Happe. An Empirical Study on the Perception of Metamodel Quality. In Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development, Rome, Italy, February 19-21, 2016, pages 145-152. [ bib | http | .pdf ]
[4] Misha Strittmatter and Amine Kechaou. The Media Store 3 case study system. Technical Report 2016,1, Faculty of Informatics, Karlsruhe Institute of Technology, February 2016. [ bib | http ]
[5] Robert Heinrich, Kiana Rostami, and Ralf Reussner. The CoCoME platform for collaborative empirical research on information system evolution. Technical Report 2016,2, Karlsruhe Reports in Informatics, Karlsruhe Institute of Technology, February 2016. [ bib | http ]
[6] Michele Ciavotta, Danilo Ardagna, and Anne Koziolek. Palladio optimization suite: QoS optimization for component-based cloud applications. In Proceedings of the 9th EAI International Conference on Performance Evaluation Methodologies and Tools, Berlin, Germany, 2016, VALUETOOLS'15, pages 170-171. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), Brussels, Belgium. 2016. [ bib | DOI | http | .pdf ]
[7] Georg Hinkel, Oliver Denninger, Sebastian Krach, and Henning Groenda. Experiences with Model-driven Engineering in Neurorobotics. In Modelling Foundations and Applications. Springer, 2016. accepted, to appear. [ bib | .pdf | Abstract ]
Model-driven engineering (MDE) has been successfully adopted in domains such as automation or embedded systems. However, in many other domains, MDE is rarely applied. In this paper, we describe our experiences of applying MDE techniques in the domain of neurorobotics - a combination of neuroscience and robotics, studying the embodiment of autonomous neural systems. In particular, we participated in the development of the Neurorobotics Platform (NRP) - an online platform for describing and running neurorobotic experiments by coupling brain and robot simulations. We explain why MDE was chosen and discuss conceptual and technical challenges, such as an inconsistent understanding of models, the focus of the development, and platform barriers.
[8] Georg Hinkel. NMF: A Modeling Framework for the .NET Platform. Technical report, Karlsruhe, 2016. [ bib | http ]
[9] Max E. Kramer and Kirill Rakhman. Automated inversion of attribute mappings in bidirectional model transformations. In Proceedings of the 5th International Workshop on Bidirectional Transformations (Bx 2016), Anthony Anjorin and Jeremy Gibbons, editors, Eindhoven, The Netherlands, 2016, volume 1571 of CEUR Workshop Proceedings, pages 61-76. CEUR-WS.org. 2016. [ bib | http | .pdf ]
[10] Max E. Kramer and Kirill Rakhman. Proofs for the automated inversion of attribute mappings in bidirectional model transformations. Technical report, Karlsruhe Institute of Technology, Department of Informatics, Karlsruhe, 2016. [ bib | DOI | http | http ]
[11] Sebastian Fiss, Max E. Kramer, and Michael Langhammer. Automatically binding variables of invariants to violating elements in an OCL-aligned Xbase-language. In Proceedings of Modellierung 2016, Andreas Oberweis and Ralf Reussner, editors, 2016, volume P-254 of Lecture Notes in Informatics (LNI), pages 189-204. Gesellschaft für Informatik e.V. (GI), Bonn, Germany. 2016. [ bib | Abstract ]
Constraints that have to hold for all models of a modeling language are often specified as invariants using the Object Constraint Language (OCL). If violations of such invariants shall be documented or resolved in a software system, the exact model elements that violate these conditions have to be computed. OCL validation engines provide, however, only a single context element at which a check for a violated invariant originated. Therefore, the computation of elements that caused an invariant violation is often specified in addition to the invariant declaration with redundant information. These redundancies can make it hard to develop and maintain systems that document or resolve invariant violations. In this paper, we present an automated approach and tool for declaring and binding parameters of invariants to violating elements based on boolean invariant expressions that are similar to OCL invariants. The tool computes a transformed invariant that returns violating elements for each iterator variable of the invariant expression that matches an explicitly declared invariant parameter. The approach can be used for OCL invariants and all models of languages conforming to the Meta-Object Facility (MOF) standard. We have evaluated our invariant language and transformation tool by transforming 88 invariants of the Unified Modeling Language (UML).
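
As a purely illustrative aside on the abstract above: the following minimal Java sketch contrasts a Boolean all-elements invariant with the kind of transformed query the paper describes, one that returns the elements violating the condition. It is not the authors' Xbase-based tool; all names in it are hypothetical.

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public final class InvariantBinding {

        // Boolean invariant in the style of an OCL forAll: true iff every
        // element satisfies the condition, with no hint at which ones fail.
        static <T> boolean holds(List<T> elements, Predicate<T> condition) {
            return elements.stream().allMatch(condition);
        }

        // Transformed invariant: instead of a single Boolean, return the
        // elements (bound to the iterator variable) that violate the condition.
        static <T> List<T> violatingElements(List<T> elements, Predicate<T> condition) {
            return elements.stream().filter(condition.negate()).collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<String> names = List.of("Order", "order", "Invoice");
            Predicate<String> startsUpperCase = n -> Character.isUpperCase(n.charAt(0));
            System.out.println(holds(names, startsUpperCase));             // false
            System.out.println(violatingElements(names, startsUpperCase)); // [order]
        }
    }
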
[12] Stephan Seifermann and Henning Groenda. Survey on textual notations for the Unified Modeling Language. In Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2016), 2016, pages 28-39. SciTePress. 2016. [ bib | .pdf ]
[13] Robert Heinrich. Architectural run-time models for performance and privacy analysis in dynamic cloud applications. ACM SIGMETRICS Performance Evaluation Review, 43(4):13-22, 2016, ACM, New York, NY, USA. [ bib | DOI | http ]
[14] Stephan Seifermann, Emre Taspolatoglu, Robert Heinrich, and Ralf Reussner. Challenges in secure software evolution - the role of software architecture. In 3rd Collaborative Workshop on Evolution and Maintenance of Long-Living Software Systems, 2016, Softwaretechnik-Trends. accepted, to appear. [ bib | .pdf ]
[15] Axel Busch and Anne Koziolek. Considering Not-quantified Quality Attributes in an Automated Design Space Exploration. In Proceedings of the 12th International ACM SIGSOFT Conference on the Quality of Software Architectures, Venice, Italy, 2016, QoSA'16, pages 99-108. ACM. 2016. [ bib | DOI | .pdf | Abstract ]
In a software design process, the quality of the resulting software system is highly driven by the quality of its software architecture. In such a process, trade-off decisions must be made between multiple quality attributes, such as performance or security, that are often competing. Several approaches exist to improve software architectures either quantitatively or qualitatively. The first group of approaches requires quantifying every quality attribute to be considered in the design process, while the latter group often consists of fully manual processes. However, time and cost constraints often make it impossible to either quantify all relevant quality attributes or manually evaluate candidate architectures. Our approach to the problem is to quantify the most important quality requirements, combine them with several not-quantified quality attributes, and use them together in an automated design space exploration process. As our basis, we used the PerOpteryx design space exploration approach, which requires quantified measures for its optimization engine, and extended it in order to combine them with not-quantified quality attributes. By this, our approach allows optimizing the design space by considering even quality attributes that cannot be quantified due to cost constraints or lack of quantification methodologies. We applied our approach to two case studies to demonstrate its benefits. We showed how performance can be balanced against not-quantified quality attributes, such as security, using an example derived from an industry case study.
[16] Christian Stier and Anne Koziolek. Considering Transient Effects of Self-Adaptations in Model-Driven Performance Analyses. In Proceedings of the 12th International ACM SIGSOFT Conference on the Quality of Software Architectures, Venice, Italy, 2016, QoSA'16. ACM. 2016, accepted, to appear. [ bib | Abstract ]
Model-driven performance engineering allows software architects to reason on performance characteristics of a software system in early design phases. In recent years, model-driven analysis techniques have been developed to evaluate performance characteristics of self-adaptive software systems. These techniques aim to reason on the ability of a self-adaptive software system to fulfill performance requirements in transient phases. A transient phase is the interval in which the behavior of the system changes, e.g., due to a burst in user requests. However, the effectiveness and efficiency with which a system is able to adapt depends not only on the time when it triggers adaptation actions but also on the time at which they are completed. Executing an adaptation action can cause additional stress on the adapted system. This can further impede the performance of the system in the transient phase. Model-driven analyses of self-adaptive software do not consider these transient effects. This paper outlines an approach for evaluating transient effects in model-driven analyses of self-adaptive software systems. The evaluation applied our approach to a horizontally scaling media hosting application in three experiments. By considering the delay in booting new Virtual Machines (VMs), we were able to improve the accuracy of predicted response times. The second and third experiment demonstrated that the increased accuracy enables an early detection and resolution of design deficiencies of self-adaptive software systems.
[17] Christian Stier and Henning Groenda. Ensuring Model Continuity when Simulating Self-Adaptive Software Systems. In Proceedings of the Symposium on Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems (MSCIAAS), part of the 2016 Spring Simulation Multiconference (SpringSim '16), Pasadena, CA, USA, 2016, MSCIAAS. Society for Computer Simulation International. 2016, accepted, to appear. [ bib | Abstract ]
Self-adaptivity in software systems aims to balance the use of costly resources, i.e. of servers and energy, under given constraints such as Quality of Service (QoS) requirements. Simulation does not require risky testing in running systems and has fewer assumptions and limitations than formal verification when evaluating the effect of self-adaptation mechanisms. Existing simulation frameworks for analyzing self-adaptive software systems require re-implementing algorithms to conform to the abstraction and interfaces of the simulation framework. We present an approach for coupling simulation-based analyses of self-adaptive software systems with self-adaptation mechanisms that eliminates the need to re-implement the mechanisms and ensures model continuity. The evaluation demonstrates the low complexity required when our approach is used to ensure model continuity between simulation and self-adaptation framework. It presents the results of two experiments we performed after coupling the SimuLizar simulation framework and the CACTOS runtime management framework for Cloud platforms. With this coupling, Cloud data center operators benefit from what-if-analyses of self-adaptation mechanisms and software engineers can optimize the QoS of systems on the drawing board without acquiring deep knowledge of simulation internals.
[18] Matthias Budde, Sarah Grebing, Erik Burger, Max E. Kramer, Bernhard Beckert, Michael Beigl, and Ralf Reussner. Praxis der Forschung - Eine Lehrveranstaltung des forschungsnahen Lehrens und Lernens in der Informatik am KIT. Neues Handbuch Hochschullehre, 74(A 3.19), 2016, DUZ Verlags- und Medienhaus GmbH. [ bib | Abstract ]
The new course type Praxis der Forschung ("Research Practice") was introduced in 2012 in the Master's program in Informatics at the Karlsruhe Institute of Technology (KIT). The central concept of the course is research-oriented teaching and learning: in the context of their own research project, students acquire both subject knowledge and methodological competence in scientific work. The concrete design of the course follows the principles of closeness to research and the integrated teaching of methodological skills. In particular, students are also meant to experience that making research results visible and perceivable is an essential aspect of scientific work.
[19] Misha Strittmatter and Robert Heinrich. Challenges in the evolution of metamodels. In 3rd Collaborative Workshop on Evolution and Maintenance of Long-Living Software Systems, 2016, Softwaretechnik-Trends. accepted, to appear. [ bib | slides | .pdf ]
[20] Robert Heinrich, Philipp Merkle, Jörg Henß, and Barbara Paech. Integrated performance simulation of business processes and information systems. In Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik, 2016, pages 51-52. [ bib | .html | .pdf ]
[21] Robert Heinrich and Rainer Neumann, editors. Studierendenprogramm der Fachtagung „Modellierung 2016“, volume 2016,6 of Karlsruhe Reports in Informatics. Karlsruhe Institute of Technology, Faculty of Informatics, 2016. [ bib | DOI | http ]
[22] Stephan Seifermann. Architectural data flow analysis. In Proceedings of the 13th Working IEEE/IFIP Conference on Software Architecture, Venice, Italy, 2016, WICSA'16. IEEE. 2016, accepted, to appear. [ bib | .pdf ]
[23] Emre Taspolatoglu. Context-based architectural security analysis. In Proceedings of the 13th Working IEEE/IFIP Conference on Software Architecture, Venice, Italy, 2016, WICSA'16. IEEE. 2016, accepted, to appear. [ bib ]
[24] Heiko Klare, Michael Langhammer, and Max E. Kramer. Projecting UML Class Diagrams from Java Code Models. In VAO '16, Karlsruhe, Germany, 2016. [ bib ]
[25] Michael Langhammer, Arman Shahbazian, Nenad Medvidovic, and Ralf Reussner. Automated Extraction of Rich Software Models from Limited System Information. In Proceedings of the 13th Working IEEE/IFIP Conference on Software Architecture, Venice, Italy, 2016, WICSA'16. IEEE. 2016. [ bib ]
[26] Birgit Vogel-Heuser, Thomas Simon, Jens Folmer, Robert Heinrich, Kiana Rostami, and Ralf H. Reussner. Towards a common classification of changes for information and automated production systems as precondition for maintenance effort estimation. In IEEE - INDIN 2016: 14th International Conference on Industrial Informatics, 2016. accepted, to appear. [ bib ]
[27] Reiner Jung, Robert Heinrich, and Wilhelm Hasselbring. GECO: A generator composition approach for aspect-oriented DSLs. In 9th International Conference on Model Transformation, 2016. accepted, to appear. [ bib ]


Publications 2015

[1] Henning Groenda, Christoph Rathfelder, and Emre Taspolatoglu. SensIDL: Ein Werkzeug zur Vereinfachung der Schnittstellenimplementierung intelligenter Sensoren. In Themenspecial Internet der Dinge 2015, November 2015, page 4. [ bib | .pdf | Abstract ]
The ubiquitous mobile use of the Internet and the increasing integration of communication capabilities into everyday objects, both at home and in industrial settings (better known as the Internet of Things), lead to ever closer interconnection of a wide variety of systems. At home, televisions and smartphones, but also lighting, window and heating controls, refrigerators and entire home automation systems are being networked. In industry, networking is being greatly intensified as part of the fourth industrial revolution. The range of systems in use extends from high-performance server and PC systems, cloud services and mobile devices such as smartphones and tablets, to intelligent embedded mobile or stationary heterogeneous sensor systems with restricted power supply and limited computing capacity.
[2] Sergej Svorobej, James Byrne, Paul Liston, PJ Byrne, Christian Stier, Henning Groenda, Zafeirios Papazachos, and Dimitrios Nikolopoulos. Towards automated data-driven model creation for cloud computing simulation. In Eighth EAI International Conference on Simulation Tools and Techniques (SIMUTOOLS), August 2015. ACM. August 2015. [ bib | DOI ]
[3] Georg Hinkel and Lucia Happe. An NMF Solution to the TTC Train Benchmark Case. In Proceedings of the 8th Transformation Tool Contest, a part of the Software Technologies: Applications and Foundations (STAF 2015) federation of conferences, Louis Rose, Tassilo Horn, and Filip Krikava, editors, L'Aquila, Italy, July 24, 2015, volume 1524 of CEUR Workshop Proceedings, pages 142-146. CEUR-WS.org. July 2015. [ bib | .pdf ]
[4] Georg Hinkel. An NMF Solution to the Java Refactoring Case. In Proceedings of the 8th Transformation Tool Contest, a part of the Software Technologies: Applications and Foundations (STAF 2015) federation of conferences, Louis Rose, Tassilo Horn, and Filip Krikava, editors, L'Aquila, Italy, July 24, 2015, volume 1524 of CEUR Workshop Proceedings, pages 95-99. CEUR-WS.org. July 2015. [ bib | .pdf ]
[5] Nikolas Roman Herbst, Samuel Kounev, Andreas Weber, and Henning Groenda. BUNGEE: An Elasticity Benchmark for Self-Adaptive IaaS Cloud Environments. In Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2015), Firenze, Italy, May 18-19, 2015. Acceptance rate: 29%. [ bib | slides | .pdf | Abstract ]
Today's infrastructure clouds provide resource elasticity (i.e. auto-scaling) mechanisms enabling self-adaptive resource provisioning to reflect variations in the load intensity over time. These mechanisms impact the application performance; however, their effect in specific situations is hard to quantify and compare. To evaluate the quality of elasticity mechanisms provided by different platforms and configurations, respective metrics and benchmarks are required. Existing metrics for elasticity only consider the time required to provision and deprovision resources or the cost impact of adaptations. Existing benchmarks lack the capability to handle open workloads with realistic load intensity profiles and do not explicitly distinguish between the performance exhibited by the provisioned underlying resources, on the one hand, and the quality of the elasticity mechanisms themselves, on the other hand. In this paper, we propose reliable metrics for quantifying the timing aspects and accuracy of elasticity. Based on these metrics, we propose a novel approach for benchmarking the elasticity of Infrastructure-as-a-Service (IaaS) cloud platforms independent of the performance exhibited by the provisioned underlying resources. We show that the proposed metrics provide consistent ranking of elastic platforms on an ordinal scale. Finally, we present an extensive case study of real-world complexity demonstrating that the proposed approach is applicable in realistic scenarios and can cope with different levels of resource efficiency.
[6] Alexander Wert, Henning Schulz, and Christoph Heger. AIM: Adaptable Instrumentation and Monitoring for Automated Software Performance Analysis. In Automation of Software Test (AST), 2015 IEEE/ACM 10th International Workshop on, May 2015, pages 38-42. IEEE. May 2015. [ bib | DOI ]
[7] Sascha Alpers, Henning Groenda, and Klaus Krogmann. Orientierungswissen für Dienstanbieter zur Standardauswahl für Cloud-Dienstleistungen. Technical Report 5, Bundesministerium für Wirtschaft und Energie (BMWi), Kompetenzzentrum Trusted Cloud, April 2015. [ bib | .pdf ]
[8] Andreas Rentschler. Model Transformation Languages with Modular Information Hiding. PhD thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, April 2015. [ bib | DOI | http ]
[9] Robert Heinrich, Eric Schmieders, Reiner Jung, Wilhelm Hasselbring, Andreas Metzger, Klaus Pohl, and Ralf Reussner. Run-time architecture models for dynamic adaptation and evolution of cloud applications. Technical Report No. 1593, Kiel University, Kiel, Germany, April 2015. [ bib | http ]
[10] Klaus Krogmann, Matthias Naab, and Oliver Hummel. Agile Anti-Patterns - Warum viele Organisationen weniger agil sind, als sie denken. Entwickler Magazin Spezial: Agilität, 3, March 2015. re-published article. [ bib | .html ]
[11] Stephan Seifermann and Henning Groenda. Towards collaboration on accessible UML models. In Mensch und Computer 2015 - Workshopband, A. Weisbecker, M. Burmester, and A. Schmidt, editors, Stuttgart, Germany, 2015, pages 411-417. De Gruyter Oldenbourg. 2015. [ bib | .pdf ]
[12] Henning Groenda, Stephan Seifermann, Karin Müller, and Gerhard Jaworek. The cooperate assistive teamwork environment for software description languages. In Proceedings of the 13th European conference of the Association for the Advancement of Assistive Technology in Europe (AAATE 2015), Cecilia Sik-Lányi, Evert-Jan Hoogerwerf, Klaus Miesenberger, and Peter Cudd, editors, 2015, pages 111-118. IOS Press. 2015. [ bib | DOI | .pdf ]
[13] Henning Groenda. Anforderungen für die Archivierung von Patientendaten. In Schwerpunkt Medizintechnik, Fritz Münzel, editor, volume 02 of Technik in Bayern, pages 8-9. VDI Bezirksverein München, Oberbayern, Niederbayern e.V., 2015. [ bib ]
[14] Eya Ben Charrada, Anne Koziolek, and Martin Glinz. Supporting requirements update during software evolution. Journal of Software: Evolution and Process, 27(3):166-194, 2015. [ bib | DOI | http | http | Abstract ]
Updating the requirements specification when software systems evolve is a manual task that is expensive and time consuming. Therefore, maintainers usually apply the changes to the code directly and leave the requirements unchanged. This results in the requirements rapidly becoming obsolete and useless. In this paper, we propose an approach that supports the maintainer in keeping the requirements specification consistent with the implementation, by identifying the requirements that are impacted whenever the code is changed. Our approach works as follows. First, we analyse the changes that have been applied to the source code and detect if they are likely to impact the requirements or not. Second, we trace the requirements-impacting changes back to the requirements specification to identify the parts that might need to be modified. The output of the tracing is a list of requirements that are sorted according to their likelihood of being impacted. Automatically identifying the parts of the requirements specification that are likely to need maintenance reduces the effort needed for keeping the requirements up-to-date and thus makes the task of the maintainer easier. When applying our approach in three case studies, 70% to 100% of the impacted requirements were identified within a list that includes less than 20% of the total number of requirements in the specification.
[15] Andreas Brunnert, André van Hoorn, Felix Willnecker, Alexandru Danciu, Wilhelm Hasselbring, Christoph Heger, Nikolas Roman Herbst, Pooyan Jamshidi, Reiner Jung, Jóakim von Kistowski, Anne Koziolek, Johannes Kroß, Simon Spinner, Christian Vögele, Jürgen Walter, and Alexander Wert. Performance-oriented DevOps: A research agenda. Technical Report SPEC-RG-2015-01, SPEC Research Group - DevOps Performance Working Group, Standard Performance Evaluation Corporation (SPEC), 2015. [ bib | http ]
[16] Fabian Brosig, Philipp Meier, Steffen Becker, Anne Koziolek, Heiko Koziolek, and Samuel Kounev. Quantitative evaluation of model-driven performance analysis and simulation of component-based architectures. Software Engineering, IEEE Transactions on, 41(2):157-175, Feb 2015. [ bib | DOI | Abstract ]
During the last decade, researchers have proposed a number of model transformations enabling performance predictions. These transformations map performance-annotated software architecture models into stochastic models solved by analytical means or by simulation. However, so far, a detailed quantitative evaluation of the accuracy and efficiency of different transformations is missing, making it hard to select an adequate transformation for a given context. This paper provides an in-depth comparison and quantitative evaluation of representative model transformations to, e.g., Queueing Petri Nets and Layered Queueing Networks. The semantic gaps between typical source model abstractions and the different analysis techniques are revealed. The accuracy and efficiency of each transformation are evaluated by considering four case studies representing systems of different size and complexity. The presented results and insights gained from the evaluation help software architects and performance engineers to select the appropriate transformation for a given context, thus significantly improving the usability of model transformations for performance prediction.
[17] Matthias Galster, Mehdi Mirakhorli, and Anne Koziolek. Twin peaks goes agile. SIGSOFT Softw. Eng. Notes, 40(5):47-49, 2015, ACM, New York, NY, USA. [ bib | DOI | http | .pdf ]
[18] Robert Heinrich, Philipp Merkle, Jörg Henss, and Barbara Paech. Integrating business process simulation and information system simulation for performance prediction. Software & Systems Modeling, pages 1-21, 2015, Springer Berlin Heidelberg. [ bib | DOI | http | Abstract ]
Business process (BP) designs and enterprise information system (IS) designs are often not well aligned. Missing alignment may result in performance problems at run-time, such as large process execution time or overloaded IS resources. The complex interrelations between BPs and ISs are not adequately understood and considered in development so far. Simulation is a promising approach to predict performance of both BP and IS designs. Based on prediction results, design alternatives can be compared and verified against requirements. Thus, BP and IS designs can be aligned to improve performance. In current simulation approaches, BP simulation and IS simulation are not adequately integrated. This results in limited prediction accuracy due to neglected interrelations between the BP and the IS in simulation. In this paper, we present the novel approach Integrated Business IT Impact Simulation (IntBIIS) to adequately reflect the mutual impact between BPs and ISs in simulation. Three types of mutual impact between BPs and ISs in terms of performance are specified. We discuss several solution alternatives to predict the impact of a BP on the performance of ISs and vice versa. It is argued that an integrated simulation of BPs and ISs is best suited to reflect their interrelations. We propose novel concepts for continuous modeling and integrated simulation. IntBIIS is implemented by extending the Palladio tool chain with BP simulation concepts. In a real-life case study with a BP and IS from practice, we validate the feasibility of IntBIIS and discuss the practicability of the corresponding tool support.
[19] Georg Hinkel. Change Propagation in an Internal Model Transformation Language. In Theory and Practice of Model Transformations, pages 3-17. Springer, 2015. [ bib | slides | .pdf | Abstract ]
Despite good results, Model-Driven Engineering (MDE) has not been widely adopted in industry. According to studies by Staron and Mohagheghi, the lack of tool support is one of the major reasons for this. Although MDE has existed for more than a decade now, tool support is still insufficient. An approach to overcome this limitation for model transformations, which are a key part of MDE, is the usage of internal languages that reuse tool support for existing host languages. On the other hand, these internal languages typically do not provide key features like change propagation or bidirectional transformation. In this paper, we present an approach to use a single internal model transformation language to create unidirectional and bidirectional model transformations with optional change propagation. In total, we currently provide 18 operation modes based on a single specification. At the same time, the language may reuse tool support for C#. We validate the applicability of our language using a synthetic example with a transformation from finite state machines to Petri nets, where we achieved speedups of up to 48 compared to classical batch transformations.
[20] Georg Hinkel, Henning Groenda, Lorenzo Vannucci, Oliver Denninger, Nino Cauli, and Stefan Ulbrich. A Domain-Specific Language (DSL) for Integrating Neuronal Networks in Robot Control. In 2015 Joint MORSE/VAO Workshop on Model-Driven Robot Software Engineering and View-based Software-Engineering, 2015. [ bib | slides | .pdf | Abstract ]
Although robotics has made progress with respect to adaptability and interaction in natural environments, it cannot match the capabilities of biological systems. A promising approach to solve this problem is to create biologically plausible robot controllers that use detailed neuronal networks. However, this approach yields a large gap between the neuronal network and its connection to the robot on the one side and the technical implementation on the other. Existing approaches neglect bridging this gap between the disciplines and their different abstraction layers, but manually hand-craft the simulations. This makes tight technical integration cumbersome and error-prone, impairing round-trip validation and academic advancements. Our approach maps the problem to model-driven engineering techniques and defines a domain-specific language (DSL) for integrating biologically plausible neuronal networks in robot control algorithms. It provides different levels of abstraction and sets an interface standard for integration. Our approach is implemented in the Neuro-Robotics Platform (NRP) of the Human Brain Project (HBP). Its practical applicability is validated in a minimalist experiment inspired by the Braitenberg vehicles, based on the simulation of a four-wheeled Husky robot controlled by a neuronal network.
[21] Hoang Vu Nguyen, Klemens Böhm, Florian Becker, Bertrand Goldman, Georg Hinkel, and Emmanuel Müller. Identifying user interests within the data space - a case study with skyserver. In Proceedings of the 18th International Conference on Extending Database Technology, EDBT 2015, Brussels, Belgium, March 23-27, 2015., 2015, pages 641-652. [ bib | DOI | http | .pdf ]
[22] Lorenzo Vannucci, Alessandro Ambrosano, Nino Cauli, Ugo Albanese, Egidio Falotico, Stefan Ulbrich, Lars Pfotzer, Georg Hinkel, Oliver Denninger, Daniel Peppicelli, Luc Guyot, Axel von Arnim, Stefan Deser, Patrick Maier, Rüdiger Dillmann, Gudrun Klinker, Paul Levi, Alois Knoll, Marc-Oliver Gewaltig, and Cecilia Laschi. A visual tracking model implemented on the iCub robot as a use case for a novel neurorobotic toolkit integrating brain and physics simulation. In Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, Nov 2015, pages 1179-1184. [ bib | DOI ]
[23] Anne Koziolek. Interplay of design time optimization and run time optimization (talk abstract). In Model-driven Algorithms and Architectures for Self-Aware Computing Systems (Dagstuhl Seminar 15041), Dagstuhl Reports, Samuel Kounev, Xiaoyun Zhu, Jeffrey O. Kephart, and Marta Kwiatkowska, editors, volume 5, page 183. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 2015. Issue 1. [ bib | DOI | http ]
[24] Max E. Kramer, Michael Langhammer, Dominik Messinger, Stephan Seifermann, and Erik Burger. Change-driven consistency for component code, architectural models, and contracts. In Proceedings of the 18th International ACM SIGSOFT Symposium on Component-Based Software Engineering, Montréal, QC, Canada, 2015, CBSE '15, pages 21-26. ACM, New York, NY, USA. 2015. [ bib | DOI | http | .pdf ]
[25] Max E. Kramer, Michael Langhammer, Dominik Messinger, Stephan Seifermann, and Erik Burger. Realizing change-driven consistency for component code, architectural models, and contracts in vitruvius. Technical report, Karlsruhe Institute of Technology, Department of Informatics, Karlsruhe, 2015. [ bib | http | http ]
[26] Max E. Kramer. A generative approach to change-driven consistency in multi-view modeling. In Proceedings of the 11th International ACM SIGSOFT Conference on Quality of Software Architectures, Montréal, QC, Canada, 2015, QoSA '15, pages 129-134. ACM, New York, NY, USA. 2015, 20th International Doctoral Symposium on Components and Architecture (WCOP '15). [ bib | DOI | http | .pdf ]
[27] Max E. Kramer, Michael Langhammer, Dominik Messinger, Stephan Seifermann, and Erik Burger. Change-driven multi-view consistency for component models, code, and contracts. Poster at the 18th International ACM SIGSOFT Symposium on Component-Based Software Engineering, 2015. [ bib ]
[28] Lukas Märtin, Anne Koziolek, and Ralf H. Reussner. Quality-oriented decision support for maintaining architectures of fault-tolerant space systems. In Proceedings of the 2015 European Conference on Software Architecture Workshops, Dubrovnik/Cavtat, Croatia, September 7-11, 2015, Ivica Crnkovic, editor, 2015, pages 49:1-49:5. ACM. 2015. [ bib | DOI | http ]
[29] Phu H. Nguyen, Max Kramer, Jacques Klein, and Yves Le Traon. An extensive systematic review on the Model-Driven Development of secure systems. Information and Software Technology, 68:62-81, 2015, Elsevier Science Publishers B. V. [ bib | DOI | http | Abstract ]
Context: Model-Driven Security (MDS) is a specialised Model-Driven Engineering research area for supporting the development of secure systems. Over a decade of research on MDS has resulted in a large number of publications. Objective: To provide a detailed analysis of the state of the art in MDS, a systematic literature review (SLR) is essential. Method: We conducted an extensive SLR on MDS. Derived from our research questions, we designed a rigorous, extensive search and selection process to identify a set of primary MDS studies that is as complete as possible. Our three-pronged search process consists of automatic searching, manual searching, and snowballing. After discovering and considering more than a thousand relevant papers, we identified, strictly selected, and reviewed 108 MDS publications. Results: The results of our SLR show the overall status of the key artefacts of MDS and the identified primary MDS studies. For example, regarding the security modelling artefact, we found that developing domain-specific languages plays a key role in many MDS approaches. The current limitations in each MDS artefact are pointed out and corresponding potential research directions are suggested. Moreover, we categorise the identified primary MDS studies into 5 significant MDS studies and other emerging or less common MDS studies. Finally, some trend analyses of MDS research are given. Conclusion: Our results suggest the need for addressing multiple security concerns more systematically and simultaneously, for tool chains supporting the MDS development cycle, and for more empirical studies on the application of MDS methodologies. To the best of our knowledge, this SLR is the first in the field of Software Engineering that combines a snowballing strategy with database searching. This combination has delivered an extensive literature study on MDS.
[30] Qais Noorshams, Axel Busch, Samuel Kounev, and Ralf Reussner. The Storage Performance Analyzer: Measuring, Monitoring, and Modeling of I/O Performance in Virtualized Environments. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15. [ bib | DOI | http | .pdf ]
[31] Catia Trubiani, Anne Koziolek, and Lucia Happe. Exploiting software performance engineering techniques to optimise the quality of smart grid environments. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 199-202. ACM, New York, NY, USA. 2015. [ bib | DOI | http | .pdf ]
[32] Alexander Wert, Henning Schulz, Christoph Heger, and Roozbeh Farahbod. Generic instrumentation and monitoring description for software performance evaluation. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 203-206. ACM, New York, NY, USA. 2015. [ bib | DOI | http ]
[33] Alexander Wert. DynamicSpotter: Automatic, experiment-based diagnostics of performance problems (invited demonstration paper). In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 105-106. ACM, New York, NY, USA. 2015. [ bib | DOI | http ]
[34] Axel Busch, Qais Noorshams, Samuel Kounev, Anne Koziolek, Ralf Reussner, and Erich Amrehn. Automated Workload Characterization for I/O Performance Analysis in Virtualized Environments. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Austin, Texas, USA, 2015, ICPE '15, pages 265-276. ACM, New York, NY, USA. 2015, Acceptance Rate (Full Paper): 15/56 = 27%. [ bib | DOI | http | .pdf | Abstract ]
Next generation IT infrastructures are highly driven by virtualization technology. The latter enables flexible and efficient resource sharing, making it possible to improve system agility and reduce costs for IT services. Due to the sharing of resources and the increasing requirements of modern applications on I/O processing, the performance of storage systems is becoming a crucial factor. In particular, when migrating or consolidating different applications, the impact on their performance behavior is often an open question. Performance modeling approaches help to answer such questions; a prerequisite, however, is to find an appropriate workload characterization that is both easy to obtain from applications and sufficient to capture the important characteristics of the application. In this paper, we present an automated workload characterization approach that extracts a workload model to represent the main aspects of I/O-intensive applications using relevant workload parameters, e.g., request size or read-write ratio, in virtualized environments. Once extracted, workload models can be used to emulate the workload performance behavior in real-world scenarios like migration and consolidation scenarios. We demonstrate our approach in the context of two case studies of representative system environments. We present an in-depth evaluation of our workload characterization approach showing its effectiveness in workload migration and consolidation scenarios. We use an IBM System z equipped with an IBM DS8700 and a Sun Fire system as state-of-the-art virtualized environments. Overall, the evaluation of our workload characterization approach shows promising results to capture the relevant factors of I/O-intensive applications.
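
As a purely illustrative sketch of the workload parameters named in the abstract above (not the tool from the paper; the record type and field names are invented for this example), the following Java snippet derives the mean request size and the read-write ratio from a simple I/O request trace:

    import java.util.List;

    public class WorkloadCharacterization {

        // One observed I/O request: its size in bytes and whether it was a read.
        record IoRequest(long sizeBytes, boolean isRead) {}

        static double meanRequestSize(List<IoRequest> trace) {
            return trace.stream().mapToLong(IoRequest::sizeBytes).average().orElse(0);
        }

        static double readWriteRatio(List<IoRequest> trace) {
            long reads = trace.stream().filter(IoRequest::isRead).count();
            long writes = trace.size() - reads;
            return writes == 0 ? Double.POSITIVE_INFINITY : (double) reads / writes;
        }

        public static void main(String[] args) {
            List<IoRequest> trace = List.of(
                    new IoRequest(4096, true),
                    new IoRequest(8192, true),
                    new IoRequest(4096, false));
            System.out.printf("mean size: %.0f bytes, read/write ratio: %.1f%n",
                    meanRequestSize(trace), readWriteRatio(trace)); // 5461 bytes, 2.0
        }
    }
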
[35] Axel Busch. Automated decision support for recurring design decisions considering non-functional requirements. In Software Engineering 2015 - Workshopband, 2015, GI Lecture Notes in Informatics. Doctoral Symposium. [ bib | .pdf ]
[36] Kiana Rostami, Johannes Stammel, Robert Heinrich, and Ralf Reussner. Architecture-based assessment and planning of change requests. In Proceedings of the 11th International ACM SIGSOFT Conference on Quality of Software Architectures, Montreal, QC, Canada, 2015, QoSA '15, pages 21-30. ACM, New York, NY, USA. 2015. [ bib | http | .pdf ]
[37] Kiana Rostami. Domain-spanning maintainability analysis for software-intensive systems. In Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2015, Dresden, Germany, 17.-18. März 2015., 2015, pages 106-108. [ bib | .pdf ]
[38] Robert Heinrich, Kiana Rostami, Johannes Stammel, Thomas Knapp, and Ralf Reussner. Architecture-based analysis of changes in information system evolution. In Softwaretechnik-Trends, 2015, volume 35(2). [ bib | .pdf ]
[39] Alberto Avritzer, Laura Carnevali, Hamed Ghasemieh, Lucia Happe, Boudewijn R. Haverkort, Anne Koziolek, Daniel Menasche, Anne Remke, Sahra Sedigh Sarvestani, and Enrico Vicario. Survivability evaluation of gas, water and electricity infrastructures. In Proceedings of the Seventh International Workshop on the Practical Application of Stochastic Modelling (PASM), 2015, volume 310, pages 5 - 25. Electronic Notes in Theoretical Computer Science. 2015. [ bib | DOI | http | Abstract ]
The infrastructures used in cities to supply power, water and gas are consistently becoming more automated. As society depends critically on these cyber-physical infrastructures, their survivability assessment deserves more attention. In this overview, we first touch upon a taxonomy on survivability of cyber-physical infrastructures, before we focus on three classes of infrastructures (gas, water and electricity) and discuss recent modelling and evaluation approaches and challenges.
[40] Axel Busch, Misha Strittmatter, and Anne Koziolek. Assessing Security to Compare Architecture Alternatives of Component-Based Systems. In Proceedings of the IEEE International Conference on Software Quality, Reliability & Security, Vancouver, British Columbia, Canada, 2015, QRS '15, pages 99-108. IEEE Computer Society. 2015, Acceptance Rate (Full Paper): 20/91 = 22%. [ bib | DOI | .pdf | Abstract ]
Modern software development is typically performed by composing a software system from building blocks. The component-based paradigm has many advantages. However, security quality attributes of the overall architecture often remain unspecified and therefore cannot be considered when comparing several architecture alternatives. In this paper, we propose an approach for assessing the security of component-based software architectures. Our hierarchical model uses stochastic modeling techniques and includes several security-related factors, such as the attacker, his goals, the security attributes of a component, and the mutual security interferences between them. Applied to a component-based architecture, our approach yields its mean time to security failure, which assesses its degree of security. We extended the Palladio Component Model (PCM) by the necessary information to be able to use it as input for the security assessment. We use the PCM representation to show the applicability of our approach on an industry-related example.
[41] Christian Stier, Anne Koziolek, Henning Groenda, and Ralf Reussner. Model-Based Energy Efficiency Analysis of Software Architectures. In Proceedings of the 9th European Conference on Software Architecture (ECSA '15), Dubrovnik/Cavtat, Croatia, 2015, Lecture Notes in Computer Science. Springer. 2015, Acceptance Rate (Full Paper): 15/80 = 18.8%. [ bib | DOI | http | .pdf | Abstract ]
Design-time quality analysis of software architectures evaluates the impact of design decisions in quality dimensions such as performance. Architectural design decisions decisively impact the energy efficiency (EE) of software systems. Low EE not only results in higher operational cost due to power consumption. It indirectly necessitates additional capacity in the power distribution infrastructure of the target deployment environment. Methodologies that analyze EE of software systems are yet to reach an abstraction suited for architecture-level reasoning. This paper outlines a model-based approach for evaluating the EE of software architectures. First, we present a model that describes the central power consumption characteristics of a software system. We couple the model with an existing model-based performance prediction approach to evaluate the consumption characteristics of a software architecture in varying usage contexts. Several experiments show the accuracy of our architecture-level consumption predictions. Energy consumption predictions reach an error of less than 5.5% for stable and 3.7% for varying workloads. Finally, we present a round-trip design scenario that illustrates how the explicit consideration of EE supports software architects in making informed trade-off decisions between performance and EE.
[42] Sven Leonhardt, Benjamin Hettwer, Johannes Hoor, and Michael Langhammer. Integration of existing software artifacts into a view- and change-driven development approach. In Proceedings of the 2015 Joint MORSE/VAO Workshop on Model-Driven Robot Software Engineering and View-based Software-Engineering, L'Aquila, Italy, 2015, MORSE/VAO '15, pages 17-24. ACM, New York, NY, USA. 2015. [ bib | DOI | http ]
[43] Robert Heinrich, Stefan Gärtner, Tom-Michael Hesse, Thomas Ruhroth, Ralf Reussner, Kurt Schneider, Barbara Paech, and Jan Jürjens. A platform for empirical research on information system evolution. In 27th International Conference on Software Engineering and Knowledge Engineering, 2015, pages 415-420. Acceptance Rate (Full Paper) = 29%. [ bib | .pdf ]
[44] Wolf Zimmermann, Wolfgang Böhm, Clemens Grelck, Robert Heinrich, Reiner Jung, Marco Konersmann, Alexander Schlaefer, Eric Schmieders, Sibylle Schupp, Baltasar Trancón y Widemann, and Thorsten Weyer, editors. Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2015, Dresden, Germany, 17.-18. März 2015, volume 1337 of CEUR Workshop Proceedings. CEUR-WS.org, 2015. [ bib | http ]
[45] Birgit Vogel-Heuser, Stefan Feldmann, Jens Folmer, Susanne Rösch, Robert Heinrich, Kiana Rostami, and Ralf H. Reussner. Architecture-based assessment and planning of software changes in information and automated production systems. In IEEE International Conference on Systems, Man, and Cybernetics. 2015. [ bib | DOI | http | .pdf ]
[46] Robert Heinrich, Reiner Jung, Eric Schmieders, Andreas Metzger, Wilhelm Hasselbring, Ralf Reussner, and Klaus Pohl. Architectural run-time models for operator-in-the-loop adaptation of cloud applications. In IEEE 9th Symposium on the Maintenance and Evolution of Service-Oriented Systems and Cloud-Based Environments, 2015. IEEE. 2015. [ bib | .pdf ]
[47] Qais Noorshams. Modeling and Prediction of I/O Performance in Virtualized Environments. PhD thesis, Karlsruhe Institute of Technology (KIT), 2015. [ bib | http ]
[48] Misha Strittmatter, Kiana Rostami, Robert Heinrich, and Ralf Reussner. A modular reference structure for component-based architecture description languages. In 2nd International Workshop on Model-Driven Engineering for Component-Based Systems (ModComp), pages 36-41. CEUR, 2015. [ bib | slides | .pdf ]
[49] Christian Vögele, Robert Heinrich, Robert Heilein, Helmut Krcmar, and André van Hoorn. Modeling complex user behavior with the Palladio Component Model. In Softwaretechnik-Trends, 2015, volume 35(3). [ bib | .pdf ]
[50] Christian Stier, Anne Koziolek, Henning Groenda, and Ralf Reussner. Model-based analysis of energy efficiency for software architectures. Poster at the Symposium on Software Performance 2015, 2015. Best Poster Award. [ bib ]
[51] Henning Groenda and Christian Stier. Improving IaaS Cloud Analyses by Black-Box Resource Demand Modeling. In Symposium on Software Performance 2015, 2015. [ bib | .pdf ]
[52] Robert Heinrich, Stefan Gärtner, Tom-Michael Hesse, Thomas Ruhroth, Ralf Reussner, Kurt Schneider, Barbara Paech, and Jan Jürjens. The CoCoME platform: A research note on empirical studies in information system evolution. International Journal of Software Engineering and Knowledge Engineering, 25(09&10):1715-1720, 2015. [ bib | DOI | arXiv | http ]
[53] Michael Langhammer and Klaus Krogmann. A co-evolution approach for source code and component-based architecture models. In 17. Workshop Software-Reengineering und Evolution, 2015, volume 4. [ bib | http ]
[54] Philipp Merkle and Holger Knoche. Extending the Palladio Component Model to Analyze Data Contention for Modernizing Transactional Software Towards Service-Orientation. In Proceedings of the Symposium on Software Performance (SSP) 2015, 2015, Softwaretechnik-Trends. [ bib | .pdf ]


Publications 2014

[1] Robert Heinrich, Reiner Jung, Eric Schmieders, Andreas Metzger, Wilhelm Hasselbring, Klaus Pohl, and Ralf Reussner. Integrated observation and modeling techniques to support adaptation and evolution of software systems. In DFG Priority Program SPP1593, 4th Workshop, November 2014. [ bib | http | Abstract ]
iObserve is an approach to integrate model-driven monitoring with design time models of software systems and to reuse those models at run time to realize analyses based on the design time model. It is assumed that this reduces the effort required to interpret analysis results of a software system.
[2] Misha Strittmatter and Michael Langhammer. Identifying semantically cohesive modules within the Palladio meta-model. In Proceedings of the Symposium on Software Performance: Joint Descartes/Kieker/Palladio Days, Steffen Becker, Wilhelm Hasselbring, André van Hoorn, Samuel Kounev, and Ralf Reussner, editors, Stuttgart, Germany, November 26-28, 2014, pages 160-176. Universitätsbibliothek Stuttgart. November 2014. [ bib | slides | .pdf ]
[3] Misha Strittmatter. Enabling assembly of systems and its implications within the Palladio Component Model. In Proceedings of the Symposium on Software Performance: Joint Descartes/Kieker/Palladio Days, Steffen Becker, Wilhelm Hasselbring, André van Hoorn, Samuel Kounev, and Ralf Reussner, editors, Stuttgart, Germany, November 26-28, 2014, page 9. Universitätsbibliothek Stuttgart. November 2014, Talk Abstract. [ bib | slides | .pdf ]
[4] Reiner Jung, Misha Strittmatter, Philipp Merkle, and Robert Heinrich. Evolution of the Palladio Component Model: Process and modeling methods. In Proceedings of the Symposium on Software Performance: Joint Descartes/Kieker/Palladio Days, Steffen Becker, Wilhelm Hasselbring, André van Hoorn, Samuel Kounev, and Ralf Reussner, editors, Stuttgart, Germany, November 26-28, 2014, page 13. Universitätsbibliothek Stuttgart. November 2014, Talk Abstract. [ bib | slides | .pdf ]
[5] Christian Stier, Henning Groenda, and Anne Koziolek. Towards Modeling and Analysis of Power Consumption of Self-Adaptive Software Systems in Palladio. Technical report, University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, November 2014. [ bib | slides | .pdf ]
[6] Samuel Kounev, Fabian Brosig, and Nikolaus Huber. The Descartes Modeling Language. Technical report, Department of Computer Science, University of Wuerzburg, October 2014. [ bib | http | http | .pdf | Abstract ]
This technical report introduces the Descartes Modeling Language (DML), a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures and services. DML is designed to serve as a basis for self-aware resource management during operation ensuring that system QoS requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible.
[7] Andreas Rentschler, Dominik Werle, Qais Noorshams, Lucia Happe, and Ralf Reussner. Remodularizing Legacy Model Transformations with Automatic Clustering Techniques. In Proceedings of the 3rd Workshop on the Analysis of Model Transformations co-located with the 17th International Conference on Model Driven Engineering Languages and Systems (AMT@MODELS '14), Valencia, Spain, September 29, 2014, Benoit Baudry, Jürgen Dingel, Levi Lucio, and Hans Vangheluwe, editors, October 2014, volume 1277 of CEUR Workshop Proceedings, pages 4-13. CEUR-WS.org. October 2014. [ bib | http | .pdf ]
[8] Benjamin Klatt, Klaus Krogmann, and Christoph Seidl. Program Dependency Analysis for Consolidating Customized Product Copies. In IEEE 30th International Conference on Software Maintenance and Evolution (ICSME'14), September 2014, pages 496-500. Victoria, Canada. [ bib | DOI | Abstract ]
To cope with project constraints, copying and customizing existing software products is a typical practice to flexibly serve customer-specific needs. In the long term, this practice becomes a limitation for growth due to redundant maintenance efforts or wasted synergy and cross-selling potentials. To mitigate this limitation, customized copies need to be consolidated into a single, variable code base of a software product line (SPL). However, consolidation is tedious as one must identify and correlate differences between the copies to design future variability. For one, existing consolidation approaches lack support on the implementation level. In addition, approaches in the fields of difference analysis and feature detection are not sufficiently integrated for finding relationships between code modifications. In this paper, we present a remedy to this problem by integrating a difference analysis with a program dependency analysis based on Program Dependency Graphs (PDG) to reduce the effort for developers performing a consolidation when identifying dependent differences and deriving clusters to consider in their variability design. We successfully evaluated our approach on variants of the open source ArgoUML modeling tool, reducing the manual review effort by about 72% with a precision of 99% and a recall of 80%. We further proved its industrial applicability in a case study on a commercial relationship management application.
[9] Erik Burger. Flexible Views for View-based Model-driven Development. PhD thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, July 2014. [ bib | DOI | http ]
[10] Reiner Jung, Robert Heinrich, Eric Schmieders, Misha Strittmatter, and Wilhelm Hasselbring. A method for aspect-oriented meta-model evolution. In Proceedings of the 2Nd Workshop on View-Based, Aspect-Oriented and Orthographic Software Modelling, York, United Kingdom, July 2014, VAO '14, pages 19:19-19:22. ACM, New York, NY, USA. July 2014. [ bib | DOI | slides | http | .pdf | Abstract ]
Long-living systems face many modifications and extensions over time due to changing technology and requirements. This causes changes in the models reflecting the systems, and subsequently in the underlying meta-models, as their structure and semantics are adapted to accommodate these changes. Modifying meta-models requires adaptations in all tools realizing their semantics. This is a costly endeavor, especially for complex meta-models. To solve this problem, we propose a method to construct and refactor meta-models to be concise and focused on a small set of concerns. The method results in simpler meta-model modification scenarios and fewer modifications, as new concerns and aspects are encapsulated in separate meta-models. Furthermore, we define design patterns based on the different roles meta-models play in software. Thus, we keep large and complex modeling projects manageable due to the improved adaptability of their meta-model basis.
[11] Klaus Krogmann, Matthias Naab, and Oliver Hummel. Agile Anti-Patterns - Warum viele Organisationen weniger agil sind, als sie denken. JAXenter Business Technology, 2.14:29-34, June 2014. [ bib | http | Abstract ]
With the growing adoption of agile software development, the number of problematic projects is rising as well. Goals such as the ability to react quickly to change requests are not achieved, even though development (ostensibly) follows agile principles. In this article, we summarize recurring experiences from practice in the form of anti-patterns and describe how agile development has repeatedly been practiced too dogmatically or misused as an excuse for poor project organization. These anti-patterns enable readers to check their own projects for similar shortcomings and, where necessary, to act against them.
[12] Jiri Vinarek, Petr Hnetynka, Viliam Simko, and Petr Kroha. Recovering traceability links between code and specification through domain model extraction. In Proceedings of the 10th International Workshop, EOMAS 2014, Held at CAiSE 2014, Thessaloniki, Greece, June 16-17, 2014, volume 191 of LNBIP. [ bib | http | Abstract ]
Requirements traceability is an extremely important aspect of software development and especially of maintenance. Efficient maintenance of traceability links between high-level requirements specification and low-level implementation is hindered by many problems. In this paper, we propose a method for automated recovery of links between parts of the textual requirement specification and the source code of the implementation. The described method is based on a method allowing extraction of a prototype domain model from a plain text requirements specification. The proposed method is evaluated on two non-trivial examples. The performed experiments show that our method is able to link requirements with source code with an accuracy of F1 = 58-61%.
[13] Martin Küster and Klaus Krogmann. Checkable Code Decisions to Support Software Evolution. Softwaretechnik-Trends, 34(2):58-59, May 2014. [ bib | .pdf | Abstract ]
For the evolution of software, understanding the context, i.e. the history and rationale of the existing artifacts, is crucial to avoid "ignorant surgery", i.e. modifications to the software without understanding its design intent. Existing work on recording architecture decisions has mostly focused on architectural models. We extend this to code models and introduce a catalog of code decisions that can be found in object-oriented systems. With the presented approach, we make it possible to record design decisions that are concerned with the decomposition of the system into interfaces, classes, and references between them, or with how exceptions are handled. Furthermore, we indicate how decisions on the usage of Java frameworks (e.g. for dependency injection) can be recorded. All decision types presented are supplied with OCL constraints to check the validity of the decision based on the linked code model.
[14] Benjamin Klatt, Klaus Krogmann, and Volker Kuttruff. Developing Stop Word Lists for Natural Language Program Analysis. Softwaretechnik-Trends, 34(2):85-86, May 2014. [ bib | .pdf | Abstract ]
When implementing software, developers express conceptual knowledge (e.g. about a specific feature) not only in programming language syntax and semantics but also in linguistic information stored in identifiers (e.g. method or class names). Based on this habit, Natural Language Program Analysis (NLPA) is used to improve many different areas in software engineering, such as code recommendations or program analysis. Simplified, NLPA algorithms collect identifier names and apply term processing such as camel-case splitting (i.e. "MyIdentifier" to "My" and "Identifier") or stemming (i.e. "records" to "record") to subsequently perform further analyses. In our research context, we search for code locations sharing similar terms to link them with each other. In such types of analysis, filtering stop words is essential to reduce the number of useless links.
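As a rough illustration of the term-processing steps named in this abstract, the following sketch (Python) applies camel-case splitting, a deliberately crude suffix stemmer, and stop-word filtering; the stop-word list is an invented placeholder, not one of the lists the paper develops.

import re

# Minimal sketch of NLPA-style term processing. The stop-word list and
# the stemmer are illustrative placeholders only.

STOP_WORDS = {"get", "set", "my", "impl", "util"}  # hypothetical list

def split_camel_case(identifier):
    # "MyIdentifier" -> ["My", "Identifier"]; also handles acronym runs.
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)

def stem(term):
    # Crude stemming: "records" -> "record". A real system would use a
    # proper stemmer (e.g. Porter).
    return term[:-1] if term.endswith("s") else term

def terms_of(identifier):
    terms = (stem(t.lower()) for t in split_camel_case(identifier))
    return [t for t in terms if t not in STOP_WORDS]

print(terms_of("getCustomerRecords"))  # ['customer', 'record']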
[15] Benjamin Klatt, Klaus Krogmann, and Christian Wende. Consolidating Customized Product Copies to Software Product Lines. Softwaretechnik-Trends, 34(2):64-65, May 2014. [ bib | .pdf | Abstract ]
Reusing existing software solutions as a starting point for new projects is a frequent approach in the software business. Copying existing code and adapting it to customer-specific needs allows for flexible and efficient software customization in the short term. But in the long term, a Software Product Line (SPL) approach with a single code base and explicitly managed variability reduces maintenance effort and eases the instantiation of new products.
[16] Rouven Krebs, Simon Spinner, Nadia Ahmed, and Samuel Kounev. Resource Usage Control In Multi-Tenant Applications. In Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, May 26, 2014. IEEE/ACM. May 2014, Accepted for Publication. [ bib | .pdf | Abstract ]
Multi-tenancy is an approach to share one application instance among multiple customers by providing each of them a dedicated view. This approach is commonly used by SaaS providers to reduce the costs of service provisioning. Tenants also expect to be isolated in terms of the performance they observe, and the provider's inability to offer performance guarantees is a major obstacle for potential cloud customers. To guarantee isolated performance, it is essential to control the resources used by a tenant. This is a challenge because the layers of the execution environment responsible for controlling resource usage (e.g., the operating system) normally have no knowledge about entities defined at the application level and thus cannot distinguish between different tenants. Furthermore, it is hard to predict how tenant requests propagate through the multiple layers of the execution environment down to the physical resource layer. The intended abstraction of the application from the resource-controlling layers does not allow solving this problem solely within the application. In this paper, we propose an approach which applies resource demand estimation techniques in combination with a request-based admission control. The resource demand estimation is used to determine resource consumption information for individual requests. The admission control mechanism uses this knowledge to delay requests originating from tenants that exceed their allocated resource share. The proposed method is validated by a widely accepted benchmark, showing its applicability in a setup motivated by today's platform environments.
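A minimal sketch of the general mechanism described here (Python, with invented parameters; not the paper's algorithm): estimated per-request resource demands are debited against each tenant's allocated share per accounting window, and requests from tenants over budget are delayed.

import collections, time

# Minimal sketch: demand-aware admission control. Window size, shares,
# and the source of the demand estimates are all hypothetical.

WINDOW = 1.0                    # accounting window in seconds
SHARE = {"t1": 0.6, "t2": 0.4}  # CPU-time share per tenant per window

used = collections.defaultdict(float)
window_start = time.monotonic()

def admit(tenant, estimated_demand):
    """Return True if the request may enter now, False to delay it."""
    global window_start
    now = time.monotonic()
    if now - window_start >= WINDOW:      # start a new accounting window
        used.clear()
        window_start = now
    if used[tenant] + estimated_demand <= SHARE[tenant] * WINDOW:
        used[tenant] += estimated_demand  # debit the estimated CPU time
        return True
    return False                          # exceeds share: delay request

print(admit("t1", 0.2), admit("t1", 0.5))  # True False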
[17] Ivan Dario Paez Anaya, Viliam Simko, Johann Bourcier, and Noel Plouzeau. A prediction-driven adaptation approach for self-adaptive sensor networks. In Proceedings of 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'14), Hyderabad, India, May 31, 2014. [ bib | .pdf | Abstract ]
Engineering self-adaptive software in unpredictable environments such as pervasive systems, where network availability, remaining battery power, and environmental conditions may vary over the lifetime of the system, is a very challenging task. Many current software engineering approaches leverage run-time architectural models to ease the design of the autonomic control loop of these self-adaptive systems. While these approaches perform well in reacting to various evolutions of the runtime environment, implementations based on reactive paradigms have a limited ability to anticipate problems, leading to transient unavailability of the system, useless costly adaptations, or wasted resources. In this paper, we follow a proactive self-adaptation approach that aims at overcoming the limitations of reactive approaches. Based on predictive analysis of internal and external context information, our approach regulates new architecture reconfigurations and deploys them using models at runtime. We have evaluated our approach on a case study where we combined hourly temperature readings provided by the National Climatic Data Center (NCDC) with fire reports from the Moderate Resolution Imaging Spectroradiometer (MODIS) and simulated the behavior of multiple systems. The results confirm that our proactive approach outperforms a typical reactive system in scenarios with seasonal behavior.
[18] Benjamin Klatt, Klaus Krogmann, and Christian Wende. Consolidating Customized Product Copies to Software Product Lines. In 16th Workshop Software-Reengineering (WSRE'14), April 2014. Bad Honnef, Germany. [ bib | .pdf ]
[19] Benjamin Klatt, Klaus Krogmann, and Volker Kuttruff. Developing Stop Word Lists for Natural Language Program Analysis. In 16th Workshop Software-Reengineering (WSRE'14), April 2014. Bad Honnef, Germany. [ bib | .pdf ]
[20] Andreas Rentschler, Dominik Werle, Qais Noorshams, Lucia Happe, and Ralf Reussner. Designing Information Hiding Modularity for Model Transformation Languages. In Proceedings of the 13th International Conference on Modularity (AOSD '14), Lugano, Switzerland, April 22 - 26, 2014, April 2014, pages 217-228. ACM, New York, NY, USA. April 2014, Acceptance Rate: 35.0%. [ bib | DOI | http | .pdf ]
[21] Rouven Krebs and Manuel Loesch. Comparison of Request Admission Based Performance Isolation Approaches in Multi-Tenant SaaS Applications. In Proceedings of 4th International Conference On Cloud Computing And Services Science (CLOSER 2014), Barcelona, Spain, April 3, 2014. SciTePress. April 2014, Short Paper. [ bib | .pdf | Abstract ]
Multi-tenancy is an approach to share one application instance among multiple customers by providing each of them a dedicated view. This approach is commonly used by SaaS providers to reduce the costs of service provisioning. Tenants also expect to be isolated in terms of the performance they observe, and the provider's inability to offer performance guarantees is a major obstacle for potential cloud customers. To guarantee isolated performance, it is essential to control the resources used by a tenant. This is a challenge because the layers of the execution environment responsible for controlling resource usage (e.g., the operating system) normally have no knowledge about entities defined at the application level and thus cannot distinguish between different tenants. Furthermore, it is hard to predict how tenant requests propagate through the multiple layers of the execution environment down to the physical resource layer. The intended abstraction of the application from the resource-controlling layers does not allow solving this problem solely within the application. In this paper, we propose an approach which applies resource demand estimation techniques in combination with a request-based admission control. The resource demand estimation is used to determine resource consumption information for individual requests. The admission control mechanism uses this knowledge to delay requests originating from tenants that exceed their allocated resource share. The proposed method is validated by a widely accepted benchmark, showing its applicability in a setup motivated by today's platform environments.
[22] Erik Burger and Aleksandar Toshovski. Difference-based Conformance Checking for Ecore Metamodels. In Proceedings of Modellierung 2014, Vienna, Austria, March 21, 2014, volume 225 of GI-LNI, pages 97-104. [ bib | .pdf ]
[23] Jóakim Gunnarson von Kistowski, Nikolas Roman Herbst, and Samuel Kounev. Modeling Variations in Load Intensity over Time. In Proceedings of the 3rd International Workshop on Large-Scale Testing (LT 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014, pages 1-4. ACM, New York, NY, USA. March 2014. [ bib | DOI | slides | http | .pdf | Abstract ]
Today's software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. Based on this observation, we identify the need for means allowing the flexible definition of load profiles and address this by introducing two meta-models at different abstraction levels. At the lower abstraction level, the Descartes Load Intensity Meta-Model (DLIM) offers a structured and accessible way of describing the load intensity over time by editing and combining mathematical functions. The High-Level Descartes Load Intensity Meta-Model (HLDLIM) allows the description of load variations using a few defined parameters that characterize the seasonal patterns, trends, bursts, and noise parts. We demonstrate that both meta-models are capable of capturing real-world load profiles with acceptable accuracy through comparison with a real-life trace.
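The composition idea behind DLIM/HLDLIM can be sketched as follows (Python; the functions and parameters are illustrative only, and the real DLIM is an EMF-based meta-model, not code): a load intensity profile assembled from seasonal, trend, burst, and noise parts.

import math, random

# Minimal sketch: a load intensity profile composed of seasonal,
# trend, burst and noise parts. All shapes and constants are invented.

def seasonal(t):                 # daily sine pattern, period 24 h
    return 100 + 50 * math.sin(2 * math.pi * t / 24)

def trend(t):                    # slow linear growth
    return 2 * t

def burst(t):                    # short spike around t = 30 h
    return 400 * math.exp(-((t - 30) ** 2) / 2)

def noise():
    return random.gauss(0, 5)

def load_intensity(t):
    """Arrival rate (requests/hour) at time t (hours)."""
    return max(0.0, seasonal(t) + trend(t) + burst(t) + noise())

profile = [load_intensity(t) for t in range(48)]  # two days, hourly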
[24] Jóakim Gunnarson von Kistowski, Nikolas Roman Herbst, and Samuel Kounev. LIMBO: A Tool For Modeling Variable Load Intensities (Demonstration Paper). In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22-26, 2014, ICPE '14, pages 225-226. ACM, New York, NY, USA. March 2014. [ bib | DOI | slides | http | .pdf | Abstract ]
Modern software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. In this paper, we present LIMBO - an Eclipse-based tool for modeling variable load intensity profiles based on the Descartes Load Intensity Model as an underlying modeling formalism.
[25] Andreas Weber, Nikolas Roman Herbst, Henning Groenda, and Samuel Kounev. Towards a Resource Elasticity Benchmark for Cloud Environments. In Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability (HotTopiCS 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014. ACM. March 2014. [ bib | slides | .pdf | Abstract ]
Auto-scaling features offered by today's cloud infrastructures provide increased flexibility especially for customers that experience high variations in the load intensity over time. However, auto-scaling features introduce new system quality attributes when considering their accuracy, timing, and boundaries. Therefore, distinguishing between different offerings has become a complex task, as it is not yet supported by reliable metrics and measurement approaches. In this paper, we discuss shortcomings of existing approaches for measuring and evaluating elastic behavior and propose a novel benchmark methodology specifically designed for evaluating the elasticity aspects of modern cloud platforms. The benchmark is based on open workloads with realistic load variation profiles that are calibrated to induce identical resource demand variations independent of the underlying hardware performance. Furthermore, we propose new metrics that capture the accuracy of resource allocations and de-allocations, as well as the timing aspects of an auto-scaling mechanism explicitly.
[26] Simon Spinner, Giuliano Casale, Xiaoyun Zhu, and Samuel Kounev. LibReDE: A Library for Resource Demand Estimation (Demonstration Paper). In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22-26, 2014. ACM. March 2014, Accepted for Publication. [ bib | Abstract ]
When creating a performance model, it is necessary to quantify the amount of resources consumed by an application serving individual requests. In distributed enterprise systems, these resource demands usually cannot be observed directly; their estimation is therefore a major challenge. Different statistical approaches to resource demand estimation based on monitoring data have been proposed, e.g., using linear regression or Kalman filtering techniques. In this paper, we present LibReDE, a library of ready-to-use implementations of approaches to resource demand estimation that can be used for online and offline analysis. It is the first publicly available tool for this task and aims at supporting performance engineers during performance model construction. The library enables the quick comparison of the estimation accuracy of different approaches in a given context and thus helps to select an optimal one.
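One of the statistical approaches named in this abstract, linear regression, can be sketched from the utilization law U = Σ_i D_i · X_i, where U is the measured utilization, X_i the throughput of request class i, and D_i the unknown resource demand. The following sketch (Python with NumPy; invented example data, not LibReDE's API) solves for D by least squares.

import numpy as np

# Minimal sketch of regression-based resource demand estimation from
# the utilization law. Monitoring data below is invented.

# Per-interval throughputs (requests/s) of two request classes.
X = np.array([[10.0,  5.0],
              [20.0,  2.0],
              [ 5.0, 15.0],
              [12.0,  8.0]])
# Measured CPU utilization (0..1) in the same intervals.
U = np.array([0.35, 0.44, 0.40, 0.44])

# Non-negative least squares would be more robust; plain least squares
# suffices for the sketch.
D, *_ = np.linalg.lstsq(X, U, rcond=None)
print(D)  # estimated demands in seconds per request, roughly [0.02, 0.02]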
[27] Rouven Krebs, Philipp Schneider, and Nikolas Herbst. Optimization Method for Request Admission Control to Guarantee Performance Isolation. In Proceedings of the 2nd International Workshop on Hot Topics in Cloud Service Scalability (HotTopiCS 2014), co-located with the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, March 22, 2014. ACM. March 2014. [ bib | slides | .pdf | Abstract ]
Software-as-a-Service (SaaS) often shares one single application instance among different tenants to reduce costs. However, sharing potentially leads to undesired influence from one tenant onto the performance observed by the others. Furthermore, providing one tenant additional resources to support its increasing demands without increasing the performance of tenants who do not pay for it is a major challenge. The application intentionally does not manage hardware resources, and the OS is not aware of application-level entities like tenants. Thus, it is difficult to control the performance of different tenants to keep them isolated. These problems gain importance as performance is one of the major obstacles for cloud customers. Existing work applies request-based admission control mechanisms such as a weighted round robin with an individual queue for each tenant to control the share guaranteed to a tenant. However, the computation of the concrete weights for such an admission control is still challenging. In this paper, we present a fitness function and optimization approach reflecting various requirements from this field to compute proper weights, with the goal of ensuring isolated performance as a foundation for scaling on a per-tenant basis.
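The admission-control mechanism this abstract builds on, a weighted round robin over per-tenant queues, can be sketched as follows (Python; the weights are fixed placeholders, whereas computing them is exactly what the paper's fitness function and optimization address).

import collections

# Minimal sketch: weighted round robin over per-tenant request queues.
# Tenants, weights, and requests are invented example data.

queues = {t: collections.deque() for t in ("t1", "t2", "t3")}
weights = {"t1": 3, "t2": 2, "t3": 1}  # hypothetical optimizer output

def wrr_schedule():
    """Yield the next admitted request, cycling tenants by weight."""
    while any(queues.values()):
        for tenant, weight in weights.items():
            for _ in range(weight):
                if queues[tenant]:
                    yield tenant, queues[tenant].popleft()

for t, req in (("t1", "a"), ("t1", "b"), ("t2", "c"), ("t3", "d")):
    queues[t].append(req)
print(list(wrr_schedule()))
# [('t1', 'a'), ('t1', 'b'), ('t2', 'c'), ('t3', 'd')]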
[28] Stefan Wesner, Henning Groenda, James Byrne, Sergej Svorobej, Christopher Hauser, and Jörg Domaschka. Optimised cloud data centre operation supported by simulation. In eChallenges e-2014 Conference Proceedings, Paul Cunningham and Miriam Cunningham, editors, 2014, volume 2014. IIMC International Information Management Corporation. [ bib ]
[29] Alberto Avritzer, Laura Carnevali, Lucia Happe, Anne Koziolek, Daniel Sadoc Menasche, Marco Paolieri, and Sindhu Suresh. A scalable approach to the assessment of storm impact in distributed automation power grids. In Quantitative Evaluation of Systems, 11th International Conference, QEST 2014, Florence, Italy, September 8-10, 2014, Proceedings, Gethin Norman and William Sanders, editors, 2014, volume 8657 of Lecture Notes in Computer Science, pages 345-367. Springer-Verlag Berlin Heidelberg. 2014. [ bib | http | Abstract ]
We present models and metrics for the survivability assessment of distribution power grid networks accounting for the impact of multiple failures due to large storms. The analytical models used to compute the proposed metrics are built on top of three design principles: state space factorization, state aggregation, and initial state conditioning. Using these principles, we build scalable models that are amenable to analytical treatment and efficient numerical solution. Our models capture the impact of using reclosers and tie switches to enable faster service restoration after large storms. We have evaluated the presented models using data from a real power distribution grid impacted by a large storm: Hurricane Sandy. Our empirical results demonstrate that our models are able to efficiently evaluate the impact of storm hardening investment alternatives on customer-affecting metrics such as the expected energy not supplied until complete system recovery.
[30] Erik Burger, Jörg Henß, Martin Küster, Steffen Kruse, and Lucia Happe. View-Based Model-Driven Software Development with ModelJoin. Software & Systems Modeling, 15(2):472-496, 2014, Springer Berlin / Heidelberg. [ bib | DOI | .pdf ]
[31] Erik Burger, Jörg Henß, Steffen Kruse, Martin Küster, Andreas Rentschler, and Lucia Happe. ModelJoin: A Textual Domain-Specific Language for the Combination of Heterogeneous Models. Technical Report 1, Karlsruhe Institute of Technology, Faculty of Informatics, 2014. [ bib | http ]
[32] Lucia Happe, Erik Burger, Max Kramer, Andreas Rentschler, and Ralf Reussner. Completion and Extension Techniques for Enterprise Software Performance Engineering. In Future Business Software - Current Trends in Business Software Development, Gino Brunetti, Thomas Feld, Joachim Schnitter, Lutz Heuser, and Christian Webel, editors, Progress in IS, pages 117-131. Springer International Publishing, 2014. [ bib | DOI ]
[33] Lucia Happe and Anne Koziolek. A common analysis framework for smart distribution networks applied to security and survivability analysis (talk abstract). In Randomized Timed and Hybrid Models for Critical Infrastructures (Dagstuhl Seminar 14031), Dagstuhl Reports, Erika Ábrahám, Alberto Avritzer, Anne Remke, and William H. Sanders, editors, volume 4, pages 45-46. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 2014. Issue 1. [ bib | DOI | http ]
[34] Christoph Heger and Robert Heinrich. Deriving work plans for solving performance and scalability problems. In Computer Performance Engineering, András Horváth and Katinka Wolter, editors, volume 8721 of Lecture Notes in Computer Science, pages 104-118. Springer International Publishing, 2014. [ bib | DOI | http ]
[35] Christoph Heger, Alexander Wert, and Roozbeh Farahbod. Vergil: Guiding developers through performance and scalability inferno. In The Ninth International Conference on Software Engineering Advances (ICSEA), Herwig Mannaert, Luigi Lavazza, Roy Oberhauser, Mira Kajko-Mattsson, and Michael Gebhart, editors, pages 598-608. IARIA, 2014. [ bib | http ]
[36] Georg Hinkel and Lucia Happe. Using component frameworks for model transformations by an internal DSL. In Proceedings of the 1st International Workshop on Model-Driven Engineering for Component-Based Software Systems co-located with ACM/IEEE 17th International Conference on Model Driven Engineering Languages & Systems (MoDELS 2014), 2014, volume 1281 of CEUR Workshop Proceedings, pages 6-15. CEUR-WS.org. 2014. [ bib | slides | .pdf | Abstract ]
To increase the development productivity, reuse opportunities, maintainability, and quality of complex model transformations, modularization techniques are indispensable. Component-Based Software Engineering targets the challenge of modularity and is well established in languages like Java or C# with component models like .NET, EJB, or OSGi. Current model transformation languages still have many challenging barriers to overcome before they provide comparable support for component-based development of model transformations. Therefore, this paper provides a pragmatic solution based on NMF Transformations, a model transformation language realized as an internal DSL embedded in C#. An internal DSL can take advantage of the whole expressiveness of, and tooling built for, the well-established and well-known host language. In this work, we use the component model of the .NET platform to represent reusable components of model transformations in order to support internal and external model transformation composition. The transformation components are hidden behind transformation rule interfaces that can be exchanged dynamically through configuration. Using this approach, we illustrate how to tackle typical issues of integrity and versioning, such as detecting versioning conflicts for model transformations.
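The composition idea, transformation rules hidden behind interfaces and exchanged through configuration, can be sketched as follows (in Python for brevity; the paper realizes it as a C# internal DSL on .NET with NMF Transformations, so all names here are illustrative, not that API).

# Minimal sketch of the composition idea only: transformation rules
# behind an interface, injected via configuration.

class Rule:
    """Transformation rule interface: maps a source element to a target."""
    def transform(self, element):
        raise NotImplementedError

class ClassToTable(Rule):
    def transform(self, element):
        return {"table": element["name"].lower()}

class ClassToDocument(Rule):              # alternative component version
    def transform(self, element):
        return {"collection": element["name"]}

class Transformation:
    def __init__(self, config):
        self.rule = config["class_rule"]  # rule injected via configuration

    def run(self, model):
        return [self.rule.transform(e) for e in model]

model = [{"name": "Order"}, {"name": "Customer"}]
print(Transformation({"class_rule": ClassToTable()}).run(model))
# Swapping the component only changes the configuration:
print(Transformation({"class_rule": ClassToDocument()}).run(model))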
[37] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. Concurrency and Computation - Practice and Experience, Special Issue with extended versions of the best papers from ICPE 2013, John Wiley and Sons, Ltd., 2014. [ bib | DOI | http | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
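The selection idea, choosing a forecasting method to fit the available history and the user's objectives, can be sketched as follows (Python; the paper's actual decision tree, method set, and feedback cycles are more elaborate, and everything below is illustrative).

# Minimal sketch: select a forecasting method of appropriate overhead
# for the available history and objectives. Methods and thresholds are
# invented stand-ins for the paper's decision tree.

def naive(history):                    # negligible overhead
    return history[-1]

def moving_average(history, k=5):      # low overhead
    return sum(history[-k:]) / min(k, len(history))

def ses(history, alpha=0.5):           # simple exponential smoothing
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def select_method(history, horizon, budget):
    if len(history) < 5 or budget == "minimal":
        return naive
    if horizon <= 1 and budget == "low":
        return moving_average
    return ses

history = [100, 104, 99, 107, 111, 115]
method = select_method(history, horizon=1, budget="normal")
print(method.__name__, method(history))  # ses 111.1875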
[38] Benjamin Klatt, Klaus Krogmann, and Michael Langhammer. Individual Code-Analyzes in Practice. In Proceedings of Software Engineering 2014 (SE2014), Wilhelm Hasselbring and Nils Christian Ehmke, editors, January 2014, volume P-227 of Lecture Notes in Informatics (LNI). Kiel, Germany. [ bib | .pdf ]
[39] Heiko Koziolek, Steffen Becker, Jens Happe, Petr Tuma, and Thijmen de Gooijer. Towards software performance engineering for multicore and manycore systems. SIGMETRICS Perform. Eval. Rev., 41(3):2-11, 2014, ACM, New York, NY, USA. [ bib | DOI | http ]
[40] Max E. Kramer. Synchronizing heterogeneous models in a view-centric engineering approach. In Software Engineering 2014 - Fachtagung des GI-Fachbereichs Softwaretechnik, Wilhelm Hasselbring and Nils Christian Ehmke, editors, Kiel, Germany, 2014, volume 227 of GI Lecture Notes in Informatics, pages 233-236. Gesellschaft für Informatik e.V. (GI). 2014, Doctoral Symposium. [ bib | .pdf | .pdf ]
[41] Max E. Kramer and Michael Langhammer. Proposal for a multi-view modelling case study: Component-based software engineering with UML, plug-ins, and Java. In Proceedings of the 2nd Workshop on View-Based, Aspect-Oriented and Orthographic Software Modelling, York, United Kingdom, 2014, VAO '14, pages 7:7-7:10. ACM, New York, NY, USA. 2014. [ bib | DOI | http | .pdf ]
[42] Max E. Kramer, Anton Hergenröder, Martin Hecker, Simon Greiner, and Kaibin Bao. Specification and verification of confidentiality in component-based systems. Poster at the 35th IEEE Symposium on Security and Privacy, 2014. [ bib | .pdf | .pdf ]
[43] Michael Langhammer and Max E. Kramer. Determining the intent of code changes to sustain attached model information during code evolution. In Fachgruppenbericht des 2. Workshops “Modellbasierte und Modellgetriebene Softwaremodernisierung”, volume 34 (2) of Softwaretechnik-Trends. Gesellschaft für Informatik e.V. (GI), 2014. [ bib | http | .pdf ]
[44] Steffen Becker, Wilhelm Hasselbring, Andre van Hoorn, Samuel Kounev, Ralf Reussner, et al. Proceedings of the 2014 Symposium on Software Performance (SOSP'14): Joint Descartes/Kieker/Palladio Days. 2014, Stuttgart, Germany, Universität Stuttgart. [ bib ]
[45] Tomás Martinec, Lukás Marek, Antonín Steinhauser, Petr Tůma, Qais Noorshams, Andreas Rentschler, and Ralf Reussner. Constructing performance model of JMS middleware platform. In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering, Dublin, Ireland, 2014, ICPE '14, pages 123-134. ACM, New York, NY, USA. 2014. [ bib | DOI | http ]
[46] Daniel Sadoc Menasché, Alberto Avritzer, Sindhu Suresh, Rosa M. Leão, Edmundo de Souza e Silva, Morganna Diniz, Kishor Trivedi, Lucia Happe, and Anne Koziolek. Assessing survivability of smart grid distribution network designs accounting for multiple failures. Concurrency and Computation: Practice and Experience, 26(12):1949-1974, 2014. [ bib | DOI | http | .pdf | Abstract ]
Smart grids are fostering a paradigm shift in the realm of power distribution systems. Whereas traditionally different components of the power distribution system have been provided and analyzed by different teams through different lenses, smart grids require a unified and holistic approach that takes into consideration the interplay of communication reliability, energy backup, distribution automation topology, energy storage, and intelligent features such as automated fault detection, isolation, and restoration (FDIR) and demand response. In this paper, we present an analytical model and metrics for the survivability assessment of the distribution power grid network. The proposed metrics extend the system average interruption duration index, accounting for the fact that after a failure, the energy demand and supply will vary over time during a multi-step recovery process. The analytical model used to compute the proposed metrics is built on top of three design principles: state space factorization, state aggregation, and initial state conditioning. Using these principles, we reduce a Markov chain model with large state space cardinality to a set of much simpler models that are amenable to analytical treatment and efficient numerical solution. In case demand response is not integrated with FDIR, we provide closed form solutions to the metrics of interest, such as the mean time to repair a given set of sections. Under specific independence assumptions, we show how the proposed methodology can be adapted to account for multiple failures. We have evaluated the presented model using data from a real power distribution grid, and we have found that survivability of distribution power grids can be improved by the integration of the demand response feature with automated FDIR approaches. Our empirical results indicate the importance of quantifying survivability to support investment decisions at different parts of the power grid distribution network.
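The "energy not supplied" style of metric described above can be written, in standard power-systems notation, as an unmet-demand integral; the following is a plausible formalization only, as the paper's exact definitions differ in detail. With D(t) the demand and S(t) the supply during the multi-step recovery that completes at time T after a failure at t = 0:

\[
  \mathrm{ENS} \;=\; \int_{0}^{T} \bigl( D(t) - S(t) \bigr)^{+} \,\mathrm{d}t,
  \qquad (x)^{+} := \max(x, 0),
\]

and a survivability metric of this kind is the expectation E[ENS] conditioned on the initial failure state, computed efficiently via the state space factorization and aggregation principles named in the abstract.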
[47] Qais Noorshams, Kiana Rostami, Samuel Kounev, and Ralf Reussner. Modeling of I/O Performance Interference in Virtualized Environments with Queueing Petri Nets. In Proceedings of the IEEE 22nd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, Paris, France, 2014, MASCOTS '14. [ bib | .pdf ]
[48] Qais Noorshams, Roland Reeb, Andreas Rentschler, Samuel Kounev, and Ralf Reussner. Enriching software architecture models with statistical models for performance prediction in modern storage environments. In Proceedings of the 17th International ACM Sigsoft Symposium on Component-based Software Engineering, Marcq-en-Bareul, France, 2014, CBSE '14, pages 45-54. ACM, New York, NY, USA. 2014, Acceptance Rate (Full Paper): 14/62 = 23%. [ bib | DOI | http | .pdf ]
[49] Qais Noorshams, Axel Busch, Andreas Rentschler, Dominik Bruhn, Samuel Kounev, Petr Tůma, and Ralf Reussner. Automated Modeling of I/O Performance and Interference Effects in Virtualized Storage Systems. In 34th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS 2014 Workshops). 4th International Workshop on Data Center Performance, DCPerf '14, Madrid, Spain, 2014, pages 88-93. [ bib | DOI | http | .pdf ]
[50] Andreas Rentschler and Per Sterner. Interactive Dependency Graphs for Model Transformation Analysis. In Joint Proceedings of MODELS'13 Invited Talks, Demonstration Session, Poster Session, and ACM Student Research Competition co-located with the 16th International Conference on Model Driven Engineering Languages and Systems (MODELS '13), Miami, USA, September 29 - October 4, 2013, Yan Liu and Steffen Zschaler, editors, January 2014, volume 1115 of CEUR Workshop Proceedings, pages 36-40. CEUR-WS.org. January 2014. [ bib | http | .pdf ]
[51] Stephan Seifermann. Model-driven co-evolution of contracts, unit-tests and source-code. Master's thesis, Karlsruhe Institute of Technology (KIT), Germany, 2014. [ bib ]
[52] Catia Trubiani, Anne Koziolek, Vittorio Cortellessa, and Ralf Reussner. Guilt-based handling of software performance antipatterns in Palladio architectural models. Journal of Systems and Software, 95:141 - 165, 2014. [ bib | DOI | http | .pdf | Abstract ]
Antipatterns are conceptually similar to patterns in that they document recurring solutions to common design problems. Software Performance Antipatterns document common performance problems in the design as well as their solutions. The definition of performance antipatterns concerns software properties that can include static, dynamic, and deployment aspects. To make use of such knowledge, we propose an approach that helps software architects to identify and solve performance antipatterns. Our approach provides software performance feedback to architects, since it suggests the design alternatives that allow overcoming the detected performance problems. The feedback process may be quite complex, since architects may have to assess several design options before achieving the architectural model that best fits the end-user expectations. In order to optimise such a process, we introduce a ranking methodology that identifies, among a set of detected antipatterns, the “guilty” ones, i.e. the antipatterns that most likely contribute to the violation of specific performance requirements. The introduction of our ranking process leads the system to converge towards the desired performance improvement by discarding a considerable portion of the design alternatives. Four case studies in different application domains have been used to assess the validity of the approach.
[53] Alexander Wert, Marius Oehler, Christoph Heger, and Roozbeh Farahbod. Automatic Detection of Performance Anti-patterns in Inter-component Communications. In Proceedings of the 10th International Conference on Quality of Software Architectures, Lille, France, 2014, QoSA '14. Acceptance Rate: 27%. [ bib | http ]
[54] Rebekka Wohlrab, Thijmen de Gooijer, Anne Koziolek, and Steffen Becker. Experience of pragmatically combining RE methods for performance requirements in industry. In Proceedings of the 22nd IEEE International Requirements Engineering Conference (RE), Aug 2014, pages 344-353. [ bib | DOI | .pdf | Abstract ]
To meet end-user performance expectations, precise performance requirements are needed during development and testing, e.g., to conduct detailed performance and load tests. However, in practice, several factors complicate performance requirements elicitation: a lack of skills in performance requirements engineering, outdated or unavailable functional specifications and architecture models, the difficulty of specifying the system's context, a lack of experience in collecting good performance requirements in an industrial setting with very limited time, etc. Among the small set of available non-functional requirements engineering methods, no method alone leads to precise and complete performance requirements with feasible effort and has been reported to work in an industrial setting. In this paper, we present our experiences in combining existing requirements engineering methods into a performance requirements method called PROPRE. It has been designed to require no up-to-date system documentation and to be applicable with limited time and effort. We have successfully applied PROPRE in an industrial case study from the process automation domain. Our lessons learned show that the stakeholders gathered good performance requirements, which now improve performance testing.
[55] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Architecture-Level Software Performance Abstractions for Online Performance Prediction. Elsevier Science of Computer Programming Journal (SciCo), 90, Part B:71 - 92, 2014, Elsevier. [ bib | DOI | http | .pdf ]
[56] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. Modeling Run-Time Adaptation at the System Architecture Level in Dynamic Service-Oriented Environments. Service Oriented Computing and Applications Journal (SOCA), 8(1):73-89, 2014, Springer London. [ bib | DOI | .pdf ]
[57] Zoya Durdik and Ralf Reussner. On the Appropriate Rationale for Using Design Patterns and Pattern Documentation. In Proceedings of Software Engineering (SE2014), 2014. [ bib ]
[58] Christian Stier. Transaction-Aware Software Performance Prediction. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, January 2014. [ bib | .pdf ]
[59] Fabian Gorsler, Fabian Brosig, and Samuel Kounev. Performance queries for architecture-level performance models. In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014), Dublin, Ireland, 2014. ACM, New York, NY, USA. 2014, Accepted for publication. Acceptance Rate (Full Paper): 29%. [ bib ]
[60] Philipp Merkle and Christian Stier. Modelling Database Lock-Contention in Architecture-level Performance Simulation. In Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering, Dublin, Ireland, 2014, ICPE '14. ACM, New York, NY, USA. 2014, Work-In-Progress Paper. [ bib ]
[61] Piotr Rygielski and Samuel Kounev. Data Center Network Throughput Analysis using Queueing Petri Nets. In 34th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS 2014 Workshops). 4th International Workshop on Data Center Performance (DCPerf 2014), Madrid, Spain, 2014. (Paper accepted for publication). [ bib ]
[62] Viliam Simko, David Hauzar, Petr Hnetynka, Tomas Bures, and Frantisek Plasil. Formal verification of annotated textual use-cases. The Computer Journal, 2014. accepted for publication. [ bib | DOI | Abstract ]
Textual use-cases have been traditionally used in the initial stages of the software development process to describe software functionality from the user's perspective. Their advantage is that they can be easily understood by stakeholders and domain experts. However, since use-cases typically rely on natural language, they cannot be directly subject to a formal verification. In this article, we present a method (called FOAM) for formal verification of use-cases. This method features simple user-definable annotations, which are inserted into a use-case to make its semantics more suitable for verification. Subsequently a model-checking tool is employed to verify temporal invariants associated with the annotations. This way, FOAM allows harnessing the benefits of model-checking while still keeping the use-cases understandable for non-experts.
[63] P-O Östberg, Henning Groenda, Stefan Wesner, James Byrne, Dimitrios S. Nikolopoulos, Craig Sheridan, Jakub Krzywda, Ahmed Ali-Eldin, Johan Tordsson, Erik Elmroth, Christian Stier, Klaus Krogmann, Jörg Domaschka, Christopher Hauser, PJ Byrne, Sergej Svorobej, Barry McCollum, Zafeirios Papazachos, Loke Johannessen, Stephan Rüth, and Dragana Paurevic. The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation. In Proceedings of the Sixth IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2014, pages 26-31. IEEE Computer Society, Singapore. 2014. [ bib | DOI ]
[64] Benjamin Klatt. Consolidation of Customized Product Copies into Software Product Lines. PhD thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2014. [ bib | http | Abstract ]
Copy-based customization is a widespread technique to serve individual customer needs with existing software solutions. To cope with the long-term disadvantages resulting from this practice, this dissertation develops an approach to support the consolidation of such copies into a Software Product Line with a future-compliant product base providing managed variability.
[65] Robert Heinrich, Eric Schmieders, Reiner Jung, Kiana Rostami, Andreas Metzger, Wilhelm Hasselbring, Ralf H. Reussner, and Klaus Pohl. Integrating run-time observations and design component models for cloud system analysis. In Proceedings of the 9th Workshop on Models@run.time co-located with 17th International Conference on Model Driven Engineering Languages and Systems (MODELS 2014), Valencia, Spain, September 30, 2014, pages 41-46. [ bib | .pdf ]
[66] Robert Heinrich. Aligning Business Processes and Information Systems: New Approaches to Continuous Quality Engineering. Springer, 2014. [ bib | DOI ]
[67] Klaus Schmid, Wolfgang Böhm, Robert Heinrich, Andrea Herrmann, Anne Hoffmann, Dieter Landes, Marco Konersmann, Thomas Ruhroth, Oliver Sander, Volker Stolz, Baltasar Trancón y Widemann, and Rüdiger Weißbach, editors. Gemeinsamer Tagungsband der Workshops der Tagung Software Engineering 2014, 25.-26. Februar 2014 in Kiel, Deutschland, volume 1129 of CEUR Workshop Proceedings. CEUR-WS.org, 2014. [ bib | http ]
[68] Robert Heinrich, Kathrin Kirchner, and Rüdiger Weißbach, editors. 1st IEEE International Workshop on the Interrelations between Requirements Engineering and Business Process Management, REBPM 2014, Karlskrona, Sweden, August 25, 2014. IEEE, 2014. [ bib | http ]


Publications 2013

[1] Misha Strittmatter, Philipp Merkle, Andreas Rentschler, and Michael Langhammer. Towards a modular Palladio Component Model. In Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days, Steffen Becker, Wilhelm Hasselbring, André van Hoorn, and Ralf Reussner, editors, Karlsruhe, Germany, November 27-29, 2013, volume 1083, pages 49-58. CEUR Workshop Proceedings. November 2013. [ bib | slides | http | .pdf ]
[2] Fabian Gorsler, Fabian Brosig, and Samuel Kounev. Controlling the Palladio Bench using the Descartes Query Language. In Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days (KPDAYS 2013), Steffen Becker, Wilhelm Hasselbring, André van Hoorn, and Ralf Reussner, editors, November 2013, number 1083, pages 109-118. CEUR-WS.org, Aachen, Germany. November 2013. [ bib | http | .pdf | Abstract ]
The Palladio Bench is a tool to model, simulate and analyze Palladio Component Model (PCM) instances. However, for the Palladio Bench, no single interface to automate experiments or Application Programming Interface (API) to trigger the simulation of PCM instances and to extract performance prediction results is available. The Descartes Query Language (DQL) is a novel approach of a declarative query language to integrate different performance modeling and prediction techniques behind a unifying interface. Users benefit from the abstraction of specific tools to prepare and trigger performance predictions, less effort to obtain performance metrics of interest, and means to automate performance predictions. In this paper, we describe the realization of a DQL Connector for PCM and demonstrate the applicability of our approach in a case study.
[3] Piotr Rygielski, Samuel Kounev, and Steffen Zschaler. Model-Based Throughput Prediction in Data Center Networks. In Proceedings of the 2nd IEEE International Workshop on Measurements and Networking (M&N 2013), Naples, Italy, October 7-8, 2013, pages 167-172. [ bib | .pdf ]
[4] Jürgen Walter. Parallel Simulation of Queueing Petri Net Models. Diploma thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, October 2013. [ bib | .pdf | Abstract ]
For years, the CPU clock frequency was the key to improving processor performance. Nowadays, modern processors enable performance improvements by increasing the number of cores. However, existing software needs to be adapted to be able to utilize multiple cores. Such an adaptation poses many challenges in the field of discrete-event software simulation. Decades of intensive research have been spent on finding a general solution for parallel discrete-event simulation. In this context, QNs and PNs have been extensively studied. However, to the best of our knowledge, there is only one previous work that considers the concurrent simulation of QPNs [Juergens1997]. This work focuses on comparing different synchronization algorithms and largely excludes lookahead calculation and net decomposition. In this thesis, we build upon and extend this work. For this purpose, we adapted and extended findings from QNs, PNs, and parallel simulation in general. We apply our findings to SimQPN, which is a sequential simulation engine for QPNs. Among other application areas, SimQPN is currently applied to online performance prediction, for which a speedup due to parallelization is desirable. We present a parallel SimQPN implementation that employs application-level and event-level parallelism. A validation ensures the functional correctness of the new parallel implementations. The parallelization of multiple runs enables almost linear speedup. We parallelized the execution of a single run by the use of a conservative barrier-based synchronization algorithm. The speedup for a single run depends on the characteristics of the model. Hence, a number of experiments on different net characteristics were conducted, showing that for certain models a superlinear speedup is possible.
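The barrier-based scheme described here can be sketched as follows (Python; the logical processes, window size, and events are invented, and this shows only the synchronization pattern, not SimQPN's engine): each logical process simulates its events up to the current window boundary, then all meet at a barrier before the window advances in lock-step.

import threading

# Minimal sketch: conservative, barrier-based synchronization between
# logical processes (LPs). Window size and events are invented.

N_LPS, WINDOW, END = 2, 10.0, 30.0
barrier = threading.Barrier(N_LPS)

# Hypothetical per-LP event lists: (timestamp, name), sorted by time.
events = [
    [(1.0, "arrive"), (12.0, "serve"), (25.0, "depart")],
    [(3.0, "arrive"), (14.0, "serve"), (27.0, "depart")],
]

def logical_process(lp_id):
    pending = list(events[lp_id])
    horizon = WINDOW
    while horizon <= END:
        # Process only events that are safe, i.e. below the horizon.
        while pending and pending[0][0] < horizon:
            t, name = pending.pop(0)
            print(f"LP{lp_id} @ {t}: {name}")
        barrier.wait()        # all LPs reach the window boundary
        horizon += WINDOW     # then the window advances in lock-step

threads = [threading.Thread(target=logical_process, args=(i,)) for i in range(N_LPS)]
for th in threads: th.start()
for th in threads: th.join()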
[5] Fabian Brosig, Fabian Gorsler, Nikolaus Huber, and Samuel Kounev. Evaluating Approaches for Performance Prediction in Virtualized Environments (Short Paper). In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2013), San Francisco, USA, August 14-16, 2013. [ bib | .pdf ]
[6] Rouven Krebs, Alexander Wert, and Samuel Kounev. Multi-Tenancy Performance Benchmark for Web Application Platforms (Industrial Track). In Proceedings of the 13th International Conference on Web Engineering (ICWE 2013), Aalborg, Denmark, July 8-12, 2013. Aalborg University, Denmark, Springer-Verlag. July 2013. [ bib | .pdf ]
[7] Fabian Gorsler. Online Performance Queries for Architecture-Level Performance Models. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, July 2013. [ bib | .pdf ]
[8] Lorenzo Blasi, Gunnar Brataas, Michael Boniface, Joe Butler, Francesco D'andria, Michel Drescher, Ricardo Jimenez, Klaus Krogmann, George Kousiouris, Bastian Koller, Giada Landi, Francesco Matera, Andreas Menychtas, Karsten Oberle, Stephen Phillips, Luca Rea, Paolo Romano, Michael Symonds, and Wolfgang Ziegler. Cloud Computing Service Level Agreements - Exploitation of Research Results. European Commission Directorate General Communications Networks, Content and Technology, June 2013. [ bib | http | Abstract ]
The rapid evolution of the cloud market is leading to the emergence of new services, new ways for service provisioning and new interaction and collaboration models both amongst cloud providers and service ecosystems exploiting cloud resources. Service Level Agreements (SLAs) govern the aforementioned relationships by defining the terms of engagement for the participating entities.
[9] Nikolas Roman Herbst, Samuel Kounev, and Ralf Reussner. Elasticity in Cloud Computing: What it is, and What it is Not (Short Paper). In Proceedings of the 10th International Conference on Autonomic Computing (ICAC 2013), San Jose, CA, June 24-28, 2013. USENIX. June 2013, Acceptance Rate (Short Paper): 36.9%. [ bib | slides | http | .pdf | Abstract ]
Originating from the field of physics and economics, the term elasticity is nowadays heavily used in the context of cloud computing. In this context, elasticity is commonly understood as the ability of a system to automatically provision and de-provision computing resources on demand as workloads change. However, elasticity still lacks a precise definition as well as representative metrics coupled with a benchmarking methodology to enable comparability of systems. Existing definitions of elasticity are largely inconsistent and unspecific, leading to confusion in the use of the term and its differentiation from related terms such as scalability and efficiency; the proposed measurement methodologies do not provide means to quantify elasticity without mixing it with efficiency or scalability aspects. In this short paper, we propose a precise definition of elasticity and analyze its core properties and requirements, explicitly distinguishing it from related terms such as scalability, efficiency, and agility. Furthermore, we present a set of appropriate elasticity metrics and sketch a new elasticity-tailored benchmarking methodology addressing the special requirements on workload design and calibration.
[10] Benjamin Klatt and Martin Küster. Improving Product Copy Consolidation by Component-Architecture-Based Difference and Variation Point Analysis. In 9th International ACM Sigsoft Conference on the Quality of Software Architectures (QoSA'13), June 2013, pages 117-122. ACM, New York, NY, USA. June 2013. [ bib | .pdf ]
[11] Aleksandar Milenkoski, Samuel Kounev, Alberto Avritzer, Nuno Antunes, and Marco Vieira. On Benchmarking Intrusion Detection Systems in Virtualized Environments. Technical Report SPEC-RG-2013-002 v.1.0, SPEC Research Group - IDS Benchmarking Working Group, Standard Performance Evaluation Corporation (SPEC), 7001 Heritage Village Plaza Suite 225, Gainesville, VA 20155, June 2013. [ bib | .pdf | Abstract ]
Modern intrusion detection systems (IDSes) for virtualized environments are deployed in the virtualization layer with components inside the virtual machine monitor (VMM) and the trusted host virtual machine (VM). Such IDSes can monitor at the same time the network and host activities of all guest VMs running on top of a VMM while being isolated from malicious users of these VMs. We refer to IDSes for virtualized environments as VMM-based IDSes. In this work, we analyze state-of-the-art intrusion detection techniques applied in virtualized environments and architectures of VMM-based IDSes. Further, we identify challenges that apply specifically to benchmarking VMM-based IDSes, focusing on workloads and metrics. For example, we discuss the challenge of defining representative baseline benign workload profiles as well as the challenge of defining malicious workloads containing attacks targeted at the VMM. We also discuss the impact of on-demand resource provisioning features of virtualized environments (e.g., CPU and memory hotplugging, memory ballooning) on IDS benchmarking measures such as capacity and attack detection accuracy. Finally, we outline future research directions in the area of benchmarking VMM-based IDSes and of intrusion detection in virtualized environments in general.
[12] Andreas Rentschler, Qais Noorshams, Lucia Happe, and Ralf Reussner. Interactive Visual Analytics for Efficient Maintenance of Model Transformations. In Proceedings of the 6th International Conference on Model Transformation (ICMT '13), Budapest, Hungary, Keith Duddy and Gerti Kappel, editors, June 2013, volume 7909 of Lecture Notes in Computer Science, pages 141-157. Springer-Verlag Berlin Heidelberg. June 2013, Acceptance Rate: 20.7%. [ bib | DOI | http | .pdf ]
[13] Samuel Kounev, Kai Sachs, and Piotr Rygielski. SPEC Research Group Newsletter, vol. 1 no. 2, June 2013. Published by Standard Performance Evaluation Corporation (SPEC). [ bib | .html | .pdf ]
[14] Zoya Durdik, Anne Koziolek, and Ralf Reussner. How the Understanding of the Effects of Design Decisions Informs Requirements Engineering. In Proceedings of the 2nd International Workshop on the Twin Peaks of Requirements and Architecture (TwinPeaks), May 2013, pages 14-18. [ bib | DOI ]
[15] Benjamin Klatt, Martin Küster, Klaus Krogmann, and Oliver Burkhardt. A Change Impact Analysis Case Study: Replacing the Input Data Model of SoMoX. In 15th Workshop Software-Reengineering (WSR'13), May 2013. Bad Honnef, Germany. [ bib | .pdf ]
[16] Benjamin Klatt, Martin Küster, Klaus Krogmann, and Oliver Burkhardt. A Change Impact Analysis Case Study: Replacing the Input Data Model of SoMoX. Softwaretechnik-Trends, 33(2):53-54, May 2013, Köllen Druck & Verlag GmbH. [ bib | .pdf | Abstract ]
Change impact analysis aims to provide insights about the efforts and effects to be expected from a change, and to prevent missed adaptations. However, the benefit of applying an analysis in a given scenario is not clear. Only a few studies about change impact analysis approaches compare the actual effort spent implementing the change with the prediction of the analysis. To gain more insight about change impact analysis benefits, we have performed a case study on changing a software's input data model. We have applied two analyses, using the Java compiler and a dependency-graph-based approach, before implementing the actual change. In this paper, we present the results, showing that i) syntactically required changes have been predicted adequately, ii) changes required for semantic correctness required the major effort but were not predicted at all, and iii) tool support for change impact analysis still needs to be improved.
[17] Samuel Kounev, Christoph Rathfelder, and Benjamin Klatt. Modeling of Event-based Communication in Component-based Architectures: State-of-the-Art and Future Directions. Electronic Notes in Theoretical Computer Science (ENTCS), 295:3-9, May 2013, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | slides | http | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. Such systems typically have stringent requirements for performance and scalability as they provide business and mission critical services. While the use of event-based communication enables loosely-coupled interactions between components and leads to improved system scalability, it makes it much harder for developers to estimate the system's behavior and performance under load due to the decoupling of components and control flow. We present an overview of our approach enabling the modeling and performance prediction of event-based systems at the architecture level. Applying a model-to-model transformation, our approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques. The results of two real-world case studies demonstrate the effectiveness, practicability, and accuracy of the proposed modeling and prediction approach.
[18] Manuel Loesch and Rouven Krebs. Conceptual Approach for Performance Isolation in Multi-Tenant Systems (Short Paper). In Proceedings of the 3rd International Conference on Cloud Computing and Service Science (CLOSER 2013), Aachen, Germany, May 8-10, 2013. RWTH Aachen, Germany, SciTePress. May 2013. [ bib | .pdf ]
[19] Axel Busch. Workload Characterization for I/O Performance Analysis on IBM System z. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, May 2013. [ bib | .pdf ]
[20] Nikolas Roman Herbst, Nikolaus Huber, Samuel Kounev, and Erich Amrehn. Self-Adaptive Workload Classification and Forecasting for Proactive Resource Provisioning. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, April 21-24, 2013, pages 187-198. ACM, New York, NY, USA. April 2013. [ bib | DOI | slides | http | .pdf | Abstract ]
As modern enterprise software systems become increasingly dynamic, workload forecasting techniques are gaining in importance as a foundation for online capacity planning and resource management. Time series analysis covers a broad spectrum of methods to calculate workload forecasts based on history monitoring data. Related work in the field of workload forecasting mostly concentrates on evaluating specific methods and their individual optimisation potential or on predicting Quality-of-Service (QoS) metrics directly. As a basis, we present a survey on established forecasting methods of the time series analysis concerning their benefits and drawbacks and group them according to their computational overheads. In this paper, we propose a novel self-adaptive approach that selects suitable forecasting methods for a given context based on a decision tree and direct feedback cycles together with a corresponding implementation. The user needs to provide only his general forecasting objectives. In several experiments and case studies based on real world workload traces, we show that our implementation of the approach provides continuous and reliable forecast results at run-time. The results of this extensive evaluation show that the relative error of the individual forecast points is significantly reduced compared to statically applied forecasting methods, e.g. in an exemplary scenario on average by 37%. In a case study, between 55% and 75% of the violations of a given service level agreement can be prevented by applying proactive resource provisioning based on the forecast results of our implementation.
[21] Martin Küster and Benjamin Klatt. Generation App - App Generation. VKSI Magazin, (8), April 2013, Karlsruhe, Germany. [ bib | http | .pdf ]
[22] Samuel Kounev, Stamatia Rizou, Steffen Zschaler, Spiros Alexakis, Tomas Bures, Jean-Marc Jézéquel, Dimitrios Kourtesis, and Stelios Pantelopoulos. RELATE: A Research Training Network on Engineering and Provisioning of Service-Based Cloud Applications. In International Workshop on Hot Topics in Cloud Services (HotTopiCS 2013), Prague, Czech Republic, April 20-21, 2013. [ bib ]
[23] Aleksandar Milenkoski, Alexandru Iosup, Samuel Kounev, Kai Sachs, Piotr Rygielski, Jason Ding, Walfredo Cirne, and Florian Rosenberg. Cloud Usage Patterns: A Formalism for Description of Cloud Usage Scenarios. Technical Report SPEC-RG-2013-001 v.1.0.1, SPEC Research Group - Cloud Working Group, Standard Performance Evaluation Corporation (SPEC), 7001 Heritage Village Plaza Suite 225, Gainesville, VA 20155, April 2013. [ bib | .pdf | Abstract ]
Cloud computing is becoming an increasingly lucrative branch of the existing information and communication technologies (ICT). Enabling a debate about cloud usage scenarios can help with attracting new customers, sharing best-practices, and designing new cloud services. In contrast to previous approaches, which have attempted mainly to formalize the common service delivery models (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service), in this work, we propose a formalism for describing common cloud usage scenarios referred to as cloud usage patterns. Our formalism takes a structuralist approach allowing decomposition of a cloud usage scenario into elements corresponding to the common cloud service delivery models. Furthermore, our formalism considers several cloud usage patterns that have recently emerged, such as hybrid services and value chains in which mediators are involved, also referred to as value chains with mediators. We propose a simple yet expressive textual and visual language for our formalism, and we show how it can be used in practice for describing a variety of real-world cloud usage scenarios. The scenarios for which we demonstrate our formalism include resource provisioning of global providers of infrastructure and/or platform resources, online social networking services, user-data processing services, online customer and ticketing services, online asset management and banking applications, CRM (Customer Relationship Management) applications, and online social gaming applications.
[24] Piotr Rygielski, Steffen Zschaler, and Samuel Kounev. A Meta-Model for Performance Modeling of Dynamic Virtualized Network Infrastructures (Work-In-Progress Paper). In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, April 21-24, 2013, pages 327-330. ACM, New York, NY, USA. April 2013. [ bib | http | .pdf ]
[25] Samuel Kounev, Steffen Zschaler, and Kai Sachs, editors. Proceedings of the 2013 International Workshop on Hot Topics in Cloud Services (HotTopiCS 2013). ACM, April 2013. [ bib ]
[26] Benjamin Klatt, Martin Küster, and Klaus Krogmann. A Graph-Based Analysis Concept to Derive a Variation Point Design from Product Copies. In Proceedings of the 1st International Workshop on Reverse Variability Engineering (REVE'13), March 2013, pages 1-8. Genoa, Italy. [ bib | .pdf ]
[27] Christoph Rathfelder, Benjamin Klatt, Kai Sachs, and Samuel Kounev. Modeling event-based communication in component-based software architectures for performance predictions. Software and Systems Modeling, 13(4):1291-1317, March 2013, Springer-Verlag. [ bib | DOI | http | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. Such systems typically have stringent requirements for performance and scalability as they provide business and mission critical services. While the use of event-based communication enables loosely-coupled interactions between components and leads to improved system scalability, it makes it much harder for developers to estimate the system's behavior and performance under load due to the decoupling of components and control flow. In this paper, we present our approach enabling the modeling and performance prediction of event-based systems at the architecture level. Applying a model-to-model transformation, our approach integrates platform-specific performance influences of the underlying middleware while enabling the use of different existing analytical and simulation-based prediction techniques. In summary, the contributions of this paper are: (1) the development of a meta-model for event-based communication at the architecture level, (2) a platform aware model-to-model transformation, and (3) a detailed evaluation of the applicability of our approach based on two representative real-world case studies. The results demonstrate the effectiveness, practicability and accuracy of the proposed modeling and prediction approach.
[28] Misha Strittmatter. Feedback-driven concurrency improvement and refinement of performance models. Diploma thesis, Karlsruhe Institute of Technology (KIT), Germany, March 2013. [ bib | slides | .pdf ]
[29] Piotr Rygielski and Samuel Kounev. Network Virtualization for QoS-Aware Resource Management in Cloud Data Centers: A Survey. PIK - Praxis der Informationsverarbeitung und Kommunikation, 36(1):55-64, February 2013, de Gruyter. [ bib | DOI | http | .pdf ]
[30] Simon Spinner, Samuel Kounev, Xiaoyun Zhu, and Mustafa Uysal. Towards Online Performance Model Extraction in Virtualized Environments (position paper). In Proceedings of the 8th Workshop on Models @ Run.time (MRT 2013), Nelly Bencomo, Robert France, Sebastian Götz, and Bernhard Rumpe, editors, Miami, Florida, USA, 2013, pages 89-95. CEUR-WS. 2013. [ bib | .pdf | Abstract ]
Virtualization increases the complexity and dynamics of modern software architectures, making it a major challenge to manage the end-to-end performance of applications. Architecture-level performance models can help here as they provide the modeling power and analysis flexibility to predict the performance behavior of applications under varying workloads and configurations. However, the construction of such models is a complex and time-consuming task. In this position paper, we discuss how the existing concept of virtual appliances can be extended to automate the extraction of architecture-level performance models during system operation.
[31] Henning Groenda. Certifying Software Component Performance Specifications, volume 11 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, 2013. [ bib | DOI | http | http | Abstract ]
In component-based software engineering, performance prediction approaches support the design of business information systems on the architectural level. They are based on behavior specifications of components. This work presents a round-trip approach for using, assessing, and certifying the accuracy of parameterized, probabilistic, deterministic, and concurrent performance specifications. Its applicability and effectiveness are demonstrated using the CoCoME benchmark.
[32] Aldeida Aleti, Barbora Buhnova, Lars Grunske, Anne Koziolek, and Indika Meedeniya. Software architecture optimization methods: A systematic literature review. IEEE Transactions on Software Engineering, 39(5):658-683, 2013, IEEE. [ bib | DOI | .pdf | Abstract ]
Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.
[33] Alberto Avritzer, Sindhu Suresh, Daniel Sadoc Menasché, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Kishor Trivedi, Lucia Happe, and Anne Koziolek. Survivability models for the assessment of smart grid distribution automation network designs. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), Prague, Czech Republic, 2013, ICPE '13, pages 241-252. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[34] Steffen Becker, Raffaela Mirandola, Lucia Happe, and Catia Trubiani. Towards a methodology driven by dependencies of quality attributes for QoS-based analysis. In Proceedings of the 4th Joint ACM/SPEC International Conference on Performance Engineering (ICPE '13), Work-In-Progress Track, Prague, Czech Republic, 2013. ACM, New York, NY, USA. 2013. [ bib ]
[35] Erik Burger. Flexible Views for View-Based Model-Driven Development. In Proceedings of the 18th international doctoral symposium on Components and architecture, Vancouver, British Columbia, Canada, 2013, WCOP '13, pages 25-30. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[36] Erik Burger. Flexible Views for Rapid Model-Driven Development. In Proceedings of the 1st Workshop on View-Based, Aspect-Oriented and Orthographic Software Modelling, Montpellier, France, 2013, VAO '13, pages 1:1-1:5. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[37] Klaus Krogmann, Markus Kremer, Ralf Knobloch, Siegfried Florek, and Rainer Schmidt. Serviceorientierte Architekturen in der Cloud - Leitfaden und Nachschlagewerk, chapter Technische Konzepte, pages 64-77. BITKOM, version 1.0 edition, 2013. [ bib | http | .pdf | Abstract ]
This chapter examines important technical concepts that are essential for cloud computing. An understanding of the problems solved by these concepts is necessary in order to assess the various forms and variants of cloud offerings.
[38] Zoya Durdik and Ralf Reussner. On the Appropriate Rationale for Using Design Patterns and Pattern Documentation. In Proceedings of the 9th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2013), 2013. [ bib ]
[39] Lucia Happe, Barbora Buhnova, and Ralf Reussner. Stateful component-based performance models. Software & Systems Modeling, 13(4):1319-1343, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
The accuracy of performance-prediction models is crucial for widespread adoption of performance prediction in industry. One of the essential accuracy-influencing aspects of software systems is the dependence of system behaviour on a configuration, context or history related state of the system, typically reflected with a (persistent) system attribute. Even in the domain of component-based software engineering, the presence of state-reflecting attributes (the so-called internal states) is a natural ingredient of the systems, implying the existence of stateful services, stateful components and stateful systems as such. Currently, there is no consensus on the definition or method to include state-related information in component-based prediction models. Besides the task of identifying and localising different types of stateful information across a component-based software architecture, the issue is to balance the expressiveness and complexity of prediction models via an effective abstraction of state modelling. In this paper, we identify and classify stateful information in component-based software systems, study the performance impact of the individual state categories, and discuss the costs of their modelling in terms of the increased model size. The observations are formulated into a set of heuristics guiding software engineers in state modelling. Finally, the practical effect of state modelling on software performance is evaluated on a real-world case study, the SPECjms2007 Benchmark. The observed deviation between measurements and predictions was significantly decreased by more precise models of stateful dependencies.
[40] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Deriving performance-relevant infrastructure properties through model-based experiments with ginpex. Software & Systems Modeling, pages 1-21, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
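As a rough illustration of the experiment-template idea (not the Ginpex metamodel or its generated experiment code), the sketch below runs one predefined micro-experiment to quantify a simple infrastructure property, the effective granularity of the platform's sleep timer:

    # Hypothetical sketch of a goal-oriented infrastructure experiment,
    # loosely inspired by the template idea above; it is not Ginpex code.
    import time
    import statistics

    def timer_granularity_experiment(requested_ms=1.0, repetitions=50):
        """Measure how long a nominal 1 ms sleep actually takes, which
        approximates the platform's effective timer granularity."""
        observed = []
        for _ in range(repetitions):
            start = time.perf_counter()
            time.sleep(requested_ms / 1000.0)
            observed.append((time.perf_counter() - start) * 1000.0)
        return {
            "requested_ms": requested_ms,
            "median_ms": statistics.median(observed),
            "p95_ms": sorted(observed)[int(0.95 * len(observed)) - 1],
        }

    # The detected property could then be fed into a performance model
    # as a platform-specific parameter.
    print(timer_granularity_experiment())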
[41] Michael Hauck. Automated Experiments for Deriving Performance-relevant Properties of Software Execution Environments. PhD thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2013. [ bib | http | http | Abstract ]
The software execution environment can play a crucial role when analyzing the performance of a software system. However, detecting execution environment properties and integrating such properties into performance analyses is a manual, error-prone task that requires expert knowledge on the execution environment. In this thesis, a novel approach for detecting performance-relevant properties of the software execution environment is presented. These properties are automatically detected using predefined experiments and integrated into performance prediction tools. Based on a metamodel for experiment specification, the approach is used to design experiments for detecting different CPU, OS scheduling, and virtualization properties. This thesis also includes different case studies which demonstrate the applicability of the approach.
[42] Christoph Heger. Systematic guidance in solving performance and scalability problems. In WCOP '13: Proceedings of the 18th international doctoral symposium on Components and Architecture, Vancouver, Canada, 2013. ACM, New York, NY, USA. 2013. [ bib | .pdf ]
[43] Christoph Heger, Jens Happe, and Roozbeh Farahbod. Automated root cause isolation of performance regressions during software development. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 27-38. ACM, New York, NY, USA. 2013, Best Paper Award nominee. [ bib | DOI | http | .pdf ]
[44] Jörg Henss, Philipp Merkle, and Ralf H. Reussner. Poster abstract: The OMPCM simulator for model-based software performance prediction. In Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques, Cannes, France, 2013. [ bib ]
[45] Georg Hinkel. An approach to maintainable model transformations using an internal DSL. Master's thesis, Karlsruhe Institute of Technology, 2013. [ bib | .pdf | Abstract ]
In recent years, model-driven software development (MDSD) has gained popularity among both industry and academia. MDSD aims to generate traditional software artifacts from models. This generation process is realized in multiple steps: before being transformed into software artifacts, models are transformed into models of other metamodels. Such model transformations are supported by dedicated model transformation languages. In many cases, these are entirely new languages (external domain-specific languages, DSLs) that allow a clearer and more concise representation of abstractions. On the other hand, their tool support is often rather poor, and transformation developers are rarely familiar with the transformation language. A possible solution to this problem is to extend the programming language typically used by developers (mostly Java or C#) with the required abstractions. This can be achieved with an internal DSL. Thus, concepts of the host language can easily be reused while still creating the necessary abstractions to ease the development of model transformations. Furthermore, the tool support for the host language can be reused for the DSL. In this master's thesis, NMF Transformations is presented, a framework and internal DSL for C#. It equips developers with the ability to specify model transformations in languages like C# without having to give up abstractions known from model transformation standards. Transformation developers get the full tool support provided for C#. The applicability of NMF Transformations, as well as its impact on quality attributes of model transformations, is evaluated in three case studies. Two of them come from the Transformation Tool Contest 2013 (TTC). With these case studies, NMF Transformations is compared with other approaches to model transformation. A further case study comes from ABB Corporate Research and demonstrates the advantages of NMF Transformations in an industrial scenario where aspects like testability gain special importance.
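NMF Transformations itself is a C# internal DSL; to keep all examples on this page in one language, the following Python sketch only illustrates the underlying pattern of internal-DSL transformation frameworks: rules expressed as host-language classes, plus a trace that ensures each source element is transformed exactly once (which also makes cyclic models safe). The rule API here is invented for illustration and is not the NMF Transformations API:

    # Conceptual sketch only: an internal-DSL-style model transformation
    # in the host language, with rules as classes and a trace that
    # guarantees each source element is transformed exactly once.

    class Transformation:
        def __init__(self):
            self.trace = {}          # (rule, source) -> target

        def transform(self, rule, source):
            key = (rule.__name__, id(source))
            if key not in self.trace:            # reuse via trace
                target = rule.create(source)
                self.trace[key] = target
                rule.initialize(self, source, target)
            return self.trace[key]

    class Place2State:
        @staticmethod
        def create(place):
            return {"kind": "state", "name": place["name"], "successors": []}

        @staticmethod
        def initialize(ctx, place, state):
            for succ in place["successors"]:
                state["successors"].append(ctx.transform(Place2State, succ))

    # Toy Petri-net-like input: p1 -> p2 -> p1 (a cycle, handled by the trace).
    p1 = {"name": "p1", "successors": []}
    p2 = {"name": "p2", "successors": [p1]}
    p1["successors"].append(p2)

    ctx = Transformation()
    s1 = ctx.transform(Place2State, p1)
    print(s1["name"], "->", [s["name"] for s in s1["successors"]])

Because the trace is consulted before a rule fires, the cyclic input terminates and the target model shares the already-created state instead of recursing forever; this correspondence-tracking idea is common to most rule-based transformation languages.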
[46] Georg Hinkel, Thomas Goldschmidt, and Lucia Happe. An NMF solution for the Petri Nets to State Charts case study at the TTC 2013. EPTCS, 135:95-100, 2013. [ bib | DOI | .pdf | Abstract ]
Software systems are getting more and more complex. Model-driven engineering (MDE) offers ways to handle such increased complexity by lifting development to a higher level of abstraction. A key part in MDE are transformations that transform any given model into another. These transformations are used to generate all kinds of software artifacts from models. However, there is little consensus about the transformation tools. Thus, the Transformation Tool Contest (TTC) 2013 aims to compare different transformation engines. This is achieved through three different cases that have to be tackled. One of these cases is the Petri Net to State Chart case. A solution has to transform a Petri Net to a State Chart and has to derive a hierarchical structure within the State Chart. This paper presents the solution for this case using NMF Transformations as transformation engine.
[47] Georg Hinkel, Thomas Goldschmidt, and Lucia Happe. An NMF solution for the Flowgraphs case at the TTC 2013. EPTCS, 135:37-42, 2013. [ bib | DOI | .pdf | Abstract ]
Software systems are getting more and more complex. Model-driven engineering (MDE) offers ways to handle such increased complexity by lifting development to a higher level of abstraction. A key part in MDE are transformations that transform any given model into another. These transformations are used to generate all kinds of software artifacts from models. However, there is little consensus about the transformation tools. Thus, the Transformation Tool Contest (TTC) 2013 aims to compare different transformation engines. This is achieved through three different cases that have to be tackled. One of these cases is the Flowgraphs case. A solution has to transform a Java code model into a simplified version and has to derive control and data flow. This paper presents the solution for this case using NMF Transformations as transformation engine.
[48] Martin Küster. Architecture-centric modeling of design decisions for validation and traceability. In Proceedings of the 7th European Conference on Software Architecture (ECSA '13), Khalil Drira, editor, Montpellier, France, 2013, volume 7957 of Lecture Notes in Computer Science, pages 184-191. Springer Berlin Heidelberg. 2013. [ bib | DOI | http ]
[49] Martin Küster, Benjamin Klatt, Eike Kohnert, Steffen Brandt, and Johannes Tysiak. Apps aus Kästchen und Linien - Modellgetriebene Multi-Plattformentwicklung mobiler Anwendungen. OBJEKTspektrum, (1), 2013. [ bib | http ]
[50] Marco Konersmann, Zoya Durdik, Michael Goedicke, and Ralf Reussner. Towards Architecture-Centric Evolution of Long-Living Systems (The ADVERT Approach). In Proceedings of the 9th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2013), 2013. [ bib ]
[51] Anne Koziolek, Danilo Ardagna, and Raffaela Mirandola. Hybrid multi-attribute QoS optimization in component based software systems. Journal of Systems and Software, 86(10):2542-2558, 2013, Elsevier. Special Issue on Quality Optimization of Software Architecture and Design Specifications. [ bib | DOI | http | .pdf | Abstract ]
Design decisions for complex, component-based systems impact multiple quality of service (QoS) properties. Often, means to improve one quality property deteriorate another one. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization where the objectives represent different quality attributes. The aim of these techniques is to devise a set of solutions, each of which assures an optimal trade-off between the conflicting qualities. Our previous work proposed a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify an optimal set of design alternatives with respect to performance and costs. This paper extends this approach to more QoS properties by providing analytical algorithms for availability-cost optimization and three-dimensional availability-performance-cost optimization. We demonstrate the use of this approach on a case study, showing that the analytical step provides a better-than-random starting population for the evolutionary optimization, which led to a speed-up of 28% in the availability-cost case.
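A toy sketch of the hybrid idea follows: an evolutionary search is seeded with analytically derived candidates instead of purely random ones. The one-dimensional design space, the cost model, and all numbers are invented for illustration:

    # Toy sketch (assumptions throughout): seeding an evolutionary search
    # with analytically pre-optimized candidates, in the spirit of the
    # hybrid approach above. Design space and objective are invented.
    import random

    random.seed(0)

    def cost(x):          # invented cost model over one design variable
        return (x - 7) ** 2 + 3

    def evolve(population, generations=30):
        for _ in range(generations):
            population.sort(key=cost)
            parents = population[: len(population) // 2]
            population = parents + [p + random.uniform(-1, 1) for p in parents]
        return min(population, key=cost)

    random_start = [random.uniform(0, 100) for _ in range(10)]
    # "Analytical" start: candidates from a closed-form approximation of
    # the optimum (here simply values near the toy model's known minimum).
    analytical_start = [7 + random.uniform(-2, 2) for _ in range(10)]

    print("random init ->", cost(evolve(random_start)))
    print("seeded init ->", cost(evolve(analytical_start)))

With the same evolution budget, the seeded population starts far closer to the optimum, which is the effect the paper quantifies for the availability-cost case.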
[52] Anne Koziolek, Alberto Avritzer, Sindhu Suresh, Daniel Sadoc Menasche, Kishor Trivedi, and Lucia Happe. Design of distribution automation networks using survivability modeling and power flow equations. In Software Reliability Engineering (ISSRE), 2013 IEEE 24th International Symposium on, 2013, pages 41-50. [ bib | DOI | .pdf ]
[53] Anne Koziolek, Robert L. Nord, and Philippe Kruchten, editors. QoSA '13: Proceedings of the 9th International ACM Sigsoft Conference on Quality of Software Architectures. ACM, New York, NY, USA, 2013. [ bib | http ]
[54] Anne Koziolek. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes, volume 7 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, Karlsruhe, 2013. [ bib | DOI | http | http | Abstract ]
Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.
[55] Max E. Kramer, Jacques Klein, Jim R. H. Steel, Brice Morin, Jörg Kienzle, Olivier Barais, and Jean-Marc Jézéquel. Achieving practical genericity in model weaving through extensibility. In Theory and Practice of Model Transformations, Keith Duddy and Gerti Kappel, editors, volume 7909 of Lecture Notes in Computer Science, pages 108-124. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[56] Max E. Kramer, Erik Burger, and Michael Langhammer. View-centric engineering with synchronized heterogeneous models. In Proceedings of the 1st Workshop on View-Based, Aspect-Oriented and Orthographic Software Modelling, Montpellier, France, 2013, VAO '13, pages 5:1-5:6. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[57] Michael Langhammer. Co-evolution of component-based architecture-model and object-oriented source code. In Proceedings of the 18th international doctoral symposium on Components and architecture, 2013, pages 37-42. ACM. [ bib ]
[58] Michael Langhammer, Sebastian Lehrig, and Max E. Kramer. Reuse and configuration for code generating architectural refinement transformations. In Proceedings of the 1st Workshop on View-Based, Aspect-Oriented and Orthographic Software Modelling, Montpellier, France, 2013, VAO '13, pages 6:1-6:5. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[59] Philipp Merkle. Guiding transaction design through architecture-level performance and data consistency prediction. In Software Engineering 2013-Workshopband, 2013, volume P-215. Doctoral Symposium. [ bib | .pdf ]
[60] Aleksandar Milenkoski, Bryan D. Payne, Nuno Antunes, Marco Vieira, and Samuel Kounev. HInjector: Injecting Hypercall Attacks for Evaluating VMI-based Intrusion Detection Systems (poster paper). In The 2013 Annual Computer Security Applications Conference (ACSAC 2013), New Orleans, Louisiana, USA, 2013. Applied Computer Security Associates (ACSA), Maryland, USA. 2013. [ bib | .pdf ]
[61] Phu H. Nguyen, Jacques Klein, Max E. Kramer, and Yves Le Traon. A systematic review of model-driven security. In Proceedings of the 2013 20th Asia-Pacific Software Engineering Conference, 2013, volume 1, pages 432-441. IEEE Computer Society. 2013. [ bib | DOI | Abstract ]
To face continuously growing security threats and requirements, sound methodologies for constructing secure systems are required. In this context, Model-Driven Security (MDS) emerged more than a decade ago as a specialized Model-Driven Engineering approach for supporting the development of secure systems. MDS aims at improving the productivity of the development process and the quality of the resulting secure systems, with models as the main artifact. This paper presents how we systematically examined existing published work in MDS, and its results. The systematic review process, which is based on a formally designed review protocol, allowed us to identify, classify, and evaluate different MDS approaches. To be more specific, from thousands of relevant papers found, a final set of the most relevant MDS publications has been identified, strictly selected, and reviewed. We present a taxonomy for MDS, which is used to synthesize data in order to classify and evaluate the selected MDS approaches. The results draw a wide picture of existing MDS research, showing the current status of the key aspects in MDS as well as the most relevant MDS approaches identified. We discuss the main limitations of the existing MDS approaches and suggest potential research directions based on these insights.
[62] Qais Noorshams, Kiana Rostami, Samuel Kounev, Petr Tůma, and Ralf Reussner. I/O Performance Modeling of Virtualized Storage Systems. In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, San Francisco, USA, 2013, MASCOTS '13, pages 121-130. Acceptance Rate (Full Paper): 44/163 = 27%. [ bib | DOI | http | .pdf ]
[63] Qais Noorshams, Dominik Bruhn, Samuel Kounev, and Ralf Reussner. Predictive Performance Modeling of Virtualized Storage Systems using Optimized Statistical Regression Techniques. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 283-294. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[64] Qais Noorshams, Andreas Rentschler, Samuel Kounev, and Ralf Reussner. A Generic Approach for Architecture-level Performance Modeling and Prediction of Virtualized Storage Systems. In Proceedings of the ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 339-342. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[65] Qais Noorshams, Samuel Kounev, and Ralf Reussner. Experimental Evaluation of the Performance-Influencing Factors of Virtualized Storage Systems. In Computer Performance Engineering. 9th European Workshop, EPEW 2012, Munich, Germany, July 30, 2012, and 28th UK Workshop, UKPEW 2012, Edinburgh, UK, July 2, 2012, Revised Selected Papers, Mirco Tribastone and Stephen Gilmore, editors, volume 7587 of Lecture Notes in Computer Science, pages 63-79. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[66] Christoph Rathfelder. Modelling Event-Based Interactions in Component-Based Architectures for Quantitative System Evaluation, volume 10 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, Karlsruhe, Germany, 2013. [ bib | http | http ]
[67] Ralf Reussner, Steffen Becker, Anne Koziolek, and Heiko Koziolek. Perspectives on the Future of Software Engineering, chapter An Empirical Investigation of the Component-Based Performance Prediction Method Palladio, pages 191-207. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[68] Robert Vaupel, Qais Noorshams, Samuel Kounev, and Ralf Reussner. Using Queuing Models for Large System Migration Scenarios - An Industrial Case Study with IBM System z. In Computer Performance Engineering. 10th European Workshop, EPEW 2013, Venice, Italy, September 16-17, 2013. Proceedings, Maria Simonetta Balsamo, William J. Knottenbelt, and Andrea Marin, editors, volume 8168 of Lecture Notes in Computer Science, pages 263-275. Springer Berlin Heidelberg, 2013. [ bib | DOI | http | .pdf ]
[69] Robert Vaupel. High Availability and Scalability of Mainframe Environments. KIT Scientific Publishing, Karlsruhe, 2013. [ bib | http | http | Abstract ]
Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability, and highlights their presence on different levels within the hardware and software stack to satisfy the needs of large IT organizations.
[70] Christian Vogel, Heiko Koziolek, Thomas Goldschmidt, and Erik Burger. Rapid Performance Modeling by Transforming Use Case Maps to Palladio Component Models. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 101-112. ACM, New York, NY, USA. 2013. [ bib | DOI | http | .pdf ]
[71] Christian Weiss, Dennis Westermann, Christoph Heger, and Martin Moser. Systematic performance evaluation based on tailored benchmark applications. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering, Prague, Czech Republic, 2013, ICPE '13, pages 411-420. ACM, New York, NY, USA. 2013, Industrial Track. [ bib | DOI | http | .pdf ]
[72] Alexander Wert. Performance problem diagnostics by systematic experimentation. In Proceedings of the 18th international doctoral symposium on Components and architecture, Vancouver, British Columbia, Canada, 2013, WCOP '13, pages 1-6. ACM, New York, NY, USA. 2013. [ bib | DOI | http ]
[73] Alexander Wert, Jens Happe, and Lucia Happe. Supporting swift reaction: automatically uncovering performance problems by systematic experiments. In Proceedings of the 2013 International Conference on Software Engineering, San Francisco, CA, USA, 2013, ICSE '13, pages 552-561. IEEE Press, Piscataway, NJ, USA. 2013. [ bib | http ]
[74] Dennis Westermann, Jens Happe, and Roozbeh Farahbod. An experiment specification language for goal-driven, automated performance evaluations. In Proc. of the ACM Symposium on Applied Computing, SAC 2013, 2013. To appear. [ bib ]
[75] Rouven Krebs, Christof Momm, and Samuel Kounev. Metrics and Techniques for Quantifying Performance Isolation in Cloud Environments. Elsevier Science of Computer Programming Journal (SciCo), 2013, Elsevier B.V. In print. [ bib | .pdf ]
[76] Philipp Merkle. Predicting transaction quality for balanced data consistency and performance. In Proceedings of the 18th international doctoral symposium on Components and architecture, Vancouver, British Columbia, Canada, 2013, WCOP '13, pages 13-18. ACM, New York, NY, USA. 2013. [ bib | DOI | http ]
[77] Oliver Hummel and Dominic Seiffert. Bridging the gap between object-oriented development practices and software component reuse. In DReMer '13 - International Workshop on Designing Reusable Components and Measuring Reusability, 2013. [ bib ]
[78] Dominic Seiffert and Oliver Hummel. Improving the runtime-processing of test cases for component adaptation. In Safe and Secure Software Reuse, pages 81-96. Springer Berlin Heidelberg, 2013. [ bib ]
[79] Oliver Hummel and Stefan Burger. A pragmatic means for measuring the complexity of source code ensembles. In 4th International Workshop on Emerging Trends in Software Metrics (WeTSOM 2013), 2013. [ bib | .pdf ]
[80] Werner Janjic, Oliver Hummel, Marcus Schumacher, and Colin Atkinson. An unabridged source code dataset for research in software reuse. In Mining Software Repositories (MSR), 2013. [ bib | .pdf ]
[81] Stefan Burger, Oliver Hummel, and Matthias Heinisch. Airbus cabin software. Software, IEEE, 30(1):21-25, 2013, IEEE. [ bib ]
[82] Oliver Hummel and Werner Janjic. Test-driven reuse: Key to improving precision of search engines for software reuse. In Finding Source Code on the Web for Remix and Reuse, pages 227-250. Springer New York, 2013. [ bib ]
[83] Oliver Hummel, Colin Atkinson, and Marcus Schumacher. Artifact representation techniques for large-scale software search engines. In Finding Source Code on the Web for Remix and Reuse, pages 81-101. Springer New York, 2013. [ bib ]
[84] Rouven Krebs, Manuel Loesch, and Samuel Kounev. Performance Isolation Framework for Multi-Tenant Applications. In Proceedings of the 3rd IEEE International Conference on Cloud and Green Computing (CGC 2013), Karlsruhe, Germany, 2013. [ bib ]
[85] Seyed Vahid Mohammadi, Samuel Kounev, Adrián Juan-Verdejo, and Bholanathsingh Surajbali. Soft Reservations: Uncertainty-Aware Resource Reservations in IaaS Environments. In Proceedings of the 3rd International Symposium on Business Modeling and Software Design (BMSD 2013), Noordwijkerhout, The Netherlands, 2013. [ bib | .pdf ]
[86] Reiner Jung, Robert Heinrich, and Eric Schmieders. Model-driven instrumentation with Kieker and Palladio to forecast dynamic applications. In Symposium on Software Performance, 2013, volume 1083, pages 99-108. CEUR. 2013. [ bib | .pdf ]
[87] Oliver Hummel and Robert Heinrich. Towards automated software project planning: extending Palladio for the simulation of software processes. In Symposium on Software Performance, 2013. [ bib | .pdf ]
[88] Wilhelm Hasselbring, Robert Heinrich, Reiner Jung, Andreas Metzger, Klaus Pohl, Ralf Reussner, and Eric Schmieders. iObserve: Integrated observation and modeling techniques to support adaptation and evolution of software systems. Technical Report No. 1309, Kiel University, Kiel, Germany, October 2013. [ bib | http ]
[89] Robert Heinrich and Barbara Paech. On the prediction of the mutual impact of business processes and enterprise information systems. In Software Engineering, 2013. [ bib | .pdf ]


Publications 2012

[1] Aleksandar Milenkoski and Samuel Kounev. Towards Benchmarking Intrusion Detection Systems for Virtualized Cloud Environments (extended abstract). In Proceedings of the 7th International Conference for Internet Technology and Secured Transactions (ICITST 2012), London, United Kingdom, December 2012, pages 562-563. IEEE, New York, USA. December 2012. [ bib | http | .pdf | Abstract ]
Many recent research works propose novel architectures of intrusion detection systems specifically designed to operate in virtualized environments. However, little attention has been given to the evaluation and benchmarking of such architectures with respect to their performance and dependability. In this paper, we present a research roadmap towards developing a framework for benchmarking intrusion detection systems for cloud environments in a scientifically rigorous and representative manner.
[2] Henning Groenda. Path coverage criteria for palladio performance models. In Proceedings of the 38th EUROMICRO Conference on Software Engineering and Advanced Applications, September 2012, SEAA '12, pages 133-137. IEEE Computer Society, Washington, DC, USA. September 2012. [ bib | DOI ]
[3] Nikolaus Huber, André van Hoorn, Anne Koziolek, Fabian Brosig, and Samuel Kounev. S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System Architectures. In Proceedings of the 9th IEEE International Conference on e-Business Engineering (ICEBE 2012), Hangzhou, China, September 9-11, 2012, pages 70-77. IEEE Computer Society, Los Alamitos, CA, USA. September 2012, Acceptance Rate (Full Paper): 19.7% (26/132). [ bib | DOI | http | .pdf | Abstract ]
Modern virtualized system environments usually host diverse applications of different parties and aim at utilizing resources efficiently while ensuring that quality-of-service requirements are continuously satisfied. In such scenarios, complex adaptations to changes in the system environment are still largely performed manually by humans. Over the past decade, autonomic self-adaptation techniques aiming to minimize human intervention have become increasingly popular. However, given that adaptation processes are usually highly system specific, it is a challenge to abstract from system details enabling the reuse of adaptation strategies. In this paper, we propose a novel modeling language (meta-model) providing means to describe system adaptation processes at the system architecture level in a generic, human-understandable and reusable way. We apply our approach to three different realistic contexts (dynamic resource allocation, software architecture optimization, and run-time adaptation planning) showing how the gap between complex manual adaptations and their autonomous execution can be closed by using a holistic model-based approach.
[4] Dennis Westermann, Jens Happe, Rouven Krebs, and Roozbeh Farahbod. Automated inference of goal-oriented performance prediction functions. In Proceedings of the 27th IEEE/ACM International Conference On Automated Software Engineering (ASE 2012), Essen, Germany, September 3-7, 2012. [ bib ]
[5] Samuel Kounev, Kai Sachs, and Piotr Rygielski. SPEC Research Group Newsletter, vol. 1 no. 1, September 2012. Published by Standard Performance Evaluation Corporation (SPEC). [ bib | .html | .pdf ]
[6] Viliam Simko, Petr Hnetynka, Tomas Bures, and Frantisek Plasil. FOAM: A Lightweight Method for Verification of Use-Cases. In Proceedings of the 38th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Cesme, Izmir, Turkey, September 5-8, 2012, pages 228-232. IEEE/ACM. September 2012. [ bib | DOI | .pdf | Abstract ]
The advantage of textual use-cases is that they can be easily understood by stakeholders and domain experts. However, since use-cases typically rely on natural language, they cannot be directly subjected to formal verification. In this paper, we present the Formal Verification of Annotated Use-Case Models (FOAM) method, which features simple user-definable annotations inserted into a use-case to make its semantics more suitable for verification. Subsequently, a model-checking tool verifies temporal invariants associated with the annotations. This way, FOAM allows for harnessing the benefits of model-checking while still keeping the use-cases understandable for non-experts.
[7] Christoph Rathfelder, Stefan Becker, Klaus Krogmann, and Ralf Reussner. Workload-aware system monitoring using performance predictions applied to a large-scale e-mail system. In Proceedings of the Joint 10th Working IEEE/IFIP Conference on Software Architecture (WICSA) & 6th European Conference on Software Architecture (ECSA), Helsinki, Finland, August 2012, pages 31-40. IEEE Computer Society, Washington, DC, USA. August 2012, Acceptance Rate (Full Paper): 19.8%. [ bib | DOI | http | .pdf ]
[8] Robert Vaupel. Routing Workloads Based on Relative Queue Lengths of Dispatchers. Patent No. 8245238, United States, August 2012. [ bib ]
[9] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Modeling Parameter and Context Dependencies in Online Architecture-Level Performance Models. In Proceedings of the 15th ACM SIGSOFT International Symposium on Component Based Software Engineering (CBSE 2012), June 26-28, 2012, Bertinoro, Italy, June 2012. Acceptance Rate (Full Paper): 28.5%. [ bib | http | .pdf | Abstract ]
Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, making it possible to evaluate different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.
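One standard building block of such an extraction is estimating per-request resource demands from monitoring data, for example via the service demand law D = U / X. The sketch below shows only this textbook step with invented numbers; it is not the paper's actual extraction pipeline:

    # Minimal sketch: estimating a resource demand from monitoring data
    # via the service demand law D = U / X (utilization over throughput).
    # A standard ingredient of model extraction, with invented numbers.

    def service_demand(utilization, throughput):
        """D = U / X, e.g. 0.6 busy CPU at 120 requests/s -> 5 ms/request."""
        return utilization / throughput

    u_cpu = 0.60          # measured CPU utilization (fraction)
    x = 120.0             # measured throughput in requests per second
    print("CPU demand per request: %.4f s" % service_demand(u_cpu, x))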
[10] Henning Groenda. Improving performance predictions by accounting for the accuracy of composed performance models. In Proceedings of the 8th international ACM SIGSOFT conference on Quality of Software Architectures (QoSA), Bertinoro, Italy, June 2012, QoSA '12, pages 111-116. ACM, New York, NY, USA. June 2012. [ bib | DOI | http ]
[11] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Modeling Dynamic Virtualized Resource Landscapes. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Bertinoro, Italy, June 25-28, 2012, pages 81-90. ACM, New York, NY, USA. June 2012, Acceptance Rate (Full Paper): 25.6%. [ bib | DOI | http | .pdf | Abstract ]
Modern data centers are subject to an increasing demand for flexibility. Increased flexibility and dynamics, however, also result in a higher system complexity. This complexity carries on to run-time resource management for Quality-of-Service (QoS) enforcement, rendering design-time approaches for QoS assurance inadequate. In this paper, we present a set of novel meta-models that can be used to describe the resource landscape, the architecture and resource layers of dynamic virtualized data center infrastructures, as well as their run-time adaptation and resource management aspects. With these meta-models we introduce new modeling concepts to improve model-based run-time QoS assurance. We evaluate our meta-models by modeling a representative virtualized service infrastructure and using these model instances for run-time resource allocation. The results demonstrate the benefits of the new meta-models and show how they can be used to improve model-based system adaptation and run-time resource management in dynamic virtualized data centers.
[12] Benjamin Klatt and Martin Küster. Respecting Component Architecture to Migrate Product Copies to a Software Product Line. In Proceedings of the 17th International Doctoral Symposium on Components and Architecture (WCOP'12), June 2012. Bertinoro, Italy. Young Investigator / Best Paper Award. [ bib | .pdf ]
[13] Rouven Krebs, Christof Momm, and Samuel Kounev. Metrics and Techniques for Quantifying Performance Isolation in Cloud Environments. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Barbora Buhnova and Antonio Vallecillo, editors, Bertinoro, Italy, June 25-28, 2012, pages 91-100. ACM Press, New York, USA. June 2012, Acceptance Rate (Full Paper): 25.6%. [ bib | http | .pdf ]
[14] Christian Prause and Zoya Durdik. Architectural Design and Documentation: Waste in Agile Development? In Proceedings of the International Conference on Software and Systems Process (ICSSP 2012) (co-located with ICSE 2012), June 2012. [ bib ]
[15] Simon Spinner, Samuel Kounev, and Philipp Meier. Stochastic Modeling and Analysis using QPME: Queueing Petri Net Modeling Environment v2.0. In Proceedings of the 33rd International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2012), Serge Haddad and Lucia Pomello, editors, Hamburg, Germany, June 27-29, 2012, volume 7347 of Lecture Notes in Computer Science (LNCS), pages 388-397. Springer-Verlag, Berlin, Heidelberg. June 2012. [ bib | http | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present our tool QPME (Queueing Petri net Modeling Environment) for modeling and analysis using queueing Petri nets. QPME provides an Eclipse-based editor for building queueing Petri net models and a powerful simulation engine for analyzing these models. The development of the tool started in 2003 and since then the tool has been distributed to more than 120 organizations worldwide.
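To give a flavor of the formalism (conceptual only, unrelated to QPME's actual simulation engine), the sketch below simulates a single "queueing place": tokens arrive as in a Petri net but must receive exponentially distributed service before they become available for firing:

    # Conceptual sketch of a single queueing place. The M/M/1-style loop
    # below is a stand-in for the richer queueing-Petri-net semantics.
    import random

    random.seed(1)

    def simulate_queueing_place(arrival_rate, service_rate, n_tokens=10000):
        """Discrete-event loop; returns mean token residence time."""
        clock = 0.0
        server_free_at = 0.0
        total_residence = 0.0
        for _ in range(n_tokens):
            clock += random.expovariate(arrival_rate)      # token arrival
            start = max(clock, server_free_at)             # wait if busy
            service = random.expovariate(service_rate)
            server_free_at = start + service
            total_residence += server_free_at - clock
        return total_residence / n_tokens

    # Analytical M/M/1 mean residence time: 1/(mu - lambda) = 1/(10-8) = 0.5
    print("simulated:", round(simulate_queueing_place(8.0, 10.0), 3))

The simulated mean should approach the analytical value 0.5, which is exactly the kind of cross-check between simulation and queueing theory that the combined formalism enables.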
[16] Martin Küster and Benjamin Klatt. Leveraging Design Decisions in Evolving Systems. In 14th Workshop Software-Reengineering (WSR 2012), May 2-4, 2012. Bad-Honnef, Germany. [ bib | .html | .pdf ]
[17] Michael Faber and Jens Happe. Systematic adoption of genetic programming for deriving software performance curves. In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012), Boston, USA, April 22-25, 2012, pages 33-44. ACM, New York, NY, USA. April 2012. [ bib | http | .pdf ]
[18] Henning Groenda. Protecting intellectual property by certified component quality descriptions. In Proceedings of the 2012 Ninth International Conference on Information Technology - New Generations, April 2012, ITNG '12, pages 287-292. IEEE Computer Society, Washington, DC, USA. April 2012. [ bib | DOI | http ]
[19] Samuel Kounev, Simon Spinner, and Philipp Meier. Introduction to Queueing Petri Nets: Modeling Formalism, Tool Support and Case Studies (tutorial paper). In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012), Boston, USA, April 22-25, 2012, pages 9-18. ACM, New York, NY, USA. April 2012. [ bib | slides | http | .pdf ]
[20] Rouven Krebs, Christof Momm, and Samuel Kounev. Architectural Concerns in Multi-Tenant SaaS Applications (short paper). In Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER 2012), Setubal, Portugal, April 18-21, 2012. SciTePress. April 2012. [ bib | .pdf ]
[21] Benjamin Klatt, Zoya Durdik, Klaus Krogmann, Heiko Koziolek, Johannes Stammel, and Roland Weiss. Identify Impacts of Evolving Third Party Components on Long-Living Software Systems. In Proceedings of the 16th Conference on Software Maintenance and Reengineering (CSMR'12), March 2012, pages 461-464. Szeged, Hungary. [ bib | DOI | .pdf | Abstract ]
Integrating 3rd party components in software systems provides promising advantages but also risks due to disconnected evolution cycles. Deciding whether to migrate to a newer version of a 3rd party component integrated into self-implemented code or to switch to a different one is challenging. Dedicated evolution support for 3rd party component scenarios is hence required. Existing approaches do not account for open source components which allow accessing and analyzing their source code and project information. The approach presented in this paper combines analyses for code dependency, code quality, and bug tracker information for a holistic view on the evolution with 3rd party components. We applied the approach in a case study on a communication middleware component for industrial devices used at ABB. We identified 7 methods potentially impacted by changes of 3rd party components despite the absence of interface changes. We further identified self-implemented code that does not need any manual investigation after the 3rd party component evolution as well as a positive trend of code and bug tracker issues.
[22] Benjamin Klatt and Klaus Krogmann. Model-Driven Product Consolidation into Software Product Lines. Softwaretechnik-Trends, 32(2):13-14, March 2012, Köllen Druck & Verlag GmbH, Bamberg, Germany. [ bib | http | .pdf ]
[23] Robert Vaupel. Routing Workloads and Method Thereof. Patent No. 4959845, Japan, March 2012. [ bib ]
[24] Jochen Zimmermann, Martin Küster, Oliver Bringmann, and Wolfgang Rosenstiel. Model-based generation of a fast and accurate virtual execution platform for software-intensive real-time embedded systems. In 17th Workshop on Synthesis and System Integration of Mixed Information Technologies (SASIMI), March 2012. Beppu, Japan. [ bib ]
[25] Kai Sachs, Samuel Kounev, and Alejandro Buchmann. Performance modeling and analysis of message-oriented event-driven systems. Journal of Software and Systems Modeling (SoSyM), pages 1-25, February 2012, Springer-Verlag. [ bib | DOI | http | .pdf ]
[26] Eya Ben Charrada, Anne Koziolek, and Martin Glinz. Identifying outdated requirements based on source code changes. In Proceedings of the 20th IEEE International Requirements Engineering Conference (RE 2012), 2012, pages 61-70. [ bib | DOI | http | Abstract ]
Keeping requirements specifications up-to-date when systems evolve is a manual and expensive task. Software engineers have to go through the whole requirements document and look for the requirements that are affected by a change. Consequently, engineers usually apply changes to the implementation directly and leave requirements unchanged. In this paper, we propose an approach for automatically detecting outdated requirements based on changes in the code. Our approach first identifies the changes in the code that are likely to affect requirements. Then it extracts a set of keywords describing the changes. These keywords are traced to the requirements specification, using an existing automated traceability tool, to identify affected requirements. Automatically identifying outdated requirements reduces the effort and time needed for the maintenance of requirements specifications significantly and thus helps preserve the knowledge contained in them. We evaluated our approach in a case study where we analyzed two consecutive source code versions and were able to detect 12 requirements-related changes out of 14 with a precision of 79%. Then we traced a set of keywords we extracted from these changes to the requirements specification. In comparison to simply tracing changed classes to requirements, we got better results in most cases.
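The core mechanics, extracting keywords from a code change and tracing them to requirement texts, can be sketched as below; the camelCase tokenizer and the term-overlap similarity are deliberately naive stand-ins for the automated traceability tool used in the paper:

    # Naive sketch of the pipeline above: keywords from changed code are
    # traced to requirement texts by simple term overlap. The real
    # approach uses an existing traceability tool; this is a stand-in.
    import re

    def keywords(code_change):
        # Split identifiers on camelCase and punctuation, lowercase all.
        tokens = re.findall(r"[A-Z]?[a-z]+", code_change)
        return {t.lower() for t in tokens if len(t) > 2}

    def affected_requirements(change, requirements, threshold=0.2):
        kw = keywords(change)
        hits = []
        for rid, text in requirements.items():
            words = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
            overlap = len(kw & words) / max(len(kw), 1)
            if overlap >= threshold:
                hits.append((rid, round(overlap, 2)))
        return sorted(hits, key=lambda h: -h[1])

    reqs = {
        "R1": "The system shall hash and encrypt the user password before storing it.",
        "R2": "The system shall export monthly reports as PDF.",
    }
    change = "def hashUserPassword(password): return bcrypt(password)"
    print(affected_requirements(change, reqs))    # flags R1, not R2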
[27] Stefanie Betz, Erik Burger, Alexander Eckert, Andreas Oberweis, Ralf Reussner, and Ralf Trunko. An approach for integrated lifecycle management for business processes and business software. In Aligning Enterprise, System, and Software Architectures, Ivan Mistrík, Antony Tang, Rami Bahsoon, and Judith A. Stafford, editors. IGI Global, 2012. [ bib ]
[28] Franz Brosch. Integrated Software Architecture-Based Reliability Prediction for IT Systems, volume 9 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, Karlsruhe, 2012. [ bib ]
[29] Franz Brosch. Integrated Software Architecture-Based Reliability Prediction for IT Systems. PhD thesis, Institut für Programmstrukturen und Datenorganisation (IPD), Karlsruher Institut für Technologie, Karlsruhe, Germany, 2012. [ bib | http | Abstract ]
With the increasing importance of reliability in business and industrial IT systems, new techniques for architecture-based software reliability prediction are becoming an integral part of the development process. This dissertation thesis introduces a novel reliability modelling and prediction technique that considers the software architecture with its component structure, control and data flow, recovery mechanisms, its deployment to distributed hardware resources and the system's usage profile.
[30] Franz Brosch, Heiko Koziolek, Barbora Buhnova, and Ralf Reussner. Architecture-based reliability prediction with the Palladio Component Model. IEEE Transactions on Software Engineering, 38(6):1319-1339, November 2012, IEEE Computer Society. [ bib | DOI | Abstract ]
With the increasing importance of reliability in business and industrial software systems, new techniques of architecture-based reliability engineering are becoming an integral part of the development process. These techniques can assist system architects in evaluating the reliability impact of their design decisions. Architecture-based reliability engineering is only effective if the involved reliability models reflect the interaction and usage of software components and their deployment to potentially unreliable hardware. However, existing approaches either neglect individual impact factors on reliability or hard-code them into formal models, which limits their applicability in component-based development processes. This paper introduces a reliability modelling and prediction technique that considers the relevant architectural factors of software systems by explicitly modelling the system usage profile and execution environment and automatically deriving component usage profiles. The technique offers a UML-like modelling notation, whose models are automatically transformed into a formal analytical model. Our work builds upon the Palladio Component Model, employing novel techniques of information propagation and reliability assessment. We validate our technique with sensitivity analyses and simulation in two case studies. The case studies demonstrate effective support of usage profile analysis and architectural configuration ranking, together with the employment of reliability-improving architecture tactics.
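The basic arithmetic behind usage-profile-aware reliability prediction can be shown with a small worked example: expected calls per component come from the usage profile, and per-call success probabilities are multiplied accordingly. This is heavily simplified compared to the Palladio-based technique, which also models control and data flow, recovery mechanisms, and hardware; all names and numbers are invented:

    # Heavily simplified sketch of usage-profile-aware reliability
    # prediction. Component failure probabilities and the usage profile
    # are invented; only the basic arithmetic is shown.

    failure_prob = {"Auth": 1e-4, "Search": 5e-4, "Checkout": 2e-4}

    # Usage profile: expected calls per component in one session.
    usage = {"Auth": 1, "Search": 8, "Checkout": 0.3}

    session_success = 1.0
    for component, calls in usage.items():
        per_call_success = 1.0 - failure_prob[component]
        session_success *= per_call_success ** calls
    print("P(session succeeds) = %.6f" % session_success)
    # Changing the usage profile (e.g. more Search calls) changes the
    # prediction without touching the component models.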
[31] Zoya Durdik, Benjamin Klatt, Heiko Koziolek, Klaus Krogmann, Johannes Stammel, and Roland Weiss. Sustainability guidelines for long-living software systems. In Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM), 2012. Trento, Italy. [ bib | http ]
[32] Zoya Durdik, Klaus Krogmann, and Felix Schad. Towards a generic approach for meta-model- and domain- independent model variability. Karlsruhe Reports in Informatics 2012,5, ISSN: 2190-4782, Karlsruhe, Germany, 2012. [ bib | http | Abstract ]
Variability originates from product line engineering and is an important part of today's software development. However, existing approaches mostly concentrate only on the variability in software product lines, and are usually not universal enough to consider variability in other development activities (e.g., modelling and hardware). Additionally, the complexity of variability in software is generally hard to capture and to handle. We propose a generic model-based solution which can generally handle variability on Ecore-based meta-models. The approach includes a formal description for variability, a way to express the configuration of variants, a compact DSL to describe the semantics of model variability and model-to-model transformations, and an engine which transforms input models into models with injected variability. This work provides a complete and domain-independent solution for variability handling. The applicability of the proposed approach will be validated in two case studies, considering the two independent domains of mobile platforms and architecture knowledge reuse.
[33] Zoya Durdik and Ralf Reussner. Position Paper: Approach for Architectural Design and Modelling with Documented Design Decisions (ADMD3). In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), Bertinoro, Italy, 2012. [ bib ]
[34] Daniel Funke, Fabian Brosig, and Michael Faber. Towards Truthful Resource Reservation in Cloud Computing. In Proceedings of the 6th International ICST Conference on Performance Evaluation Methodologies and Tools (ValueTools 2012), Cargèse, France, 2012. [ bib | .pdf | Abstract ]
Prudent capacity planning to meet their clients' future computational needs is one of the major issues cloud computing providers face today. By offering resource reservations in advance, providers gain insight into the projected demand of their customers and can act accordingly. However, customers need to be given an incentive, e.g. discounts granted, to commit early to a provider and to honestly, i.e., truthfully, reserve their predicted future resource requirements. Customers may reserve capacity deviating from their truly predicted demand in order to exploit the mechanism for their own benefit, thereby causing futile costs for the provider. In this paper we prove, using a game-theoretic approach, that truthful reservation is the best, i.e., dominant, strategy for customers if they are capable of making precise forecasts of their demands, and that deviations from truth-telling can be profitable for customers if their demand forecasts are uncertain.
[35] Katja Gilly, Fabian Brosig, Ramon Nou, Samuel Kounev, and Carlos Juiz. Online prediction: Four case studies. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf | Abstract ]
Current computing systems are becoming increasingly complex in nature and exhibit large variations in workloads. These changing environments create challenges to the design of systems that can adapt themselves while maintaining desired Quality of Service (QoS), security, dependability, availability and other non-functional requirements. The next generation of resilient systems will be highly distributed, component-based and service-oriented. They will need to operate in unattended mode and possibly in hostile environments, will be composed of a large number of interchangeable components discoverable at run-time, and will have to run on a multitude of unknown and heterogeneous hardware and network platforms. These computer systems will adapt themselves to cope with changes in the operating conditions and to meet the service-level agreements with a minimum of resources. Changes in operating conditions include hardware and software failures, load variation and variations in user interaction with the system, including security attacks and overwhelming situations. Such self-adaptation of these resilient systems can be achieved by first predicting such situations online, based on observation of the current environment. This chapter focuses on online prediction methods, techniques and tools for resilient systems. Thus, we survey online adaptive QoS models in several environments, such as grids, service-oriented architectures, and ambient intelligence, using different approaches based on queueing networks, model checking, and ontology engineering, among others.
[36] Thomas Goldschmidt, Steffen Becker, and Erik Burger. Towards a tool-oriented taxonomy of view-based modelling. In Proceedings of the Modellierung 2012, Elmar J. Sinz and Andy Schürr, editors, Bamberg, 2012, volume P-201 of GI-Edition - Lecture Notes in Informatics (LNI), pages 59-74. Gesellschaft für Informatik e.V. (GI), Bonn, Germany. 2012. [ bib | .pdf ]
[37] Thijmen de Gooijer, Anton Jansen, Heiko Koziolek, and Anne Koziolek. An industrial case study of performance and cost design space exploration. In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering, Lizy Kurian John and Diwakar Krishnamurthy, editors, Boston, Massachusetts, USA, 2012, ICPE '12, pages 205-216. ACM, New York, NY, USA. 2012, ICPE Best Industry-Related Paper Award. [ bib | DOI | http | .pdf ]
[38] Daniel Dominguez Gouvêa, Cyro Muniz, Gilson Pinto, Alberto Avritzer, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Luca Berardinelli, Julius C. B. Leite, Daniel Mossé, Yuanfang Cai, Michael Dalton, Lucia Happe, and Anne Koziolek. Experience with model-based performance, reliability and adaptability assessment of a complex industrial architecture. Journal of Software and Systems Modeling, pages 1-23, 2012, Springer-Verlag. Special Issue on Performance Modeling. [ bib | DOI | .pdf | Abstract ]
In this paper, we report on our experience with the application of validated models to assess performance, reliability, and adaptability of a complex mission critical system that is being developed to dynamically monitor and control the position of an oil-drilling platform. We present real-time modeling results that show that all tasks are schedulable. We performed stochastic analysis of the distribution of task execution time as a function of the number of system interfaces. We report on the variability of task execution times for the expected system configurations. In addition, we have executed a system library for an important task inside the performance model simulator. We report on the measured algorithm convergence as a function of the number of vessel thrusters. We have also studied the system architecture adaptability by comparing the documented system architecture and the implemented source code. We report on the adaptability findings and the recommendations we were able to provide to the system's architect. Finally, we have developed models of hardware and software reliability. We report on hardware and software reliability results based on the evaluation of the system architecture.
[39] Christoph Heger. Automatische Problemdiagnose in Performance-Unit-Tests. Master's thesis, Karlsruhe Institute of Technology (KIT), Germany, 2012. [ bib | .pdf ]
[40] Nikolas Roman Herbst. Workload Classification and Forecasting. Diploma Thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2012. Forschungszentrum Informatik (FZI) Prize "Best Diploma Thesis". [ bib | .pdf | Abstract ]
Virtualization technologies enable dynamic allocation of computing resources to execution environments at run-time. To exploit the optimisation potential that comes with these degrees of freedom, forecasts of the arriving work's intensity are valuable information for continuously ensuring a defined quality of service (QoS) level and, at the same time, improving the efficiency of resource utilisation. Time series analysis offers a broad spectrum of methods for calculating forecasts based on periodically monitored values. Related work in the field of proactive resource provisioning mostly concentrates on single methods of time series analysis and their individual optimisation potential; this way, usable forecast results are achieved only in certain situations. In this thesis, established methods of time series analysis are surveyed and grouped concerning their strengths and weaknesses. A dynamic approach is presented that selects the method suitable for a given situation based on a decision tree and direct feedback cycles that capture the forecast accuracy. The user needs to provide only his general forecast objectives. An implementation of the introduced theoretical approach is presented that continuously provides forecasts of the arriving work's intensity in configurable intervals and with controllable computational overhead during run-time. Based on real-world intensity traces, a number of different experiments and a case study are conducted. The results show that, by use of the implementation, the relative error of the forecast points in relation to the arriving observations is reduced by 63% on average compared to the results of a statically selected, sophisticated method. In the case study, between 52% and 70% of the violations of a given service level agreement are prevented by applying proactive resource provisioning based on the forecast results of the introduced implementation.
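The selection mechanism can be pictured with a small Python sketch: two toy forecasters compete, and direct feedback on their recent relative errors decides which one produces the next forecast. The forecasters, trace, and window sizes are invented; the thesis combines a decision tree with such feedback cycles over a much richer set of time series methods.

    # Toy feedback-based forecast method selection: pick, at each step,
    # the forecaster with the lowest recent relative error.

    def naive(history):
        return history[-1]

    def moving_average(history, window=4):
        w = history[-window:]
        return sum(w) / len(w)

    def select_and_forecast(history, errors):
        # errors maps forecaster name -> list of past relative errors
        def recent_error(name):
            e = errors[name][-5:]
            return sum(e) / len(e) if e else 0.0
        best = min(errors, key=recent_error)
        forecaster = {"naive": naive, "moving_average": moving_average}[best]
        return best, forecaster(history)

    # Simulated arrival-intensity trace with feedback after each observation.
    trace = [100, 104, 99, 110, 130, 160, 150, 170]
    errors = {"naive": [], "moving_average": []}
    for t in range(4, len(trace)):
        history, actual = trace[:t], trace[t]
        name, forecast = select_and_forecast(history, errors)
        print(f"t={t}: {name:14s} forecast={forecast:7.1f} actual={actual}")
        # Feedback cycle: record each method's relative error on this step.
        for n, f in (("naive", naive(history)),
                     ("moving_average", moving_average(history))):
            errors[n].append(abs(f - actual) / actual)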
[41] Nikolaus Huber, Fabian Brosig, N. Dingle, K. Joshi, and Samuel Kounev. Providing Dependability and Performance in the Cloud: Case Studies. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf ]
[42] Nikolaus Huber, Marcel von Quast, Fabian Brosig, Michael Hauck, and Samuel Kounev. A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead. In Cloud Computing and Services Science, Ivan Ivanov, Marten van Sinderen, and Boris Shishkov, editors, Service Science: Research and Innovations in the Service Economy, pages 353-370. Springer, New York, 2012. [ bib | DOI | http | .pdf ]
[43] Martin Küster and Mircea Trifu. A case study on co-evolution of software artifacts using integrated views. In Proceedings of the WICSA/ECSA 2012 Companion Volume, Helsinki, Finland, 2012, WICSA/ECSA '12, pages 124-131. ACM, New York, NY, USA. 2012. [ bib | DOI | http ]
[44] Martin Küster, Alexander Viehl, Andreas Burger, Oliver Bringmann, and Wolfgang Rosenstiel. Meta-Modelling the SystemC Standard for Component-based Embedded System Design. In 1st International Workshop on Metamodelling and Code Generation for Embedded Systems, 2012. [ bib ]
[45] Benjamin Klatt and Steffen Becker. Architekturen 2012: Industrie und Wissenschaft treffen sich. OBJEKTspektrum, 6(6), 2012, Sigs Datacom. [ bib | http ]
[46] Jacques Klein, Max E. Kramer, Jim R. H. Steel, Brice Morin, Jörg Kienzle, Olivier Barais, and Jean-Marc Jézéquel. On the formalisation of geko: a generic aspect models weaver. Technical report, University of Luxembourg, SnT, 2012. [ bib | http | .pdf | Abstract ]
This technical report presents the formalisation of the composition operator of GeKo, a Generic Aspect Models Weaver.
[47] Samuel Kounev, Nikolaus Huber, Simon Spinner, and Fabian Brosig. Model-based techniques for performance engineering of business information systems. In Business Modeling and Software Design, Boris Shishkov, editor, volume 0109 of Lecture Notes in Business Information Processing (LNBIP), pages 19-37. Springer-Verlag, Berlin, Heidelberg, 2012. [ bib | http | .pdf | Abstract ]
With the increasing adoption of virtualization and the transition towards Cloud Computing platforms, modern business information systems are becoming increasingly complex and dynamic. This raises the challenge of guaranteeing system performance and scalability while at the same time ensuring efficient resource usage. In this paper, we present a historical perspective on the evolution of model-based performance engineering techniques for business information systems, focusing on the major developments over the past several decades that have shaped the field. We survey the state-of-the-art on performance modeling and management approaches, discussing the ongoing efforts in the community to increasingly bridge the gap between high-level business services and low-level performance models. Finally, we wrap up with an outlook on the emergence of self-aware systems engineering as a new research area at the intersection of several computer science disciplines.
[48] Samuel Kounev, Philipp Reinecke, Fabian Brosig, Jeremy T. Bradley, Kaustubh Joshi, Vlastimil Babka, Anton Stefanek, and Stephen Gilmore. Providing dependability and resilience in the cloud: Challenges and opportunities. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf | Abstract ]
Cloud Computing is a novel paradigm for providing data center resources as on-demand services in a pay-as-you-go manner. It promises significant cost savings by making it possible to consolidate workloads and share infrastructure resources among multiple applications, resulting in higher cost- and energy-efficiency. However, these benefits come at the cost of increased system complexity and dynamicity, posing new challenges in providing service dependability and resilience for applications running in a Cloud environment. At the same time, the virtualization of physical resources, inherent in Cloud Computing, provides new opportunities for novel dependability and quality-of-service management techniques that can potentially improve system resilience. In this chapter, we first discuss in detail the challenges and opportunities introduced by the Cloud Computing paradigm. We then provide a review of the state-of-the-art on dependability and resilience management in Cloud environments, and conclude with an overview of emerging research directions.
[49] Anne Koziolek. Research preview: Prioritizing quality requirements based on software architecture evaluation feedback. In Requirements Engineering: Foundation for Software Quality, Björn Regnell and Daniela Damian, editors, 2012, volume 7195 of Lecture Notes in Computer Science, pages 52-58. Springer Verlag Berlin Heidelberg. 2012. [ bib | DOI | http | .pdf | Abstract ]
[Context and motivation] Quality requirements are a main driver for architectural decisions of software systems. Although the need for iterative handling of requirements and architecture has been identified, current architecture design processes do not provide systematic, quantitative feedback for the prioritization and cost/benefit considerations for quality requirements. [Question/problem] Thus, in practice stakeholders still often state and prioritize quality requirements before knowing the software architecture, i.e. without knowledge about the quality dependencies, conflicts, incurred costs, and technical feasibility. However, as quality properties usually are cross-cutting architecture concerns, estimating the effects of design decisions is difficult. Thus, stakeholders cannot reliably know the appropriate required level of quality. [Principal ideas/results] In this research proposal, we suggest an approach to generate feedback from quantitative architecture evaluation to requirements engineering, in particular to requirements prioritization. We propose to use automated design space exploration techniques to generate information about available trade-offs. Final quality requirement prioritization is deferred until first feedback from architecture evaluation is available. [Contribution] In this paper, we present the process model of our approach enabling feedback to requirement prioritization and describe application scenarios and an example.
[50] Anne Koziolek. Architecture-driven quality requirements prioritization. In First International Workshop on the Twin Peaks of Requirements and Architecture (TwinPeaks 2012), 2012, pages 15-19. IEEE Computer Society. 2012. [ bib | DOI | http | .pdf | Abstract ]
Quality requirements are main drivers for architectural decisions of software systems. However, in practice they are often dismissed during development, because of initially unknown dependencies and consequences that complicate implementation. To decide for meaningful, feasible quality requirements and trade them off with functional requirements, tighter integration of software architecture evaluation and requirements prioritization is necessary. In this position paper, we propose a tool-supported method for architecture-driven feedback into requirements prioritization. Our method uses automated design space exploration based on quantitative quality evaluation of software architecture models. It helps requirements analysts and software architects to study the quality trade-offs of a software architecture, and use this information for requirements prioritization.
[51] Anne Koziolek, Lucia Happe, Alberto Avritzer, and Sindhu Suresh. A common analysis framework for smart distribution networks applied to survivability analysis of distribution automation. In Proceedings of the First International Workshop on Software Engineering Challenges for the Smart Grid (SE-SmartGrids 2012), 2012, pages 23-29. IEEE. 2012. [ bib | DOI | http | .pdf | Abstract ]
Smart distribution networks shall improve the efficiency and reliability of power distribution by intelligently managing the available power and requested load. Such intelligent power networks pose challenges for information and communication technology (ICT). Their design requires a holistic assessment of traditional power system topology and ICT architecture. Existing analysis approaches focus on analyzing the power network's components separately. For example, communication simulation provides failure data for communication links, while power analysis makes predictions about the stability of the traditional power grid. However, these insights are not combined to provide a basis for design decisions for future smart distribution networks. In this paper, we describe a common model-driven analysis framework for smart distribution networks based on the Common Information Model (CIM). This framework provides scalable analysis of large smart distribution networks by supporting analyses on different levels of abstraction. Furthermore, we apply our framework to holistic survivability analysis. We map the CIM onto a survivability model to enable assessing design options with respect to the achieved survivability improvement. We demonstrate our approach by applying the mapping transformation in a case study based on a real distribution circuit. We conclude by evaluating the survivability impact of three investment options.
[52] Heiko Koziolek, Bastian Schlich, Steffen Becker, and Michael Hauck. Performance and reliability prediction for evolving service-oriented software systems. Empirical Software Engineering, pages 1-45, 2012, Springer US. [ bib | DOI | http ]
[53] Max E. Kramer, Zoya Durdik, Michael Hauck, Jörg Henss, Martin Küster, Philipp Merkle, and Andreas Rentschler. Extending the Palladio Component Model using Profiles and Stereotypes. In Palladio Days 2012 Proceedings (appeared as technical report), Steffen Becker, Jens Happe, Anne Koziolek, and Ralf Reussner, editors, 2012, Karlsruhe Reports in Informatics 2012,21, pages 7-15. KIT, Faculty of Informatics, Karlsruhe. 2012. [ bib | http | Abstract ]
Extending metamodels to account for new concerns has a major influence on existing instances, transformations and tools. To minimize the impact on existing artefacts, various techniques for extending a metamodel are available, for example, decorators and annotations. The Palladio Component Model (PCM) is a metamodel for predicting quality of component-based software architectures. It is continuously extended in order to be applicable in originally unexpected domains and settings. Nevertheless, a common extension approach for the PCM and for the tools built on top of it is still missing. In this paper, we propose a lightweight extension approach for the PCM based on profiles and stereotypes to close this gap. Our approach aims to reduce the development effort for new PCM extensions by handling both the definition and use of extensions in a generic way. Due to a strict separation of the PCM, its extension domains, and the connections in between, the approach also increases the interoperability of PCM extensions.
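The general pattern can be pictured as follows: stereotype applications live outside the base model, so the metamodel and its existing instances remain untouched. This Python sketch is purely illustrative and is not the PCM profile implementation; all names are invented.

    # Sketch of profile/stereotype-style extension: annotations live in a
    # separate registry keyed by model element, so the base metamodel and
    # existing instances stay untouched. Names are invented.

    from collections import defaultdict

    stereotypes = defaultdict(dict)   # element id -> {stereotype: tagged values}

    def apply_stereotype(element_id, name, **tagged_values):
        stereotypes[element_id][name] = tagged_values

    # Base model elements (stand-ins for PCM components) remain unchanged:
    component = "BookingService"
    apply_stereotype(component, "SecureChannel", encryption="AES-128")

    # Tools that know the profile can query it; all others simply ignore it.
    print(stereotypes[component])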
[54] Max E. Kramer, Jacques Klein, and Jim R.H. Steel. Building specifications as a domain-specific aspect language. In Proceedings of the seventh workshop on Domain-Specific Aspect Languages, Potsdam, Germany, 2012, DSAL '12, pages 29-32. ACM, New York, NY, USA. 2012. [ bib | DOI | http | .pdf | Abstract ]
In the construction industry an increasing number of buildings is designed using semantically rich three-dimensional models. In parallel, additional information called building specifications is specified in a text file using natural language. As not all details are present in the model, these specifications have to be interpreted whenever costs are estimated or other analyses are performed. In this paper, we argue that building specifications are cross-cutting concerns. We also argue that domain experts shall be given the possibility to formulate building specifications using a domain-specific aspect language so that the corresponding details can automatically be integrated into the model. Moreover, these domain experts shall define the semantics of this language iteratively in order to provide flexible support for domain-specific abstractions absent in the building metamodel. Such a model-enriching specification could improve various tasks that take details into account that were so far only covered by the specification text. It would also allow for earlier or even concurrent development of the building specification along with the model.
[55] Max E. Kramer. Generic and extensible model weaving and its application to building models. Master's thesis, Karlsruhe Institute of Technology (KIT), Germany, 2012. [ bib | .pdf ]
[56] Klaus Krogmann. Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis, volume 4 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, 2012. [ bib | DOI | http | Abstract ]
Model-based performance prediction systematically deals with the evaluation of software performance to avoid for example bottlenecks, estimate execution environment sizing, or identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. This book describes a new integrated reverse engineering approach for the reconstruction of parameterised software performance models (software component architecture and behaviour).
[57] Daniel S. Menasché, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Alberto Avritzer, Sindhu Suresh, Kishor Trivedi, Raymond A. Marie, Lucia Happe, and Anne Koziolek. Survivability analysis of power distribution in smart grids with active and reactive power modeling. In SIGMETRICS Performance Evaluation Review, Martin Arlitt, Niklas Carlsson, and Nidhi Hegde, editors, 2012, volume 40, pages 53-57. ACM, New York, NY, USA. 2012, Special issue on the 2012 GreenMetrics workshop. [ bib | DOI | http | .pdf ]
[58] Norbert Seyff and Anne Koziolek, editors. Modelling and Quality in Requirements Engineering: Essays Dedicated to Martin Glinz on the Occasion of His 60th Birthday. Monsenstein and Vannerdat, Münster, Germany, 2012. [ bib | http | Abstract ]
“Modeling and Quality in Requirements Engineering” is the Festschrift dedicated to Martin Glinz on the occasion of his 60th birthday. Colleagues and friends have sent contributions to honor his achievements in the field of Software and Requirements Engineering. The contributions address specific topics in Martin's main research areas of modeling and quality in requirements engineering. Examples include risk-driven requirements engineering, non-functional requirements and lightweight requirements modeling. Furthermore, they cover related topics such as quality of business processes, SOA, process modeling and testing. Reminiscences and congratulations from fellow researchers and friends conclude the Festschrift.
[59] Wolfgang Theilmann, Sergio Garcia Gomez, John Kennedy, Davide Lorenzoli, Christoph Rathfelder, Thomas Roeblitz, and Gabriele Zacco. A Framework for Multi-level SLA Management. In Handbook of Research on Service-Oriented Systems and Non-Functional Properties: Future Directions, Stephan Reiff-Marganiec and Marcel Tilly, editors, pages 470-490. IGI Global, Hershey, PA, USA, 2012. [ bib | http ]
[60] Marco Vieira, Henrique Madeira, Kai Sachs, and Samuel Kounev. Resilience Benchmarking. In Resilience Assessment and Evaluation of Computing Systems, K. Wolter, A. Avritzer, M. Vieira, and A. van Moorsel, editors, XVIII. Springer-Verlag, Berlin, Heidelberg, 2012. ISBN: 978-3-642-29031-2. [ bib | http | .pdf ]
[61] Alexander Wert. Uncovering Performance Antipatterns by Systematic Experiments. Master's thesis, Karlsruhe Institute of Technology (KIT), Germany, 2012. [ bib | .pdf ]
[62] Alexander Wert, Jens Happe, and Dennis Westermann. Integrating software performance curves with the Palladio Component Model. In Proceedings of the third joint WOSP/SIPEW international conference on Performance Engineering, 2012, pages 283-286. ACM. [ bib | http ]
[63] Dennis Westermann. A generic methodology to derive domain-specific performance feedback for developers. In Proceedings of the 34th International Conference on Software Engineering (ICSE 2012), Doctoral Symposium, Zürich, Switzerland, 2012. ACM, New York, NY, USA. 2012. [ bib | .pdf ]
[64] Misha Strittmatter and Lucia Happe. Compositional performance abstractions of software connectors. In Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE), Boston, Massachusetts, USA, 2012, pages 275-278. ACM. 2012. [ bib | DOI | slides ]
[65] Andranik Khachatryan, Emmanuel Müller, Christian Stier, and Klemens Böhm. Sensitivity of Self-tuning Histograms: Query Order Affecting Accuracy and Robustness. In Proceedings of the 24th International Conference on Scientific and Statistical Database Management, Chania, Crete, Greece, 2012, SSDBM, pages 334-342. Springer-Verlag, Berlin, Heidelberg. 2012. [ bib | DOI | http ]
[66] Barbara Paech, Robert Heinrich, Gabriele Zorn-Pauli, Andreas Jung, and Siamak Tadjiky. Answering a request for proposal - challenges and proposed solutions. In Requirements Engineering: Foundation for Software Quality, pages 16-29. Springer, 2012. [ bib ]
[67] Robert Heinrich, Jörg Henss, and Barbara Paech. Extending Palladio by business process simulation concepts. In Symposium on Software Performance, 2012, pages 19-27. [ bib | .pdf ]
[68] Robert Heinrich, Barbara Paech, Antje Brandner, Ulrike Kutscha, and Björn Bergh. Developing a process quality improvement questionnaire - a case study on writing discharge letters. In Business Process Management Workshops, Florian Daniel, Kamel Barkaoui, and Schahram Dustdar, editors, volume 100 of Lecture Notes in Business Information Processing, pages 261-272. Springer Berlin Heidelberg, 2012. [ bib | DOI | http ]


Publications 2011

[1] Benjamin Klatt, Franz Brosch, Zoya Durdik, and Christoph Rathfelder. Quality Prediction in Service Composition Frameworks. In 5th Workshop on Non-Functional Properties and SLA Management in Service-Oriented Computing (NFPSLAM-SOC 2011), Paphos, Cyprus, December 5-8, 2011. [ bib | .pdf | Abstract ]
With the introduction of services, software systems have become more flexible as new services can easily be composed from existing ones. Service composition frameworks offer corresponding functionality and hide the complexity of the underlying technologies from their users. However, possibilities for anticipating quality properties of composed services before their actual operation are limited so far. While existing approaches for model-based software quality prediction can be used by service composers for determining realizable Quality of Service (QoS) levels, integration of such techniques into composition frameworks is still missing. As a result, high effort and expert knowledge are required to build the system models required for prediction. In this paper, we present a novel service composition process that includes QoS prediction for composed services as an integral part. Furthermore, we describe how composition frameworks can be extended to support this process. With our approach, systematic consideration of service quality during the composition process is naturally achieved, without the need for detailed knowledge about the underlying prediction models. To evaluate our work and validate its applicability in different domains, we have integrated QoS prediction support according to our process in two composition frameworks - a large-scale SLA management framework and a service mashup platform.
[2] Christoph Rathfelder, Benjamin Klatt, Franz Brosch, and Samuel Kounev. Performance Modeling for Quality of Service Prediction in Service-Oriented Systems. IGI Global, Hershey, PA, USA, December 2011. [ bib | DOI | http | Abstract ]
With the introduction of services, systems become more flexible as new services can easily be composed out of existing services. Services are increasingly used in mission-critical systems and applications and therefore considering Quality of Service (QoS) properties is an essential part of the service selection. Quality prediction techniques support the service provider in determining possible QoS levels that can be guaranteed to a customer or in deriving the operation costs induced by a certain QoS level. In this chapter, we present an overview on our work on modeling service-oriented systems for performance prediction using the Palladio Component Model. The prediction builds upon a model of a service-based system, and evaluates this model in order to determine the expected service quality. The presented techniques allow for early quality prediction, without the need for the system being already deployed and operating. We present the integration of our prediction approach into an SLA management framework. The emerging trend to combine event-based communication and Service-Oriented Architecture (SOA) into Event-based SOA (ESOA) induces new challenges to our approach, which are topic of a special subsection.
[3] Fabian Brosig, Nikolaus Huber, and Samuel Kounev. Automated Extraction of Architecture-Level Performance Models of Distributed Component-Based Systems. In 26th IEEE/ACM International Conference On Automated Software Engineering (ASE 2011), November 2011. Oread, Lawrence, Kansas. Acceptance Rate (Full Paper): 14.7% (37/252). [ bib | .pdf | Abstract ]
Modern service-oriented enterprise systems have increasingly complex and dynamic loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment and adapt the system configuration accordingly. Architecture-level performance models provide a powerful tool for performance prediction, however, current approaches to modeling the execution context of software components are not suitable for use at run-time. In this paper, we analyze the typical online performance prediction scenarios and propose a novel performance meta-model for expressing and resolving parameter and context dependencies, specifically designed for use in online scenarios. We motivate and validate our approach in the context of a realistic and representative online performance prediction scenario based on the SPECjEnterprise2010 standard benchmark.
[4] Christoph Rathfelder, Samuel Kounev, and David Evans. Capacity Planning for Event-based Systems using Automated Performance Predictions. In 26th IEEE/ACM International Conference On Automated Software Engineering (ASE 2011), Oread, Lawrence, Kansas, November 6-12, 2011, pages 352-361. IEEE. November 2011, Acceptance Rate (Full Paper): 14.7% (37/252). [ bib | .pdf | Abstract ]
Event-based communication is used in different domains including telecommunications, transportation, and business information systems to build scalable distributed systems. The loose coupling of components in such systems makes it easy to vary the deployment. At the same time, the complexity to estimate the behavior and performance of the whole system is increased, which complicates capacity planning. In this paper, we present an automated performance prediction method supporting capacity planning for event-based systems. The performance prediction is based on an extended version of the Palladio Component Model - a performance meta-model for component-based systems. We apply this method on a real-world case study of a traffic monitoring system. In addition to the application of our performance prediction techniques for capacity planning, we evaluate the prediction results against measurements in the context of the case study. The results demonstrate the practicality and effectiveness of the proposed approach.
[5] Dennis Westermann, Rouven Krebs, and Jens Happe. Efficient Experiment Selection in Automated Software Performance Evaluations. In Computer Performance Engineering - 8th European Performance Engineering Workshop (EPEW 2011), Borrowdale, UK, October 12-13, 2011, pages 325-339. Springer. October 2011. [ bib | .pdf ]
[6] Viliam Simko, David Hauzar, Petr Hnetynka, and Frantisek Plasil. Verifying temporal properties of use-cases in natural language. In Postproceedings of the 8th International Symposium on Formal Aspects of Component Software (FACS'11), Oslo, Norway, September 14-16, 2011, LNCS. Springer. September 2011. [ bib | DOI | .pdf | Abstract ]
This paper presents a semi-automated method that helps iteratively write use-cases in natural language and verify consistency of behavior encoded within them. In particular, this is beneficial when the use-cases are created simultaneously by multiple developers. The proposed method allows verifying the consistency of textual use-case specification by employing annotations in use-case steps that are transformed into temporal logic formulae and verified within a formal behavior model. A supporting tool for plain English use-case analysis is currently being enhanced by integrating the verification algorithm proposed in the paper.
[7] Samuel Kounev. Performance Engineering of Business Information Systems - Filling the Gap between High-level Business Services and Low-level Performance Models. In International Symposium on Business Modeling and Software Design (BMSD 2011), Sofia, Bulgaria, July 27-28, 2011, July 2011. [ bib | .pdf ]
[8] Simon Spinner. Evaluating Approaches to Resource Demand Estimation. Master's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, July 2011. Best Graduate Award from the Faculty of Informatics. [ bib | .pdf ]
[9] Markus von Detten and Steffen Becker. Combining clustering and pattern detection for the reengineering of component-based software systems. In 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24 2011. [ bib ]
[10] Jens Happe, Heiko Koziolek, and Ralf Reussner. Facilitating performance predictions using software components. Software, IEEE, 28(3):27-33, June 2011. [ bib | DOI ]
[11] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Ginpex: Deriving Performance-relevant Infrastructure Properties Through Goal-oriented Experiments. In Proceedings of the 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24, 2011, pages 53-62. ACM, New York, NY, USA. June 2011. [ bib | DOI | http | .pdf ]
[12] Benjamin Klatt, Christoph Rathfelder, and Samuel Kounev. Integration of event-based communication in the Palladio software quality prediction framework. In Proceedings of the joint ACM SIGSOFT conference - QoSA and ACM SIGSOFT symposium - ISARCS on Quality of software architectures - QoSA and architecting critical systems - ISARCS (QoSA-ISARCS 2011), Boulder, Colorado, USA, June 20-24, 2011, pages 43-52. SIGSOFT, ACM, New York, NY, USA. June 2011. [ bib | DOI | http | .pdf | Abstract ]
Today, software engineering is challenged to handle more and more large-scale distributed systems with guaranteed quality-of-service. Component-based architectures have been established to build such systems in a more structured and manageable way. Modern architectures often utilize event-based communication which enables loosely-coupled interactions between components and leads to improved system scalability. However, the loose coupling of components makes it challenging to model such architectures in order to predict their quality properties, e.g., performance and reliability, at system design time. In this paper, we present an extension of the Palladio Component Model (PCM) and the Palladio software quality prediction framework, enabling the modeling of event-based communication in component-based architectures. The contributions include: i) a meta-model extension supporting events as first class entities, ii) a model-to-model transformation from the extended to the original PCM, iii) an integration of the transformation into the Palladio tool chain, making it possible to use existing model solution techniques, and iv) a detailed evaluation of the reduction of the modeling effort enabled by the transformation in the context of a real-world case study.
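The pattern behind contribution ii) can be pictured with a hypothetical sketch: each event connector is rewritten into ordinary call connectors around an explicit channel element, so that unmodified solvers can analyse the result. Element names are invented; the actual transformation operates on PCM models and is considerably more involved.

    # Hypothetical sketch of the transformation pattern: replace each event
    # connector between two components by an explicit chain that makes the
    # middleware behaviour analysable with the unmodified target metamodel.

    def transform_event_connectors(assembly):
        """assembly: list of (kind, source, target) connectors."""
        result = []
        for kind, source, target in assembly:
            if kind == "event":
                # Insert an intermediate element modelling the event channel,
                # so existing solvers see only ordinary call connectors.
                channel = f"EventChannel[{source}->{target}]"
                result.append(("call", source, channel))
                result.append(("call", channel, target))
            else:
                result.append((kind, source, target))
        return result

    model = [("call", "Sensor", "Filter"), ("event", "Filter", "Monitor")]
    for connector in transform_event_connectors(model):
        print(connector)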
[13] Samuel Kounev, Fabian Brosig, and Nikolaus Huber. Self-Aware QoS Management in Virtualized Infrastructures (Poster Paper). In 8th International Conference on Autonomic Computing (ICAC 2011), Karlsruhe, Germany, June 14-18, 2011. [ bib | .pdf | Abstract ]
We present an overview of our work-in-progress and long-term research agenda aiming to develop a novel methodology for engineering of self-aware software systems. The latter will have built-in architecture-level QoS models enhanced to capture dynamic aspects of the system environment and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and QoS requirements are satisfied.
[14] Christoph Rathfelder and Benjamin Klatt. Palladio workbench: A quality-prediction tool for component-based architectures. In Proceedings of the 2011 Ninth Working IEEE/IFIP Conference on Software Architecture (WICSA 2011), Boulder, Colorado, USA, June 20-24, 2011, pages 347-350. IEEE Computer Society, Washington, DC, USA. June 2011. [ bib | DOI | http | .pdf | Abstract ]
Today, software engineering is challenged to handle more and more large-scale distributed systems with a guaranteed level of service quality. Component-based architectures have been established to build more structured and manageable software systems. However, due to time and cost constraints, it is not feasible to use a trial and error approach to ensure that an architecture meets the quality of service (QoS) requirements. In this tool demo, we present the Palladio Workbench that permits the modeling of component-based software architectures and the prediction of its quality characteristics (e.g., response time and utilization). Additional to a general tool overview, we will give some insights about a new feature to analyze the impact of event-driven communication that was added in the latest release of the Palladio Component Model (PCM).
[15] Benjamin Klatt and Klaus Krogmann. Towards Tool-Support for Evolutionary Software Product Line Development. In 13th Workshop Software-Reengineering (WSR 2011), May 02-04 2011. Bad Honnef, Germany. [ bib | .pdf ]
[16] Nikolaus Huber, Fabian Brosig, and Samuel Kounev. Model-based Self-Adaptive Resource Allocation in Virtualized Environments. In 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011), Waikiki, Honolulu, HI, USA, May 23-24, 2011, pages 90-99. ACM, New York, NY, USA. May 2011, Acceptance Rate (Full Paper): 27% (21/76). [ bib | DOI | http | .pdf | Abstract ]
The adoption of virtualization and Cloud Computing technologies promises a number of benefits such as increased flexibility, better energy efficiency and lower operating costs for IT systems. However, highly variable workloads make it challenging to provide quality-of-service guarantees while at the same time ensuring efficient resource utilization. To avoid violations of service-level agreements (SLAs) or inefficient resource usage, resource allocations have to be adapted continuously during operation to reflect changes in application workloads. In this paper, we present a novel approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. We present a detailed case study of a representative enterprise application, the new SPECjEnterprise2010 benchmark, deployed in a virtualized cluster environment. The case study serves as a proof-of-concept demonstrating the effectiveness and practical applicability of our approach.
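A minimal control-loop sketch conveys the idea of model-based resource allocation: observe the workload, predict utilization with a model, and adapt the number of virtual machines to stay inside a target band. All numbers and the deliberately trivial prediction model are invented; the paper employs full architecture-level performance models instead.

    # Toy adaptation loop: observe the workload, predict utilization with a
    # (here: trivial) model, and add or remove virtual machines to stay
    # inside target bounds. All numbers are invented.

    CAPACITY_PER_VM = 100.0          # requests/s one VM can sustain
    TARGET = (0.4, 0.8)              # acceptable utilization band

    def predicted_utilization(arrival_rate, vms):
        return arrival_rate / (vms * CAPACITY_PER_VM)

    def adapt(arrival_rate, vms):
        u = predicted_utilization(arrival_rate, vms)
        if u > TARGET[1]:
            vms += 1                 # scale out before SLAs are violated
        elif u < TARGET[0] and vms > 1:
            vms -= 1                 # scale in to avoid idle resources
        return vms

    vms = 2
    for rate in [120, 200, 310, 280, 90]:
        vms = adapt(rate, vms)
        print(f"rate={rate:4d} req/s -> {vms} VM(s), "
              f"util={predicted_utilization(rate, vms):.2f}")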
[17] Nikolaus Huber, Marcel von Quast, Michael Hauck, and Samuel Kounev. Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), Noordwijkerhout, The Netherlands, May 7-9, 2011, pages 563-573. SciTePress. May 2011, Acceptance Rate: 18/164 = 10.9%, Best Paper Award. [ bib | http | .pdf | Abstract ]
Due to trends like Cloud Computing and Green IT, virtualization technologies are gaining increasing importance. They promise energy and cost savings by sharing physical resources, thus making resource usage more efficient. However, resource sharing and other factors have direct effects on system performance, which are not yet well-understood. Hence, performance prediction and performance management of services deployed in virtualized environments like public and private Clouds is a challenging task. Because of the large variety of virtualization solutions, a generic approach to predict the performance overhead of services running on virtualization platforms is highly desirable. In this paper, we present experimental results on two popular state-of-the-art virtualization platforms, Citrix XenServer 5.5 and VMware ESX 4.0, as representatives of the two major hypervisor architectures. Based on these results, we propose a basic, generic performance prediction model for the two different types of hypervisor architectures. The target is to predict the performance overhead for executing services on virtualized platforms.
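The kind of prediction model proposed can be sketched as a table of per-resource overhead factors, measured once per hypervisor architecture and then applied to native resource demands. The factors below are invented placeholders, not the paper's measurements.

    # Sketch of an overhead-factor prediction model for virtualized execution.
    # The factors are invented placeholders; the paper derives such factors
    # experimentally for the two major hypervisor architectures.

    OVERHEAD = {
        "full_virtualization": {"cpu": 1.05, "disk_io": 1.30, "network_io": 1.25},
        "paravirtualization":  {"cpu": 1.03, "disk_io": 1.15, "network_io": 1.10},
    }

    def predict_virtualized_demand(native_demand, hypervisor):
        """native_demand: resource -> seconds of demand measured natively."""
        factors = OVERHEAD[hypervisor]
        return {r: d * factors[r] for r, d in native_demand.items()}

    native = {"cpu": 2.0, "disk_io": 0.5, "network_io": 0.3}
    print(predict_virtualized_demand(native, "paravirtualization"))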
[18] Benjamin Klatt and Klaus Krogmann. Towards Tool-Support for Evolutionary Software Product Line Development. Softwaretechnik-Trends, 31(2):38-39, May 2011, Köllen Druck & Verlag GmbH, Bad Honnef, Germany. [ bib | .pdf | Abstract ]
Software vendors often need to vary their products to satisfy customer-specific requirements. In many cases, existing code is reused and adapted to the new project needs. This copy&paste course of action leads to a multi-product code base that is hard to maintain. Software Product Lines (SPL) emerged as an appropriate concept to manage product families with common functionality and code bases. Evolutionary SPLs, with a product-first approach and an exposed product line, provide advantages such as a reduced time-to-market and product lines based on evaluated and proven products.
[19] Samuel Kounev and Simon Spinner. QPME 2.0 User's Guide. Karlsruhe Institute of Technology, Am Fasanengarten 5, 76131 Karlsruhe, Germany, May 2011. [ bib | http | .pdf ]
[20] Michael Faber. Software Performance Analysis using Machine Learning Techniques. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, March 2011. [ bib ]
[21] Samuel Kounev, Konstantin Bender, Fabian Brosig, Nikolaus Huber, and Russell Okamoto. Automated Simulation-Based Capacity Planning for Enterprise Data Fabrics. In 4th International ICST Conference on Simulation Tools and Techniques, Barcelona, Spain, March 21-25, 2011, pages 27-36. ICST, Brussels, Belgium, Belgium. March 2011, Acceptance Rate (Full Paper): 29.8% (23/77), ICST Best Paper Award. [ bib | slides | .pdf | Abstract ]
Enterprise data fabrics are gaining increasing attention in many industry domains including financial services, telecommunications, transportation and health care. Providing a distributed, operational data platform sitting between application infrastructures and back-end data sources, enterprise data fabrics are designed for high performance and scalability. However, given the dynamics of modern applications, system sizing and capacity planning need to be done continuously during operation to ensure adequate quality-of-service and efficient resource utilization. While most products are shipped with performance monitoring and analysis tools, such tools are typically focused on low-level profiling and they lack support for performance prediction and capacity planning. In this paper, we present a novel case study of a representative enterprise data fabric, the GemFire EDF, and a simulation-based tool that we have developed for automated performance prediction and capacity planning. The tool, called Jewel, automates resource demand estimation, performance model generation, performance model analysis and results processing. We present an experimental evaluation of the tool demonstrating its effectiveness and practical applicability.
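Resource demand estimation, the first step the tool automates, is commonly based on the Service Demand Law D_i = U_i / X: the utilization of resource i divided by the system throughput gives the per-request service demand. A minimal sketch with invented measurements (whether Jewel uses exactly this estimator is not stated in the abstract):

    # Resource demand estimation via the Service Demand Law: D_i = U_i / X,
    # where U_i is the measured utilization of resource i and X the measured
    # system throughput. The measurements below are invented.

    throughput = 250.0                       # completed requests per second
    utilization = {"cpu": 0.55, "disk": 0.20, "network": 0.10}

    demands = {res: u / throughput for res, u in utilization.items()}
    for res, d in demands.items():
        print(f"{res:8s}: {d * 1000:.3f} ms of service demand per request")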
[22] Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors. ICPE'11 - 2nd Joint ACM/SPEC International Conference on Performance Engineering, Karlsruhe, Germany, March 14-16, 2011, New York, NY, USA, March 2011. ACM. [ bib ]
[23] Fabian Brosig. Online performance prediction with architecture-level performance models. In Software Engineering (Workshops) - Doctoral Symposium, February 21-25, 2011, Ralf Reussner, Alexander Pretschner, and Stefan Jähnichen, editors, February 2011, volume 184 of Lecture Notes in Informatics (LNI), pages 279-284. GI, Bonn, Germany. February 2011. [ bib | .pdf | Abstract ]
Today's enterprise systems based on increasingly complex software architectures often exhibit poor performance and resource efficiency thus having high operating costs. This is due to the inability to predict at run-time the effect of changes in the system environment and adapt the system accordingly. We propose a new performance modeling approach that allows the prediction of performance and system resource utilization online during system operation. We use architecture-level performance models that capture the performance-relevant information of the software architecture, deployment, execution environment and workload. The models will be automatically maintained during operation. To derive performance predictions, we propose a tailorable model solving approach to provide flexibility in view of prediction accuracy and analysis overhead.
[24] Christof Momm and Rouven Krebs. A Qualitative Discussion of Different Approaches for Implementing Multi-Tenant SaaS Offerings (short paper). In Proceedings of the Software Engineering 2011 - Workshopband (ESoSyM-2011), Ralf Reussner, Alexander Pretschner, and Stefan Jähnichen, editors, Karlsruhe, Germany, February 21, 2011, pages 139-150. Fachgruppe OOSE der Gesellschaft für Informatik und ihrer Arbeitskreise, Bonner Köllen Verlag, Bonn-Buschdorf, Germany. February 2011. [ bib | .pdf ]
[25] Matthias Huber, Christian Henrich, Jörn Müller-Quade, and Carmen Kempka. Towards secure cloud computing through a separation of duties. In Informatik 2011: Informatik schafft Communities, Beiträge der 41. Jahrestagung der Gesellschaft für Informatik e.V. (GI), 4.-7.10.2011, Berlin (Abstract Proceedings), Hans-Ulrich Heiß, Peter Pepper, Holger Schlingloff, and Jörg Schneider, editors, 2011, volume 192 of LNI. GI. 2011. [ bib ]
[26] Matthias Huber and Jörn Müller-Quade. Methods to secure services in an untrusted environment. In Software Engineering 2011: Fachtagung des GI-Fachbereichs Softwaretechnik, 21.-25. Februar 2011 in Karlsruhe, Ralf Reussner, Matthias Grund, Andreas Oberweis, and Walter F. Tichy, editors, 2011, volume 183 of LNI, pages 159-170. GI. 2011. [ bib ]
[27] Dirk Achenbach, Matthias Gabel, and Matthias Huber. Mimosecco: A middleware for secure cloud storage. In Improving Complex Systems Today, Daniel D. Frey, Shuichi Fukuda, and Georg Rock, editors, Advanced Concurrent Engineering, pages 175-181. Springer London, 2011. 10.1007/978-0-85729-799-0_20. [ bib | http ]
[28] Mauricio Alférez, Nuno Amálio, Selim Ciraci, Franck Fleurey, Jörg Kienzle, Jacques Klein, Max Kramer, Sebastien Mosser, Gunter Mussbacher, Ella Roubtsova, and Gefei Zhang. Aspect-oriented model development at different levels of abstraction. In Modelling Foundations and Applications, Robert France, Jochen Kuester, Behzad Bordbar, and Richard Paige, editors, volume 6698 of Lecture Notes in Computer Science, pages 361-376. Springer Berlin / Heidelberg, 2011. [ bib | http | Abstract ]
The last decade has seen the development of diverse aspect-oriented modeling (AOM) approaches. This paper presents eight different AOM approaches that produce models at different levels of abstraction. The approaches differ with respect to the phases of the development lifecycle they target, and the support they provide for model composition and verification. The approaches are illustrated by models of the same concern from a case study to enable comparison of their expressive means. Understanding common elements and differences of approaches clarifies the role of aspect-orientation in the software development process.
[29] Steffen Becker. Towards System Viewpoints to Specify Adaptation Models at Runtime. In Proc. of the Software Engineering Conference, Young Researchers Track (SE 2011), 2011, volume 31 of Softwaretechnik-Trends. [ bib | .pdf ]
[30] Steffen Becker, Markus von Detten, Christian Heinzemann, and Jan Rieke. Structuring complex story diagrams by polymorphic calls. Technical report, Software Engineering Group, Heinz Nixdorf Institute, 2011. [ bib ]
[31] Frank Brüseke, Gregor Engels, and Steffen Becker. Palladio-based performance blame analysis. In Proc. 16th International Workshop on Component Oriented Programming (WCOP'11), Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, 2011. [ bib ]
[32] Franz Brosch. Service Level Agreements for Cloud Computing, chapter Software Performance and Reliability Prediction, pages 153-164. Springer New York, 2011. [ bib | DOI ]
[33] Franz Brosch, Barbora Buhnova, Heiko Koziolek, and Ralf Reussner. Reliability Prediction for Fault-Tolerant Software Architectures. In International ACM Sigsoft Conference on the Quality of Software Architectures (QoSA), 2011, pages 75-84. ACM, New York, NY, USA. 2011. [ bib | .pdf | Abstract ]
Software fault tolerance mechanisms aim at improving the reliability of software systems. Their effectiveness (i.e., reliability impact) is highly application-specific and depends on the overall system architecture and usage profile. When examining multiple architecture configurations, such as in software product lines, it is a complex and error-prone task to include fault tolerance mechanisms effectively. Existing approaches for reliability analysis of software architectures either do not support modelling fault tolerance mechanisms or are not designed for an efficient evaluation of multiple architecture variants. We present a novel approach to analyse the effect of software fault tolerance mechanisms in varying architecture configurations. We have validated the approach in multiple case studies, including a large-scale industrial system, demonstrating its ability to support architecture design, and its robustness against imprecise input data.
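As background, the configuration-specific payoff of fault tolerance can be illustrated with the standard series/parallel reliability formulas (numbers invented; the paper's models cover richer mechanisms, usage profiles, and propagation effects):

    # Standard reliability formulas illustrating why the payoff of fault
    # tolerance is configuration-specific. Numbers are invented.

    def series(*r):      # all parts must work (e.g., a call chain)
        p = 1.0
        for x in r:
            p *= x
        return p

    def parallel(*r):    # at least one replica must work (primary + backups)
        q = 1.0
        for x in r:
            q *= (1.0 - x)
        return 1.0 - q

    plain      = series(0.99, 0.95)                 # frontend -> fragile backend
    replicated = series(0.99, parallel(0.95, 0.95)) # same backend, one backup
    print(f"without fault tolerance: {plain:.4f}")
    print(f"with backup replica:     {replicated:.4f}")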
[34] Erik Burger and Ralf Reussner. Performance Certification of Software Components. In Proceedings of the 8th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA), 2011, volume 279 of Electronic Notes in Theoretical Computer Science, pages 33-41. Elsevier Science Publishers B. V. 2011. [ bib | slides | .pdf | Abstract ]
Non-functional properties of software should be specified early in the development process. In a distributed process of software development, this means that quality requirements must be made explicit in the specification, and the developing party of a commissioned component needs to deliver not only the implemented component, but also a description of its non-functional properties. Based on these artefacts, a conformance check guarantees that the implemented component fulfills the performance requirements. We extend the notion of model refinement to non-functional properties of software and propose a refinement calculus for conformance checking between abstract performance descriptions of components. The calculus is based on a refinement notion that covers the performance-relevant aspects of components. The approach is applied to the Palladio Component Model as a description language for performance properties of components.
[35] Zoya Durdik. A Proposal on Validation of an Agile Architecture-Modelling Process. In Proceedings of Software Engineering 2011 (SE2011), Doktoranden-Symposium, 2011. [ bib ]
[36] Zoya Durdik. Towards a process for architectural modelling in agile software development. In Proceedings of the Seventh International ACM Sigsoft Conference on the Quality of Software Architectures (QoSA 2011), 2011. Boulder, Colorado, USA. [ bib ]
[37] Zoya Durdik. An architecture-centric approach for goal-driven requirements elicitation. In Proceedings of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2011 (ESEC/FSE 2011), Doctoral Symposium, 2011. Szeged, Hungary. [ bib ]
[38] Zoya Durdik, Jens Drawehn, and Matthias Herbert. Towards automated service quality prediction for development of enterprise mashups. In 5th International Workshop on Web APIs and Service Mashups @ ECOWS 2011, 2011. Lugano, Switzerland. [ bib ]
[39] Daniel Dominguez Gouvêa, Cyro Muniz, Gilson Pinto, Alberto Avritzer, Rosa Maria Meri Leão, Edmundo de Souza e Silva, Morganna Carmem Diniz, Luca Berardinelli, Julius C. B. Leite, Daniel Mossé, Yuanfang Cai, Mike Dalton, Lucia Kapova, and Anne Koziolek. Experience building non-functional requirement models of a complex industrial architecture. In Proceedings of the second joint WOSP/SIPEW international conference on Performance engineering (ICPE 2011), Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors, Karlsruhe, Germany, 2011, pages 43-54. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf ]
[40] Henning Groenda. An Accuracy Information Annotation Model for Validated Service Behavior Specifications. In Models in Software Engineering, Juergen Dingel and Arnor Solberg, editors, volume 6627 of Lecture Notes in Computer Science, pages 369-383. Springer Berlin / Heidelberg, 2011. 10.1007/978-3-642-21210-9_36. [ bib | http | Abstract ]
Assessing providable service levels based on model-driven prediction approaches requires valid service behavior specifications. Such specifications must be suitable for the requested usage profile and available hardware to make correct predictions and decisions on providable service levels. Assessing the precision of given parameterized performance specifications is often done manually in an ad-hoc way based on the experience of the performance engineer. In this paper, we show how the accuracy of a specification can be assessed and stated and how validation settings of model-based testing can ease precision assessments. The applicability of the approach is shown on a case study. We demonstrate how our approach allows accuracy statements and can be used in combination with usage profile and platform independent performance validations, as well as point out how accuracy assessments are eased.
[41] Michael Hauck, Jens Happe, and Ralf Reussner. Towards Performance Prediction for Cloud Computing Environments Based on Goal-oriented Measurements. In Proceedings of the 1st International Conference on Cloud Computing and Services Science (CLOSER 2011), 2011, pages 616-622. SciTePress. 2011. [ bib | http | .pdf ]
[42] Nikolas Roman Herbst. Quantifying the Impact of Configuration Space for Elasticity Benchmarking. Study Thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib | .pdf | Abstract ]
Elasticity is the ability of a software system to dynamically adapt the amount of the resources it provides to clients as their workloads increase or decrease. In the context of cloud computing, automated resizing of a virtual machine's resources can be considered a key step towards optimisation of a system's cost and energy efficiency. Existing work on cloud computing is limited to the technical view of implementing elastic systems, and definitions of scalability have not been extended to cover elasticity. This study thesis presents a detailed discussion of elasticity, proposes metrics as well as measurement techniques, and outlines next steps for enabling comparisons between cloud computing offerings on the basis of elasticity. I discuss results of our work on measuring elasticity of thread pools provided by the Java virtual machine, as well as an experiment setup for elastic CPU time slice resizing in a virtualized environment. An experiment setup is presented as future work for dynamically adding and removing z/VM Linux virtual machine instances to a performance relevant group of virtualized servers.
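One way to make elasticity measurable is to compare supplied against demanded resource units over time and accumulate over- and under-provisioning. The following metric and traces are invented placeholders in the spirit of, but not identical to, the metrics proposed in the thesis:

    # Illustrative elasticity-related metric: accumulated over- and
    # under-provisioning between a demand curve and the supply an elastic
    # system actually allocated. Traces and metric are invented placeholders.

    demand = [2, 2, 4, 6, 6, 3, 2]   # resource units needed per interval
    supply = [2, 2, 2, 5, 7, 7, 3]   # resource units allocated per interval

    over = sum(max(0, s - d) for s, d in zip(supply, demand))
    under = sum(max(0, d - s) for s, d in zip(supply, demand))
    print(f"over-provisioned unit-intervals:  {over}")   # wasted capacity
    print(f"under-provisioned unit-intervals: {under}")  # potential SLA violations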
[43] Georg Hinkel. Metrics for comparing response time distributions. Bachelor thesis, Karlsruhe Institute of Technology, 2011. [ bib | .pdf ]
[44] Lucia Kapova. Reusable QoS Specifications for Systematic Component-based Design. In ICPE'11: Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering, 2011. [ bib ]
[45] Samuel Kounev. Engineering of Self-Aware IT Systems and Services: State-of-the-Art and Research Challenges. In Proceedings of the 8th European Performance Engineering Workshop (EPEW'11), Borrowdale, The English Lake District, October 12-13, 2011. (Keynote Talk). [ bib | .pdf ]
[46] Samuel Kounev. Self-Aware Software and Systems Engineering: A Vision and Research Roadmap. In GI Softwaretechnik-Trends, 31(4), November 2011, ISSN 0720-8928, 2011. Karlsruhe, Germany. [ bib | .html | .pdf ]
[47] Heiko Koziolek, Steffen Becker, Jens Happe, and Paul Pettersson. Quality of service-oriented software systems (QUASOSS 2010). Models in Software Engineering, pages 364-368, 2011, Springer. [ bib ]
[48] Anne Koziolek. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes. PhD thesis, Institut für Programmstrukturen und Datenorganisation (IPD), Karlsruher Institut für Technologie, Karlsruhe, Germany, 2011. [ bib | http | .pdf ]
[49] Anne Koziolek, Heiko Koziolek, and Ralf Reussner. PerOpteryx: automated application of tactics in multi-objective software architecture optimization. In Joint proceedings of the Seventh International ACM SIGSOFT Conference on the Quality of Software Architectures and the 2nd ACM SIGSOFT International Symposium on Architecting Critical Systems (QoSA-ISARCS 2011), Ivica Crnkovic, Judith A. Stafford, Dorina C. Petriu, Jens Happe, and Paola Inverardi, editors, Boulder, Colorado, USA, 2011, pages 33-42. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Designing software architectures that exhibit a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. In current practice, software architects try to find good solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach guided by architectural tactics to search the design space for good solutions. Our approach applies multi-objective evolutionary optimization to software architectures modelled with the Palladio Component Model. Software architects can then make well-informed trade-off decisions and choose the best architecture for their situation. To validate our approach, we applied it to the architecture models of two systems, a business reporting system and an industrial control system from ABB. The approach was able to find meaningful trade-offs leading to significant performance improvements or cost savings. The novel use of tactics decreased the time needed to find good solutions by up to 80%.
[50] Anne Koziolek, Qais Noorshams, and Ralf Reussner. Focussing multi-objective software architecture optimization using quality of service bounds. In Models in Software Engineering, Workshops and Symposia at MODELS 2010, Oslo, Norway, October 3-8, 2010, Reports and Revised Selected Papers, J. Dingel and A. Solberg, editors, 2011, volume 6627 of Lecture Notes in Computer Science, pages 384-399. Springer-Verlag Berlin Heidelberg. 2011. [ bib | DOI | http | .pdf | Abstract ]
Quantitative prediction of non-functional properties, such as performance, reliability, and costs, of software architectures supports systematic software engineering. Even though there usually is a rough idea on bounds for quality of service, the exact required values may be unclear and subject to trade-offs. Designing architectures that exhibit such a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g. component deployment or server configuration) span a large design space. Automated approaches search the design space with multi-objective metaheuristics such as evolutionary algorithms. However, as quality prediction for a single architecture is computationally expensive, these approaches are time-consuming. In this work, we enhance an automated improvement approach to take into account bounds for quality of service in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search. We compare two different constraint-handling techniques to consider the bounds. To validate our approach, we applied both techniques to an architecture model of a component-based business information system. We compared both techniques to an unbounded search in 4 scenarios. Every scenario was examined with 10 optimization runs, each investigating around 1600 architectural candidates. The results indicate that the integration of quality of service bounds during the optimization process can improve the quality of the solutions found; however, the effect depends on the scenario, i.e. the problem and the quality requirements. The best results were achieved for cost requirements: the approach was able to decrease the time needed to find good solutions in the interesting regions of the objective space by 25% on average.
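As a hedged illustration of the constraint-handling idea described in this abstract (not the paper's actual implementation; the bound value and names below are assumptions), the following Java sketch ranks architectural candidates by constrained Pareto dominance, in which any candidate violating an assumed response-time bound is dominated by every feasible one:

    // Minimal sketch of constrained Pareto dominance with a QoS bound.
    // A common constraint-handling scheme, not the paper's own code.
    public class ConstrainedDominance {

        record Candidate(double responseTimeMs, double cost) {}

        static final double MAX_RESPONSE_TIME_MS = 500.0; // assumed QoS bound

        static double violation(Candidate c) {
            return Math.max(0.0, c.responseTimeMs() - MAX_RESPONSE_TIME_MS);
        }

        // Returns true if a dominates b under constrained Pareto dominance.
        static boolean dominates(Candidate a, Candidate b) {
            double va = violation(a), vb = violation(b);
            if (va == 0.0 && vb > 0.0) return true;    // feasible beats infeasible
            if (va > 0.0 && vb > 0.0) return va < vb;  // smaller violation wins
            if (va > 0.0) return false;
            boolean noWorse = a.responseTimeMs() <= b.responseTimeMs() && a.cost() <= b.cost();
            boolean better  = a.responseTimeMs() <  b.responseTimeMs() || a.cost() <  b.cost();
            return noWorse && better;                  // classic Pareto dominance
        }

        public static void main(String[] args) {
            Candidate fast = new Candidate(320.0, 1200.0);
            Candidate slow = new Candidate(650.0,  800.0); // violates the bound
            System.out.println(dominates(fast, slow));     // true: slow is infeasible
        }
    }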
[51] Anne Koziolek and Ralf Reussner. Towards a generic quality optimisation framework for component-based system models. In Proceedings of the 14th international ACM Sigsoft symposium on Component based software engineering, Ivica Crnkovic, Judith A. Stafford, Antonia Bertolino, and Kendra M. L. Cooper, editors, Boulder, Colorado, USA, 2011, CBSE '11, pages 103-108. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Designing component-based systems (CBS) that exhibit a good trade-off between multiple quality criteria is hard. Even after functional design, many remaining degrees of freedom of different types (e.g. component allocation, component selection, server configuration) in the CBS span a large, discontinuous design space. Automated approaches have been proposed to optimise CBS models, but they only consider a limited set of degrees of freedom, e.g. they only optimise the selection of components without considering the allocation, or vice versa. We propose a flexible and extensible formulation of the design space for optimising any CBS model for a number of quality properties and an arbitrary number of degrees of freedom. With this design space formulation, a generic quality optimisation framework that is independent of the used CBS metamodel can apply multi-objective metaheuristic optimisation such as evolutionary algorithms.
[52] Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek. An industrial case study on quality impact prediction for evolving service-oriented software. In Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011), Software Engineering in Practice Track, Richard N. Taylor, Harald Gall, and Nenad Medvidovic, editors, Waikiki, Honolulu, HI, USA, 2011, pages 776-785. ACM, New York, NY, USA. 2011, Acceptance Rate: 18% (18/100). [ bib | DOI | http | Abstract ]
Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.
[53] Heiko Koziolek, Roland Weiss, Zoya Durdik, Johannes Stammel, and Klaus Krogmann. Towards Software Sustainability Guidelines for Long-living Industrial Systems. In Proceedings of Software Engineering (Workshops), 3rd Workshop of GI Working Group Long-living Software Systems (L2S2), Design for Future, 2011, volume 184 of LNI, pages 47-58. GI. 2011. [ bib | .pdf | Abstract ]
Long-living software systems are sustainable if they can be cost-effectively maintained and evolved over their complete life-cycle. Software-intensive systems in the industrial automation domain are typically long-living and cause high evolution costs, because of new customer requirements, technology changes, and failure reports. Many methods for sustainable software development have been proposed in the scientific literature, but most of them are not applied in industrial practice. We identified typical evolution scenarios in the industrial automation domain and conducted an extensive literature search to extract a number of guidelines for sustainable software development based on the methods found in literature. For validation purposes, we map one evolution scenario to these guidelines in this paper.
[54] Max E. Kramer and Jörg Kienzle. Mapping aspect-oriented models to aspect-oriented code. In Models in Software Engineering, Juergen Dingel and Arnor Solberg, editors, volume 6627 of Lecture Notes in Computer Science, pages 125-139. Springer Berlin / Heidelberg, 2011. [ bib | http | Abstract ]
When aspect-oriented modeling techniques are used in the context of Model-Driven Engineering, a possible way of obtaining an executable from an aspect-oriented model is to map it to code written in an aspect-oriented programming language. This paper outlines the most important challenges that arise when defining such a mapping: mapping structure and behavior of a single aspect, mapping instantiation of structure and behavior in target models, mapping conflict resolution between aspects, and mapping aspect dependencies and variability. To explain these mapping issues, our paper presents details on how to map Reusable Aspect Models (RAM) to AspectJ source code. The ideas are illustrated by presenting example models and corresponding mapped code from the AspectOptima case study.
[55] Michael Kuperberg, Nikolas Roman Herbst, Joakim Gunnarson von Kistowski, and Ralf Reussner. Defining and Quantifying Elasticity of Resources in Cloud Computing and Scalable Platforms. Technical report, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib | http | .pdf | Abstract ]
Elasticity is the ability of a software system to dynamically scale the amount of the resources it provides to clients as their workloads increase or decrease. Elasticity is praised as a key advantage of cloud computing, where computing resources are dynamically added and released. However, there exists no concise or formal definition of elasticity, and thus no approaches to quantify it have been developed so far. Existing work on cloud computing is limited to the technical view of implementing elastic systems, and definitions of scalability have not been extended to cover elasticity. In this report, we present a detailed discussion of elasticity, propose techniques for quantifying and measuring it, and outline next steps to be taken for enabling comparisons between cloud computing offerings on the basis of elasticity. We also present preliminary work on measuring elasticity of resource pools provided by the Java Virtual Machine.
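To make the notion of quantifying elasticity concrete, the following minimal Java sketch accumulates under- and over-provisioned resource-time between a demand curve and the allocated supply. The metric, names, and numbers are illustrative assumptions, not the report's actual definitions or measurement techniques.

    // Illustrative elasticity sketch: compare demand against allocated supply
    // sampled at equidistant time steps and total up the mismatch.
    public class ElasticitySketch {

        static double[] accumulate(int[] demand, int[] supply) {
            double under = 0.0, over = 0.0;
            for (int t = 0; t < demand.length; t++) {
                int diff = supply[t] - demand[t];
                if (diff < 0) under += -diff;  // resource-time units missing
                else          over  += diff;   // resource-time units wasted
            }
            return new double[] { under, over };
        }

        public static void main(String[] args) {
            int[] demand = {2, 4, 8, 8, 4, 2};
            int[] supply = {2, 2, 4, 8, 8, 4}; // supply lags demand by one step
            double[] uo = accumulate(demand, supply);
            System.out.printf("under-provisioning: %.0f, over-provisioning: %.0f%n",
                    uo[0], uo[1]);
        }
    }

A slowly reacting system accumulates large under-provisioning during ramp-ups and large over-provisioning during ramp-downs, which is exactly the kind of behaviour an elasticity metric must penalise.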
[56] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. Metric-based Selection of Timer Methods for Accurate Measurements. In Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering, Karlsruhe, Germany, 2011, ICPE '11, pages 151-156. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Performance measurements are often concerned with accurate recording of timing values, which requires timer methods of high quality. Evaluating the quality of a given timer method or performance counter involves analysing several properties, such as accuracy, invocation cost and timer stability. These properties are metrics with platform-dependent values, and ranking and selecting timer methods requires comparisons using multidimensional metric sets, which make the comparisons ambiguous and unnecessarily complex. To solve this problem, this paper proposes a new unified metric that allows for a simpler comparison. The one-dimensional metric is designed to capture fine-granular differences between timer methods, and normalises accuracy and other quality attributes by using CPU cycles instead of time units. The proposed metric is evaluated on all timer methods provided by Java and .NET platform APIs.
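The following Java sketch illustrates the kind of raw timer-quality measurements such a metric builds on, namely estimating the smallest observable tick of System.nanoTime() and its per-invocation cost. It uses only standard Java APIs; the paper's unified, cycle-normalised metric itself is not reproduced here.

    // Illustrative timer-quality probe: granularity and invocation cost.
    public class TimerProbe {

        static long sink; // consumed results keep the calls from being optimised away

        // Smallest positive difference between successive nanoTime() readings.
        static long smallestTick(int samples) {
            long min = Long.MAX_VALUE;
            for (int i = 0; i < samples; i++) {
                long t1 = System.nanoTime();
                long t2 = System.nanoTime();
                while (t2 == t1) t2 = System.nanoTime(); // spin until the value changes
                min = Math.min(min, t2 - t1);
            }
            return min;
        }

        // Rough per-invocation cost, averaged over many back-to-back calls.
        static double invocationCostNs(int calls) {
            long start = System.nanoTime();
            for (int i = 0; i < calls; i++) {
                sink += System.nanoTime();
            }
            return (System.nanoTime() - start) / (double) calls;
        }

        public static void main(String[] args) {
            System.out.println("smallest tick (ns): " + smallestTick(1_000));
            System.out.printf("invocation cost (ns): %.1f%n", invocationCostNs(1_000_000));
        }
    }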
[57] Michael Kuperberg and Ralf Reussner. Analysing the Fidelity of Measurements Performed With Hardware Performance Counters. In Proceedings of the International Conference on Software Engineering 2011 (ICPE'11), March 14-16, 2011, Karlsruhe, Germany, 2011. [ bib | .pdf | Abstract ]
Performance evaluation requires accurate and dependable measurements of timing values. Such measurements are usually made using timer methods, but these methods are often too coarse-grained and too inaccurate. Thus, direct usage of hardware performance counters is frequently used for fine-granular measurements due to higher accuracy. However, direct access to these counters may be misleading on multicore computers because cores can be paused or core affinity changed by the operating system, resulting in misleading counter values. The contribution of this paper is the demonstration of an additional, significant flaw arising from the direct use of hardware performance counters. We demonstrate that using JNI and assembler instructions to access the Timestamp Counter from Java applications can result in grossly wrong values, even in single-threaded scenarios.
[58] Anne Martens, Heiko Koziolek, Lutz Prechelt, and Ralf Reussner. From monolithic to component-based performance evaluation of software architectures. Empirical Software Engineering, 16(5):587-622, 2011, Springer Netherlands. [ bib | DOI | http | .pdf | Abstract ]
Background: Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the needed effort for modelling are heavily influenced by human factors, which are so far hardly understood empirically.

Objective: Do component-based methods allow making performance predictions with comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users.

Methods: We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance.

Results: For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because effort to reuse can be explained by a model that is independent of the inner complexity of a component.

Limitations: The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process.

Conclusions: Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.

[59] Philipp Meier, Samuel Kounev, and Heiko Koziolek. Automated Transformation of Component-based Software Architecture Models to Queueing Petri Nets. In 19th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2011), Singapore, July 25-27, 2011. Acceptance Rate (Full Paper): 41/157 = 26%. [ bib | .pdf ]
[60] Philipp Merkle. Comparing process- and event-oriented software performance simulation. Master's thesis, Karlsruhe Institute of Technology (KIT), Germany, 2011. [ bib | .pdf ]
[61] Philipp Merkle and Jörg Henss. EventSim - an event-driven Palladio software architecture simulator. In Palladio Days 2011 Proceedings (appeared as technical report), Steffen Becker, Jens Happe, and Ralf Reussner, editors, 2011, Karlsruhe Reports in Informatics; 2011,32, pages 15-22. KIT, Fakultät für Informatik, Karlsruhe. 2011. [ bib | http ]
[62] Christof Momm, Stefan Sauer, and Mircea Trifu. Dritter Workshop zu "Design for Future" (DFF 2011) - Workshop-Report [Third workshop on "Design for Future" (DFF 2011) - workshop report]. In Software Engineering 2011 - Workshopband (inkl. Doktorandensymposium), Fachtagung des GI-Fachbereichs Softwaretechnik, 2011, volume 184 of LNI, pages 3-8. GI. 2011. [ bib ]
[63] Christoph Rathfelder, Benjamin Klatt, and Giovanni Falcone. The Open Reference Case: A Reference Use Case for the SLA@SOI Framework. Springer, New York, 2011. [ bib | http ]
[64] Ralf Reussner, Steffen Becker, Erik Burger, Jens Happe, Michael Hauck, Anne Koziolek, Heiko Koziolek, Klaus Krogmann, and Michael Kuperberg. The Palladio Component Model. Technical report, KIT, Fakultät für Informatik, Karlsruhe, 2011. [ bib | http | Abstract ]
This report introduces the Palladio Component Model (PCM), a novel software component model for business information systems, which is specifically tuned to enable model-driven quality-of-service (QoS, i.e., performance and reliability) predictions. The PCM's goal is to assess the expected response times, throughput, and resource utilization of component-based software architectures during early development stages. This shall avoid costly redesigns, which might occur after a poorly designed architecture has been implemented. Software architects should be enabled to analyse different architectural design alternatives and to support their design decisions with quantitative results from performance or reliability analysis tools.
[65] Stefan Sauer, Christof Momm, and Mircea Trifu. Dritter Workshop zu "Design for Future - Langlebige Softwaresysteme" [Third workshop on "Design for Future - long-living software systems"]. In Software Engineering 2011: Fachtagung des GI-Fachbereichs Softwaretechnik, 2011, volume 183 of LNI, page 197. GI. 2011. [ bib ]
[66] Johannes Stammel, Zoya Durdik, Klaus Krogmann, Roland Weiss, and Heiko Koziolek. Software Evolution for Industrial Automation Systems: Literature Overview. Karlsruhe Reports in Informatics 2011,2, Karlsruhe, Germany, 2011. [ bib | http | .pdf | Abstract ]
In this document we collect and classify literature with respect to software evolution. The main objective is to get an overview of approaches for the evolution of sustainable software systems with focus on the domain of industrial process control systems.
[67] Johannes Stammel and Mircea Trifu. Tool-supported estimation of software evolution effort in service-oriented systems. In Joint Proceedings of the First International Workshop on Model-Driven Software Migration (MDSM 2011) and the Fifth International Workshop on Software Quality and Maintainability (SQM 2011), 2011, volume 708, pages 56-63. CEUR-WS.org. 2011. [ bib ]
[68] Nigel Thomas, Jeremy Bradley, William Knottenbelt, Samuel Kounev, Nikolaus Huber, and Fabian Brosig. Preface. Electronic Notes in Theoretical Computer Science, 275:1-3, 2011, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI ]
[69] Oleg Travkin, Markus von Detten, and Steffen Becker. Towards the combination of clustering-based and pattern-based reverse engineering approaches. In Proceedings of the 3rd Workshop of the GI Working Group L2S2 - Design for Future 2011, 2011. [ bib ]
[70] Catia Trubiani and Anne Koziolek. Detection and solution of software performance antipatterns in Palladio architectural models. In Proceeding of the second joint WOSP/SIPEW international conference on Performance engineering, Samuel Kounev, Vittorio Cortellessa, Raffaela Mirandola, and David J. Lilja, editors, Karlsruhe, Germany, 2011, ICPE '11, pages 19-30. ACM, New York, NY, USA. 2011, ICPE best paper award. [ bib | DOI | http | .pdf | Abstract ]
Antipatterns are conceptually similar to patterns in that they document recurring solutions to common design problems. Performance Antipatterns document, from a performance perspective, common mistakes made during software development as well as their solutions. The definition of performance antipatterns concerns software properties that can include static, dynamic, and deployment aspects. Currently, such knowledge is only used by domain experts; the problem of automatically detecting and solving antipatterns within an architectural model has not been investigated yet. In this paper we present an approach to automatically detect and solve software performance antipatterns within Palladio architectural models: the detection of an antipattern provides software performance feedback to designers, since it suggests the architectural alternatives that actually allow specific performance problems to be overcome. We implemented the approach, and a case study is presented to demonstrate its validity. The performance of the system under study was improved by 50% by applying the antipatterns' solutions.
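As a rough illustration of what an automated antipattern detection rule can look like (the paper operates on Palladio architectural models; the rule, thresholds, and names below are assumptions, not the paper's definitions), consider this Java sketch of a threshold-based check:

    // Toy detection rule: flag a single near-saturated resource with a long
    // queue as a "One-Lane Bridge"-style suspect. Purely illustrative.
    public class AntipatternRule {

        record ResourceStats(String name, double utilisation, double avgQueueLength) {}

        static boolean oneLaneBridgeSuspect(ResourceStats r) {
            // A resource close to saturation while requests pile up in front of it
            return r.utilisation() > 0.85 && r.avgQueueLength() > 10.0;
        }

        public static void main(String[] args) {
            ResourceStats db = new ResourceStats("dbConnectionPool", 0.93, 24.5);
            if (oneLaneBridgeSuspect(db)) {
                System.out.println(db.name() + " flagged as One-Lane Bridge suspect");
            }
        }
    }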
[71] Dennis Westermann and Jens Happe. Performance Cockpit: Systematic Measurements and Analyses. In ICPE'11: Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering, Karlsruhe, Germany, 2011. ACM, New York, NY, USA. 2011. [ bib | http ]
[72] Ralf Reussner, Matthias Grund, Andreas Oberweis, and Walter F. Tichy, editors. Software Engineering 2011: Fachtagung des GI-Fachbereichs Softwaretechnik, 21.-25. Februar 2011 in Karlsruhe, volume 183 of LNI. GI, 2011. [ bib ]
[73] Misha Strittmatter. Performance abstractions of communication patterns for connectors. Study thesis, Karlsruhe Institute of Technology (KIT), Germany, January 2011. [ bib | slides | .pdf ]
[74] Thomas Goldschmidt. View-based textual modelling. PhD thesis, Karlsruhe, 2011. [ bib | http ]
[75] Christian Stier. Enhanced Selectivity Estimation using Subspace Clustering. Bachelor's thesis, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib ]
[76] Robert Heinrich, Alexander Kappe, and Barbara Paech. Modeling quality information within business process models. In Proceedings of the 4th SQMB Workshop, TUM-I1104, 2011, pages 4-13. [ bib | .pdf ]
[77] Robert Heinrich, Alexander Kappe, and Barbara Paech. Tool support for the comprehensive modeling of quality information within business process models. In Enterprise Modelling and Information Systems Architecture, 2011, pages 213-218. [ bib | .pdf ]
[78] Markus Birkle, Benjamin Schneider, Tobias Beck, Thomas Deuster, Markus Fischer, Florian Flatow, Robert Heinrich, Christian Kapp, Jasmin Riemer, Michael Simon, and Björn Bergh. Implementation of an open source provider organization registry service. Studies in Health Technology and Informatics, 169:265-269, 2011, IOS Press. [ bib ]

Publications 2010

[1] Frank Eichinger, Matthias Huber, and Klemens Böhm. On the Usefulness of Weight-Based Constraints in Frequent Subgraph Mining. In Proceedings of the 30th BCS SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence (AI), Max Bramer, Miltos Petridis, and Adrian Hopgood, editors, December 2010. BCS SGAI, Springer London, UK, Cambridge, UK. December 2010. [ bib | .pdf | Abstract ]
Frequent subgraph mining is an important data-mining technique. In this paper we look at weighted graphs, which are ubiquitous in the real world. The analysis of weights in combination with mining for substructures might yield more precise results. In particular, we study frequent subgraph mining in the presence of weight-based constraints and explain how to integrate them into mining algorithms. While such constraints only yield approximate mining results in most cases, we demonstrate that such results are useful nevertheless and explain this effect. To do so, we both assess the completeness of the approximate result sets, and we carry out application-oriented studies with real-world data-analysis problems: software-defect localization and explorative mining in transportation logistics. Our results are that the runtime can improve by a factor of up to 3.5 in defect localization and 7 in explorative mining. At the same time, we obtain an even slightly increased defect-localization precision and obtain good explorative mining results.
[2] Marco Comuzzi, Constantinos Kotsokalis, Christoph Rathfelder, Wolfgang Theilmann, Ulrich Winkler, and Gabriele Zacco. A framework for multi-level SLA management. In Service-Oriented Computing. ICSOC/ServiceWave 2009 Workshops, Asit Dan, Frédéric Gittler, and Farouk Toumani, editors, Stockholm, Sweden, November 23-27, 2010, volume 6275 of Lecture Notes in Computer Science, pages 187-196. Springer, Berlin, Heidelberg. November 2010. [ bib | DOI | http | .pdf | Abstract ]
Service-Oriented Architectures (SOA) represent an architectural shift for building business applications based on loosely-coupled services. In a multi-layered SOA environment the exact conditions under which services are to be delivered can be formally specified by Service Level Agreements (SLAs). However, typical SLAs are just specified at the customer-level and do not allow service providers to manage their IT stack accordingly as they have no insight on how customer-level SLAs translate to metrics or parameters at the various layers of the IT stack. In this paper we present a technical architecture for a multi-level SLA management framework. We discuss the fundamental components and interfaces in this architecture and explain the developed integrated framework. Furthermore, we show results from a qualitative evaluation of the framework in the context of an open reference case.
[3] Nikolaus Huber, Marcel von Quast, Fabian Brosig, and Samuel Kounev. Analysis of the Performance-Influencing Factors of Virtualization Platforms. In The 12th International Symposium on Distributed Objects, Middleware, and Applications (DOA 2010), Crete, Greece, October 26, 2010. Springer Verlag, Crete, Greece. October 2010, Acceptance Rate (Full Paper): 33%. [ bib | .pdf | Abstract ]
Nowadays, virtualization solutions are gaining increasing importance. By enabling the sharing of physical resources, thus making resource usage more efficient, they promise energy and cost savings. Additionally, virtualization is the key enabling technology for Cloud Computing and server consolidation. However, the effects of sharing resources on system performance are not yet well-understood. This makes performance prediction and performance management of services deployed in such dynamic systems very challenging. Because of the large variety of virtualization solutions, a generic approach to predict the performance influences of virtualization platforms is highly desirable. In this paper, we present a hierarchical model capturing the major performance-relevant factors of virtualization platforms. We then propose a general methodology to quantify the influence of the identified factors based on an empirical approach using benchmarks. Finally, we present a case study of Citrix XenServer 5.5, a state-of-the-art virtualization platform.
[4] Rouven Krebs. Combination of measurement and model based approaches for performance prediction in service oriented systems. Master's thesis, University of Applied Sciences Karlsruhe, Moltkestr. 30, 76133 Karlsruhe, Germany, October 2010. [ bib ]
[5] Christoph Rathfelder, David Evans, and Samuel Kounev. Predictive Modelling of Peer-to-Peer Event-driven Communication in Component-based Systems. In Proceedings of the 7th European Performance Engineering Workshop (EPEW 2010), Alessandro Aldini, Marco Bernardo, Luciano Bononi, and Vittorio Cortellessa, editors, Bertinoro, Italy, September 23-24, 2010, volume 6342 of Lecture Notes in Computer Science (LNCS), pages 219-235. Springer-Verlag, Berlin, Heidelberg. September 2010. [ bib | .pdf | Abstract ]
The event-driven communication paradigm is used increasingly often to build loosely-coupled distributed systems in many industry domains including telecommunications, transportation, and supply chain management. However, the loose coupling of components in such systems makes it hard for developers to estimate their behaviour and performance under load. Most general purpose performance meta-models for component-based systems provide limited support for modelling event-driven communication. In this paper, we present a case study of a real-life road traffic monitoring system that shows how event-driven communication can be modelled for performance prediction and capacity planning. Our approach is based on the Palladio Component Model (PCM) which we have extended to support event-driven communication. We evaluate the accuracy of our modelling approach in a number of different workload and configuration scenarios. The results demonstrate the practicality and effectiveness of the proposed approach.
[6] Jens Happe, Steffen Becker, Christoph Rathfelder, Holger Friedrich, and Ralf H. Reussner. Parametric Performance Completions for Model-Driven Performance Prediction. Performance Evaluation (PE), 67(8):694-716, August 2010, Elsevier. [ bib | DOI | http | .pdf | Abstract ]
Performance prediction methods can help software architects to identify potential performance problems, such as bottlenecks, in their software systems during the design phase. In such early stages of the software life-cycle, only a little information is available about the system's implementation and execution environment. However, these details are crucial for accurate performance predictions. Performance completions close the gap between available high-level models and required low-level details. Using model-driven technologies, transformations can include details of the implementation and execution environment into abstract performance models. However, existing approaches do not consider the relation of actual implementations and performance models used for prediction. Furthermore, they neglect the broad variety of possible implementations and middleware platforms, possible configurations, and possible usage scenarios. In this paper, we (i) establish a formal relation between generated performance models and generated code, (ii) introduce a design and application process for parametric performance completions, and (iii) develop a parametric performance completion for Message-oriented Middleware according to our method. Parametric performance completions are independent of a specific platform, reflect performance-relevant software configurations, and capture the influence of different usage scenarios. To evaluate the prediction accuracy of the completion for Message-oriented Middleware, we conducted a real-world case study with the SPECjms2007 Benchmark [http://www.spec.org/jms2007/]. The observed deviation of measurements and predictions was below 10% to 15%.
[7] Samuel Kounev. Engineering of Next Generation Self-Aware Software Systems: A Research Roadmap. In Emerging Research Directions in Computer Science. Contributions from the Young Informatics Faculty in Karlsruhe. KIT Scientific Publishing, Karlsruhe, Germany, July 2010. [ bib | http | .pdf ]
[8] Samuel Kounev, Fabian Brosig, Nikolaus Huber, and Ralf Reussner. Towards self-aware performance and resource management in modern service-oriented systems. In Proceedings of the 7th IEEE International Conference on Services Computing (SCC 2010), July 5-10, Miami, Florida, USA, Miami, Florida, USA, July 5-10, 2010. IEEE Computer Society. July 2010. [ bib | .pdf | Abstract ]
Modern service-oriented systems have increasingly complex loosely-coupled architectures that often exhibit poor performance and resource efficiency and have high operating costs. This is due to the inability to predict at run-time the effect of dynamic changes in the system environment (e.g., varying service workloads) and adapt the system configuration accordingly. In this paper, we describe a long-term vision and approach for designing systems with built-in self-aware performance and resource management capabilities. We advocate the use of architecture-level performance models extracted dynamically from the evolving system configuration and maintained automatically during operation. The models will be exploited at run-time to adapt the system to changes in the environment ensuring that resources are utilized efficiently and performance requirements are continuously satisfied.
[9] Christoph Rathfelder, Benjamin Klatt, Samuel Kounev, and David Evans. Towards middleware-aware integration of event-based communication into the Palladio Component Model. In Proceedings of the Fourth ACM International Conference on Distributed Event-Based Systems (DEBS 2010), Cambridge, United Kingdom, July 12-15, 2010, pages 97-98. ACM, New York, NY, USA. July 2010. [ bib | DOI | http | .pdf | Abstract ]
The event-based communication paradigm is becoming increasingly ubiquitous as an enabling technology for building loosely-coupled distributed systems. However, the loose coupling of components in such systems makes it hard for developers to predict their performance under load. Most general purpose performance meta-models for component-based systems provide limited support for modelling event-based communication and neglect middleware-specific influence factors. In this poster, we present an extension of our approach to modelling event-based communication in the context of the Palladio Component Model (PCM), allowing middleware-specific influence factors to be taken into account. The latter are captured in a separate model that is automatically woven into the PCM instance by means of a model-to-model transformation. As a second contribution, we present a short case study of a real-life road traffic monitoring system showing how event-based communication can be modelled for performance prediction and capacity planning.
[10] Robert Vaupel. Method for Controlling the Capacity Usage of a Logically Partitioned Data Processing System. Patent No. 7752415, United States, July 2010. [ bib ]
[11] Erik Burger. Towards formal certification of software components. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP) 2010, Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, June 2010, volume 2010-14 of Interne Berichte, pages 15-22. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany. June 2010. [ bib | slides | .pdf | Abstract ]
Software certification as it is practised today guarantees that certain standards are kept in the process of software development. However, this does not make any statements about the actual quality of implemented code. We propose an approach to certify the non-functional properties of component-based software which is based on a formal refinement calculus, using the performance abstractions of the Palladio Component Model. The certification process guarantees the conformance of a component implementation to its specification regarding performance properties, without having to expose the source code of the product to a certification authority. Instead, the provable refinement of an abstract performance specification to the performance description of the implementation, together with evidence that the performance description reflects the properties of the component implementation, yields the certification seal. The refinement steps are described as Prolog rules so that the validity of refinement between two performance descriptions can be checked automatically.
[12] G. Dritschler, Robert Vaupel, G. Vater, and P. Yocom. Method for Controlling the Number of Servers in a Hierarchical Resource Environment. Patent No. 7734676, United States, June 2010. [ bib ]
[13] Zoya Durdik. Architectural modeling in agile methods. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP) 2010, Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, June 2010, volume 2010-14 of Interne Berichte, pages 23-30. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany. June 2010, CompArch Young Investigator Award. [ bib | http | Abstract ]
Agile methods and architectural modelling have been considered to be mutually exclusive. On the one hand, agile methods try to reduce overheads by avoiding activities that do not directly contribute to the immediate needs of the current project. This often leads to bad cross-project reuse. On the other hand, architectural modelling is considered a pre-requisite for the systematic cross-project reuse and for the resulting increase in software developer productivity. In this paper, I discuss the relationship between agile methods and architectural modelling and propose a novel process for agile architectural modelling, which drives requirements elicitation through the use of patterns and components. This process is in line with agile principles and is illustrated on an example application.
[14] Jörg Henss. Performance prediction for highly distributed systems. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP) 2010, Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, June 2010, volume 2010-14 of Interne Berichte, pages 39-46. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany. June 2010. [ bib | http | Abstract ]
Currently, more and more highly distributed systems are emerging, ranging from classic client-server architectures to peer-to-peer systems. With the widespread introduction of cloud computing, this trend has accelerated even further. Single software services are relocated to remote server farms, and the communication with these services has to use uncertain network connections over the internet. The performance of such distributed systems is not easy to predict, as many performance-relevant factors, including network performance impacts, have to be considered. Current software performance prediction approaches, based on analytical and simulative methods, lack support for detailed network models. Hence, an integrated software and network performance prediction is required. In this paper, general techniques for the model integration of differently targeted simulation domains are presented. In addition, design alternatives for the coupling of simulation frameworks are discussed. Finally, this paper presents a model-driven approach for an integrated simulation of software and network aspects, based on the Palladio Component Model and the OMNeT++ simulation framework.
[15] Matthias Huber. Towards secure services in an untrusted environment. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP) 2010, Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, June 2010, volume 2010-14 of Interne Berichte, pages 39-46. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany. June 2010. [ bib | http | Abstract ]
Software services offer many opportunities like reduced cost for IT infrastructure. However, they also introduce new risks, for example losing control over data. While data can be secured against external threats using standard techniques, the service providers themselves have to be trusted to ensure privacy. Cryptographic methods combined with architectures adjusted to the client's protection requirements offer promising methods to build services with a provable amount of security against internal adversaries without the need to fully trust the service provider. We propose a reference architecture which separates services, restricts privilege of the parts and deploys them on different servers. Assumptions about the servers' and adversary's capabilities yield security guarantees which are weaker than classical cryptographic guarantees, yet can be sufficient.
[16] Dennis Westermann and Jens Happe. Towards performance prediction of large enterprise applications based on systematic measurements. In Proceedings of the Fifteenth International Workshop on Component-Oriented Programming (WCOP) 2010, Barbora Bühnová, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, June 2010, volume 2010-14 of Interne Berichte, pages 71-78. Karlsruhe Institute of Technology, Faculty of Informatics, Karlsruhe, Germany. June 2010. [ bib | http | Abstract ]
Understanding the performance characteristics of enterprise applications, such as response time, throughput, and resource utilization, is crucial for satisfying customer expectations and minimizing costs of application hosting. Enterprise applications are usually based on a large set of existing software (e.g. middleware, legacy applications, and third party services). Furthermore, they continuously evolve due to changing market requirements and short innovation cycles. Software performance engineering in its essence is not directly applicable to such scenarios. Many approaches focus on early lifecycle phases assuming that a software system is built from scratch and all its details are known. These approaches neglect influences of already existing middleware, legacy applications, and third party services. For performance prediction, detailed information about the internal structure of the systems is necessary. However, such information may not be available or accessible due to the complexity of existing software. In this paper, we propose a combined approach of model based and measurement based performance evaluation techniques to handle the complexity of large enterprise applications. We outline open research questions that have to be answered in order to put performance engineering in industrial practice. For validation, we plan to apply our approach to different real-world scenarios that involve current SAP enterprise solutions such as SAP Business ByDesign and the SAP Business Suite.
[17] Konstantin Bender. Automated Performance Model Extraction of Enterprise Data Fabrics. Master's thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, May 2010. [ bib ]
[18] Nikolaus Huber, Steffen Becker, Christoph Rathfelder, Jochen Schweflinghaus, and Ralf Reussner. Performance Modeling in Industry: A Case Study on Storage Virtualization. In ACM/IEEE 32nd International Conference on Software Engineering (ICSE 2010), Software Engineering in Practice Track, Cape Town, South Africa, May 2-8, 2010, pages 1-10. ACM, New York, NY, USA. May 2010, Acceptance Rate (Full Paper): 23% (16/71). [ bib | DOI | slides | .pdf | Abstract ]
In software engineering, performance and the integration of performance analysis methodologies gain increasing importance, especially for complex systems. Well-developed methods and tools can predict non-functional performance properties like response time or resource utilization in early design stages, thus promising time and cost savings. However, as performance modeling and performance prediction is still a young research area, the methods are not yet well-established and in widespread industrial use. This work is a case study of the applicability of the Palladio Component Model as a performance prediction method in an industrial environment. We model and analyze different design alternatives for storage virtualization on an IBM (Trademark of IBM in USA and/or other countries) system. The model calibration, validation and evaluation are based on data measured on a System z9 (Trademark of IBM in USA and/or other countries) as a proof of concept. The results show that performance predictions can identify performance bottlenecks and evaluate design alternatives in early stages of system development. The experiences gained were that performance modeling helps to understand and analyze a system. Hence, this case study substantiates that performance modeling is applicable in industry and a valuable method for evaluating design decisions.
[19] Rouven Krebs and Christian Hochwarth. Method and system for managing learning materials presented offline. Patent US 94886, April 2010. [ bib ]
[20] Kai Sachs, Stefan Appel, Samuel Kounev, and Alejandro Buchmann. Benchmarking Publish/Subscribe-based Messaging Systems. In Proc. of 2nd International Workshop on Benchmarking of Database Management Systems and Data-Oriented Web Technologies (BenchmarX'10)., Martin Necasky and Eric Pardede, editors, April 2010, volume 6193 of Lecture Notes in Computer Science (LNCS). Springer. April 2010. [ bib | .pdf ]
[21] Erik Burger and Boris Gruschko. A Change Metamodel for the Evolution of MOF-Based Metamodels. In Proceedings of Modellierung 2010, Gregor Engels, Dimitris Karagiannis, and Heinrich C. Mayr, editors, Klagenfurt, Austria, March 26, 2010, volume P-161 of GI-LNI, pages 285-300. [ bib | slides | .pdf | Abstract ]
The evolution of software systems often produces incompatibilities with existing data and applications. To prevent incompatibilities, changes have to be well-planned, and developers should know the impact of changes on a software system. This consideration also applies to the field of model-driven development, where changes occur with the modification of the underlying metamodels. Models that are instantiated from an earlier metamodel version may not be valid instances of the new version of a metamodel. In contrast to other metamodeling standards like the Eclipse Modeling Framework (EMF), no classification of metamodel changes has been performed yet for the Meta Object Facility (MOF). The contribution of this paper is the evaluation of the impact of metamodel changes on models. For the formalisation of changes to MOF-based metamodels, a ChangeMetamodel is introduced to describe the transformation of one version of a metamodel to another. The changes are then classified by their impact on the compatibility with existing model data. The classification is formalised using OCL constraints. The ChangeMetamodel and the change classifications presented in this paper lay the foundation for the implementation of a mechanism that allows metamodel editors to estimate the impact of metamodel changes semi-automatically.
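In the spirit of the classification described in this abstract, a change-impact check can be sketched as a mapping from change kinds to impact categories. The category names and example rules below are illustrative assumptions, not the paper's OCL-formalised classification:

    // Hedged sketch: classifying metamodel changes by their impact on
    // existing model instances. Categories and rules are assumed examples.
    public class ChangeImpact {

        enum Impact { NON_BREAKING, BREAKING_RESOLVABLE, BREAKING_UNRESOLVABLE }

        record Change(String kind) {}

        static Impact classify(Change c) {
            return switch (c.kind()) {
                case "addOptionalAttribute"  -> Impact.NON_BREAKING;         // old models stay valid
                case "addMandatoryAttribute" -> Impact.BREAKING_RESOLVABLE;  // can be fixed by defaulting
                case "deleteClass"           -> Impact.BREAKING_UNRESOLVABLE; // instance data is lost
                default                      -> Impact.BREAKING_UNRESOLVABLE; // conservative fallback
            };
        }

        public static void main(String[] args) {
            System.out.println(classify(new Change("addMandatoryAttribute")));
        }
    }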
[22] Rouven Krebs. Method and system for an adaptive learning strategy. Patent US 70443, March 2010. [ bib ]
[23] Andreas Rentschler. Entwurf einer grafischen, domänenspezifischen Modellierungssprache für ein filterbasiertes Datenanalyseframework [Design of a graphical, domain-specific modelling language for a filter-based data analysis framework]. Diploma thesis, Karlsruhe Institute of Technology, Germany, March 2010. [ bib | .pdf ]
[24] Thomas Schuster, Christoph Rathfelder, Nelly Schuster, and Jens Nimis. Comprehensive tool support for iterative SOA evolution. In Proceedings of the International Workshop on SOA Migration and Evolution 2010 (SOAME 2010) as part of the 14th European Conference on Software Maintenance and Reengineering (CSMR 2010), March 15, 2010, pages 1-10. [ bib | .pdf | Abstract ]
In recent years, continuously changing market situations have required IT systems that are flexible and highly responsive to changes in the underlying business processes. The transformation to service-oriented architecture (SOA) concepts, mainly services and loose coupling, promises to meet these demands. However, the migration of existing systems towards SOA entails management and evolution processes of elevated complexity. Studies in this area of research have revealed a gap between the continuous tool support development teams need throughout the phases of evolution processes and the tool support actually available. Thus, in this article we introduce a method that fosters evolution by an iterative approach and illustrate how each phase of this method can be tool-supported.
[25] Michael Kuperberg and Fouad Omri. Automated Benchmarking of Java APIs. In Proceedings of Software Engineering 2010 (SE2010), February 2010. [ bib | .pdf | Abstract ]
Performance is an extra-functional property of software systems which is often critical for achieving sufficient scalability or efficient resource utilisation. As many applications are built using application programmer interfaces (APIs) of execution platforms and external components, the performance of the used API implementations has a strong impact on the performance of the application itself. Yet the sheer size and complexity of today's APIs make it hard to benchmark them manually, while many semantic constraints and requirements (on method parameters, etc.) make it complicated to automate the creation of API benchmarks. Benchmarking the whole API is necessary since it is in the majority of cases hard to specify exactly which parts of the API would be used by a given application. Additionally, modern execution platforms such as the Java Virtual Machine perform extensive nondeterministic runtime optimisations, which need to be considered and quantified for realistic benchmarking. In this paper, we present an automated solution for benchmarking any large API that is written in the Java programming language, not just the Java Platform API. Our implementation induces the optimisations of the Just-In-Time compiler to obtain realistic benchmarking results. We evaluate the approach on a large subset of the Java Platform API exposed by the base libraries of the Java Virtual Machine.
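The warm-up idea mentioned in the abstract, running code until the Just-In-Time compiler has optimised it before timing starts, can be illustrated with a toy Java harness like the one below. The paper's actual generator additionally derives valid method parameters and handles whole APIs automatically, which this sketch does not attempt; names and iteration counts are assumptions.

    // Toy micro-benchmark harness with an explicit warm-up phase.
    public class WarmedBenchmark {

        static long blackhole; // consumed results prevent dead-code elimination

        static void benchmark(String name, Runnable body, int warmup, int measured) {
            for (int i = 0; i < warmup; i++) body.run();   // let the JIT compile and optimise
            long start = System.nanoTime();
            for (int i = 0; i < measured; i++) body.run();
            double nsPerOp = (System.nanoTime() - start) / (double) measured;
            System.out.printf("%s: %.1f ns/op%n", name, nsPerOp);
        }

        public static void main(String[] args) {
            benchmark("hashCode(varying string)",
                    () -> blackhole += Long.toHexString(System.nanoTime()).hashCode(),
                    100_000, 1_000_000);
        }
    }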
[26] Steffen Becker, Michael Hauck, Mircea Trifu, Klaus Krogmann, and Jan Kofron. Reverse Engineering Component Models for Quality Predictions. In Proceedings of the 14th European Conference on Software Maintenance and Reengineering, European Projects Track, 2010, pages 199-202. IEEE. 2010. [ bib | .pdf | Abstract ]
Legacy applications are still widespread. If a need arises to change such an application's deployment or to update its functionality, it becomes difficult to estimate the performance impact of such modifications due to the absence of corresponding models. In this paper, we present an extendable integrated environment based on Eclipse, developed in the scope of the Q-ImPrESS project, for reverse engineering of legacy applications (in C/C++/Java). The Q-ImPrESS project aims at modeling quality attributes at an architectural level and allows for choosing the most suitable variant for implementation of a desired modification. The main contributions of the project include i) a high integration of all steps of the entire process into a single tool, a beta version of which has already been successfully tested on a case study, ii) integration of multiple research approaches to performance modeling, and iii) an extendable underlying meta-model for different quality dimensions.
[27] Franz Brosch, Ralf Gitzel, Heiko Koziolek, and Simone Krug. Combining architecture-based software reliability predictions with financial impact calculations. In International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA), 2010, volume 264 of ENTCS, pages 3-17. Elsevier. 2010. [ bib | DOI | .pdf | Abstract ]
Software failures can lead to substantial costs for the user. Existing models for software reliability prediction do not provide much insight into this financial impact. Our approach presents a first step towards the integration of reliability prediction from the IT perspective and the business perspective. We show that failure impact should be taken into account not only at its date of occurrence but already in the design stage of development. First we model cost-relevant business processes as well as the associated IT layer and then connect them to failure probabilities. Based on this, we conduct a reliability and cost estimation. The method is illustrated by a case study.
[28] Franz Brosch, Heiko Koziolek, Barbora Buhnova, and Ralf Reussner. Parameterized Reliability Prediction for Component-based Software Architectures. In International Conference on the Quality of Software Architectures (QoSA), 2010, volume 6093 of LNCS, pages 36-51. Springer. 2010. [ bib | DOI | .pdf | Abstract ]
Critical properties of software systems, such as reliability, should be considered early in the development, when they can govern crucial architectural design decisions. A number of design-time reliability-analysis methods have been developed to support this task. However, the methods are often based on very low-level formalisms, and the connection to different architectural aspects (e.g., the system usage profile) is either hidden in the constructs of a formal model (e.g., transition probabilities of a Markov chain), or even neglected (e.g., resource availability). This strongly limits the applicability of the methods to effectively support architectural design. Our approach, based on the Palladio Component Model (PCM), integrates the reliability-relevant architectural aspects in a highly parameterized UML-like model, which allows for transparent evaluation of architectural design options. It covers the propagation of the system usage profile throughout the architecture, and the impact of the execution environment, which are neglected in most of the existing approaches. Before analysis, the model is automatically transformed into a formal Markov model in order to support effective analytical techniques to be employed. The approach has been validated against a reliability simulation of a distributed Business Reporting System.
[29] Manfred Broy and Ralf Reussner. Architectural concepts in programming languages. Computer, 43:88-91, 2010, IEEE Computer Society, Los Alamitos, CA, USA. [ bib | DOI | .pdf ]
[30] Vittorio Cortellessa, Anne Martens, Ralf Reussner, and Catia Trubiani. A process to effectively identify guilty performance antipatterns. In Fundamental Approaches to Software Engineering, 13th International Conference, FASE 2010, David Rosenblum and Gabriele Taentzer, editors, Paphos, Cyprus, 2010, pages 368-382. Springer-Verlag Berlin Heidelberg. 2010. [ bib | DOI | http | .pdf | Abstract ]
The problem of interpreting the results of software performance analysis is very critical. Software developers expect feedback in terms of architectural design alternatives (e.g., split a software component in two components and re-deploy one of them), whereas the results of performance analysis are either pure numbers (e.g. mean values) or functions (e.g. probability distributions). Support for the interpretation of such results, which would help to fill the gap between numbers/functions and software alternatives, is still lacking. Performance antipatterns can play a key role in the search for performance problems and in the formulation of their solutions. In this paper we tackle the problem of identifying, among a set of detected performance antipatterns, the ones that are the real causes of problems (i.e. the guilty ones). To this goal we introduce a process to elaborate the performance analysis results and to score performance requirements, model entities and performance antipatterns. The cross-observation of such scores makes it possible to classify the level of guiltiness of each antipattern. An example modeled in Palladio is provided to demonstrate the validity of our approach by comparing the performance improvements obtained after removal of differently scored antipatterns.
[31] Frank Eichinger, Klaus Krogmann, Roland Klug, and Klemens Böhm. Software-Defect Localisation by Mining Dataflow-Enabled Call Graphs. In Proceedings of the 10th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2010. Barcelona, Spain. [ bib | http | Abstract ]
Defect localisation is essential in software engineering and is an important task in domain-specific data mining. Existing techniques building on call-graph mining can localise different kinds of defects. However, these techniques focus on defects that affect the control flow and are agnostic regarding the data flow. In this paper, we introduce dataflow-enabled call graphs that incorporate abstractions of the data flow. Building on these graphs, we present an approach for defect localisation. The creation of the graphs and the defect localisation are essentially data mining problems, making use of discretisation, frequent subgraph mining and feature selection. We demonstrate the defect-localisation qualities of our approach with a study on defects introduced into Weka. As a result, defect localisation now works much better, and a developer has to investigate on average only 1.5 out of 30 methods to fix a defect.
[32] Thomas Goldschmidt, Steffen Becker, and Axel Uhl. Incremental Updates for Textual Modeling of Large Scale Models. In Proceedings of the 15th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS 2010) - Poster Paper, 2010. IEEE. 2010. [ bib | Abstract ]
Model-Driven Engineering (MDE) aims at improving the development of complex computer systems. Within this context, textual concrete syntaxes for models are beneficial for many reasons. They foster usability and productivity because of their fast editing style, their usage of error markers, autocompletion and quick fixes. Several frameworks and tools from different communities for creating concrete textual syntaxes for models have emerged during recent years. However, there are still cases where no solution has been published yet. Open issues are incremental parsing and model updating as well as partial and federated views. On the other hand, incremental parsing and the handling of abstract syntaxes as leading entities were investigated in the compiler-construction community many years ago. In this paper we present an approach for concrete textual syntaxes that makes use of incremental parsing and transformation techniques. Thus, we circumvent problems that occur when dealing with concrete textual syntaxes in a UUID-based environment including multiple partial and federated views. We validated our approach using a proof-of-concept implementation including a case study.
[33] Henning Groenda. Usage profile and platform independent automated validation of service behavior specifications. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems, Oslo, Norway, 2010, QUASOSS '10, pages 6:1-6:6. ACM, New York, NY, USA. 2010. [ bib | DOI | http | Abstract ]
Assessing providable service levels based on model-driven prediction approaches requires valid service behavior specifications. Such specifications must be suitable for the requested usage profile and available hardware to make correct predictions and decisions on providable service levels. Assessing the validity of given parameterized performance specifications is often done manually in an ad-hoc way based on the experience of the performance engineer. In this paper, we show how model-based testing can be applied to validate a specification's accuracy and how the attachment of validation settings to specifications can ease validity assessments. The applicability of the approach is shown on a case study. We demonstrate how our approach allows usage profile and platform independent performance validations, as well as point out how validity assessments are eased.
[34] Jens Happe, Henning Groenda, Michael Hauck, and Ralf H. Reussner. A Prediction Model for Software Performance in Symmetric Multiprocessing Environments. In Proceedings of the 2010 7th International Conference on the Quantitative Evaluation of Systems, 2010, QEST '10, pages 59-68. IEEE Computer Society, Washington, DC, USA. 2010. [ bib | DOI | http | .pdf | Abstract ]
The broad introduction of multi-core processors made symmetric multiprocessing (SMP) environments mainstream. The additional cores can significantly increase software performance. However, their actual benefit depends on the operating system scheduler's capabilities, the system's workload, and the software's degree of concurrency. The load distribution on the available processors (or cores) strongly influences response times and throughput of software applications. Hence, understanding the operating system scheduler's influence on performance and scalability is essential for the accurate prediction of software performance (response time, throughput, and resource utilisation). Existing prediction approaches tend to approximate the influence of operating system schedulers by abstract policies such as processor sharing and its more sophisticated extensions. However, these abstractions often fail to accurately capture software performance in SMP environments. In this paper, we present a performance Model for general-purpose Operating System Schedulers (MOSS). It allows analyses of software performance taking the influences of schedulers in SMP environments into account. The model is defined in terms of timed Coloured Petri Nets and predicts the effect of different operating system schedulers (e.g., Windows 7, Vista, Server 2003, and Linux 2.6) on software performance. We validated the prediction accuracy of MOSS in a case study using a business information system. In our experiments, the deviation between predictions and measurements was below 10% in most cases and did not exceed 30%.
[35] Jens Happe, Dennis Westermann, Kai Sachs, and Lucia Kapova. Statistical Inference of Software Performance Models for Parametric Performance Completions. In Research into Practice - Reality and Gaps (Proceedings of QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, 2010, volume 6093 of Lecture Notes in Computer Science (LNCS), pages 20-35. Springer. 2010. [ bib | .pdf | Abstract ]
Software performance engineering (SPE) enables software architects to ensure high performance standards for their applications. However, applying SPE in practice is still challenging. Most enterprise applications include a large software basis, such as middleware and legacy systems. In many cases, the software basis is the determining factor of the system's overall timing behavior, throughput, and resource utilization. To capture these influences on the overall system's performance, established performance prediction methods (model-based and analytical) rely on models that describe the performance-relevant aspects of the system under study. Creating such models requires detailed knowledge of the system's structure and behavior that, in most cases, is not available. In this paper, we abstract from the internal structure of the system under study. We focus our efforts on message-oriented middleware and analyze the dependency between the MOM's usage and its performance. We use statistical inference to derive these dependencies from observations. For ActiveMQ 5.3, the resulting functions predict the performance with a relative mean square error of 0.1.
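The statistical-inference idea can be sketched in a few lines: observe a usage-dependent metric, fit a simple regression, and report the relative error. The numbers and the linear model below are invented; the paper infers considerably richer dependency functions for message-oriented middleware:

    import numpy as np

    rng = np.random.default_rng(0)
    size_kb = np.linspace(1, 512, 40)                          # usage dimension
    latency = 0.8 + 0.004 * size_kb + rng.normal(0, 0.05, 40)  # observations

    coeffs = np.polyfit(size_kb, latency, deg=1)               # inferred curve
    predicted = np.polyval(coeffs, size_kb)
    rel_mse = np.mean(((predicted - latency) / latency) ** 2)
    print(f"latency(size) ~ {coeffs[1]:.3f} + {coeffs[0]:.5f}*size, "
          f"relative MSE = {rel_mse:.4f}")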
[36] Michael Hauck, Jens Happe, and Ralf H. Reussner. Automatic Derivation of Performance Prediction Models for Load-balancing Properties Based on Goal-oriented Measurements. In Proceedings of the 18th IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'10), 2010, pages 361-369. IEEE Computer Society. 2010. [ bib | DOI | http | Abstract ]
In symmetric multiprocessing environments, the performance of a software system heavily depends on the application's parallelism, the scheduling and load-balancing policies of the operating system, and the infrastructure it is running on. The scheduling of tasks can influence the response time of an application by several orders of magnitude. Thus, detailed models of the operating system scheduler are essential for accurate performance predictions. However, building such models for schedulers and including them into performance prediction models involves a lot of effort. For this reason, simplified scheduler models are used for the performance evaluation of business information systems in general. In this work, we present an approach to derive load-balancing properties of general-purpose operating system (GPOS) schedulers automatically. Our approach uses goal-oriented measurements to derive performance models based on observations. Furthermore, the derived performance model is plugged into the Palladio Component Model (PCM), a model-based performance prediction approach. We validated the applicability of the approach and its prediction accuracy in a case study on different operating systems.
[37] Michael Hauck, Matthias Huber, Markus Klems, Samuel Kounev, Jörn Müller-Quade, Alexander Pretschner, Ralf Reussner, and Stefan Tai. Challenges and Opportunities of Cloud Computing - Trade-off Decisions in Cloud Computing Architecture. Technical Report 2010-19, Karlsruhe Institute of Technology, Faculty of Informatics, 2010. [ bib | http ]
[38] Clemens Heidinger, Erik Buchmann, Matthias Huber, Klemens Böhm, and Jörn Müller-Quade. Privacy-aware folksonomies. In ECDL, 2010, pages 156-167. [ bib ]
[39] Christian Köllner, Georg Dummer, Andreas Rentschler, and K.D. Müller-Glaser. Designing a Graphical Domain-Specific Modelling Language Targeting a Filter-Based Data Analysis Framework. In 13th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops (ISORCW '10), 2010, pages 152-157. IEEE Computer Society, Los Alamitos, CA, USA. 2010. [ bib | DOI | http | .pdf ]
[40] Martin Kapa and Lucia Kapova. User experience sensitivity analysis guided by videostreaming quality attributes. In Third Joint IFIP Wireless and Mobile Networking Conference (WMNC'2010), 2010. Budapest, Hungary. [ bib | Abstract ]
Increasing user requirements on video quality must be considered when designing any video-providing service. Methods in the user-centered design of services are fairly labor-intensive and have to consider the resulting user experience. User experience is a term that is very hard to define, and there are different approaches to its assessment. However, these approaches lack methods to predict the expected user experience from the user's subjective point of view. We propose a method of User Experience Sensitivity Analysis to find the dependency of user experience on quality attributes of the service and to define an initial prediction model. We validate our approach by comparing observed values of real user experience with prediction results.
[41] Lucia Kapova and Steffen Becker. Systematic refinement of performance models for concurrent component-based systems. In 7th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA), 2010, Electronic Notes in Theoretical Computer Science. Elsevier. 2010. [ bib | .pdf | Abstract ]
Model-driven performance prediction methods require detailed design models to evaluate the performance of software systems during early development stages. However, the complexity of detailed prediction models and the semantic gap between modelled performance concerns and functional concerns prevent many developers from addressing performance. As a solution to this problem, systematic model refinements, called completions, hide low-level details from developers. Completions automatically integrate performance-relevant details into component-based architectures using model-to-model transformations. In such scenarios, conflicts between different completions are likely. Therefore, the application order of completions must be determined unambiguously in order to reduce such conflicts. Many existing approaches employ the concept of performance completions to include performance-relevant details in the prediction model. So far, researchers have only addressed the application of a single completion to an architectural model; the reduction of conflicts between completions has not yet been considered. In this paper, we present a systematic approach to reduce and avoid conflicts between completions that are applied to the same model. The method presented in this paper is essential for the automated integration of completions in software performance engineering. Furthermore, we apply our approach to reduce conflicts within a set of completions based on design patterns for concurrent software systems.
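One simple way to make the application order of completions unambiguous, sketched here with hypothetical completion names, is to encode known conflicts as precedence constraints and sort them topologically; this illustrates the goal, not the paper's actual conflict-resolution method:

    # Requires Python 3.9+. Each key lists the completions that must be
    # applied before it, e.g. monitoring instruments resources that the
    # thread-pool and connector completions introduce.
    from graphlib import TopologicalSorter

    before = {
        "monitoring":  {"thread_pool", "connector"},
        "connector":   {"thread_pool"},
        "thread_pool": set(),
    }
    print(list(TopologicalSorter(before).static_order()))
    # -> ['thread_pool', 'connector', 'monitoring']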
[42] Lucia Kapova and Barbora Buhnova. Performance-driven stepwise refinement of component-based architectures. In QUASOSS '10: Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems, Oslo, Norway, 2010, pages 1-7. ACM, New York, NY, USA. 2010. [ bib | DOI ]
[43] Lucia Kapova, Thomas Goldschmidt, Steffen Becker, and Joerg Henss. Evaluating Maintainability with Code Metrics for Model-to-Model Transformations. In Research into Practice - Reality and Gaps (Proceedings of QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, 2010, volume 6093 of LNCS, pages 151-166. Springer-Verlag Berlin Heidelberg. 2010. [ bib | .pdf | Abstract ]
Using model-to-model transformations to generate analysis models or code from architecture models is sought to promote compliance and reuse of components. As with every programming language artifact, the maintainability of transformations is influenced by various characteristics, and code metrics are often used to estimate it. However, the vast majority of the established metrics focus on imperative (e.g., object-oriented) coding styles and thus cannot be reused as-is for transformations written in declarative languages such as QVT Relations. In this paper we propose an initial set of quality metrics to evaluate transformations written in the declarative QVT Relations language. We apply the presented set of metrics to several reference transformations to demonstrate how to judge transformation maintainability based on our metrics.
[44] Lucia Kapova, Thomas Goldschmidt, Jens Happe, and Ralf H. Reussner. Domain-specific templates for refinement transformations. In MDI '10: Proceedings of the First International Workshop on Model-Driven Interoperability, Oslo, Norway, 2010, pages 69-78. ACM, New York, NY, USA. 2010. [ bib | DOI ]
[45] Lucia Kapova and Ralf Reussner. Application of advanced model-driven techniques in performance engineering. In Computer Performance Engineering, Alessandro Aldini, Marco Bernardo, Luciano Bononi, and Vittorio Cortellessa, editors, 2010, volume 6342 of Lecture Notes in Computer Science, pages 17-36. Springer Berlin / Heidelberg. 2010. DOI: 10.1007/978-3-642-15784-4_2. [ bib | http ]
[46] Lucia Kapova, Barbora Zimmerova, Anne Martens, Jens Happe, and Ralf H. Reussner. State dependence in performance evaluation of component-based software systems. In Proceedings of the 1st Joint WOSP/SIPEW International Conference on Performance Engineering (WOSP/SIPEW '10), San Jose, California, USA, 2010, pages 37-48. ACM, New York, NY, USA. 2010. [ bib | DOI | .pdf | Abstract ]
Performance prediction and measurement approaches for component-based software systems help software architects to evaluate their systems based on component performance specifications created by component developers. Integrating classical performance models such as queueing networks, stochastic Petri nets, or stochastic process algebras, these approaches additionally exploit the benefits of component-based software engineering, such as reuse and division of work. Although researchers have proposed many approaches in this direction during the last decade, none of them has attained widespread industrial use. On this basis, we have conducted a comprehensive state-of-the-art survey of more than 20 of these approaches assessing their applicability. We classified the approaches according to the expressiveness of their component performance modelling languages. Our survey helps practitioners to select an appropriate approach and scientists to identify interesting topics for future research.
[47] Benjamin Klatt. Modelling and prediction of event-based communication in component-based architectures. Master's thesis, Karlsruhe Institute of Technology, Germany, 2010. ObjektForum Thesis Award. [ bib | .pdf | Abstract ]
With the increasing demand for large-scale systems and the corresponding high load of data or users, event-based communication has gained increasing attention. Originating from embedded systems and graphical user interfaces, this asynchronous type of communication also provides advantages to business applications by decoupling individual components and their processes. However, the scalability gained from event-based communication can result in performance problems in the overall system which are hard to predict by the software architect. Model-based performance prediction is a reasonable approach to predict system characteristics in general. Today's solutions are however limited in handling the complexity of the additional infrastructure. Especially the impact of many-to-many and asynchronous connections on the overall system is not considered even by advanced projects such as the Palladio Component Model. This thesis presents an approach to introduce event-based communication in the Palladio Component Model and a transformation to reuse existing prediction techniques. The approach includes an additional automatic integration of an auxiliary repository model. This model encapsulates the characteristics of the underlying middleware infrastructure, which is distributed to all event-based connections in the system architecture. An implementation of the approach has been provided as part of this thesis. It was evaluated in a case study based on a traffic information and monitoring system installed in the city of Cambridge. Compared to an existing case study of the same system, the new approach reduced the modelling effort for event-based connections by about 80 percent and provided more flexibility to test different setups. In addition, the approach reduced the prediction error to less than 5 percent in most cases.
[48] Samuel Kounev, Simon Spinner, and Philipp Meier. QPME 2.0 - A Tool for Stochastic Modeling and Analysis Using Queueing Petri Nets. In From Active Data Management to Event-Based Systems and More, Kai Sachs, Ilia Petrov, and Pablo Guerrero, editors, volume 6462 of Lecture Notes in Computer Science, pages 293-311. Springer-Verlag, Berlin, Heidelberg, 2010. DOI: 10.1007/978-3-642-17226-7_18. [ bib | http | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present Version 2.0 of our tool QPME (Queueing Petri net Modeling Environment) for modeling and analysis of systems using queueing Petri nets. The development of the tool was initiated by Samuel Kounev in 2003 at the Technische Universität Darmstadt in the group of Prof. Alejandro Buchmann. Since then the tool has been distributed to more than 100 organizations worldwide. QPME provides an Eclipse-based editor for building queueing Petri net models and a powerful simulation engine for analyzing the models. After presenting the tool, we discuss ongoing work on the QPME project and the planned future enhancements of the tool.
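The central abstraction of queueing Petri nets, the queueing place, can be hinted at with a toy single-server simulation: tokens queue for an exponential server before becoming available in the place's depository. This covers only an M/M/1-like special case, not the full QPN semantics or QPME's simulation engine:

    import random

    random.seed(1)
    arrival_rate, service_rate = 0.8, 1.0
    t, server_free_at, residence = 0.0, 0.0, []
    for _ in range(10000):
        t += random.expovariate(arrival_rate)   # next token arrives
        start = max(t, server_free_at)          # FCFS queueing discipline
        server_free_at = start + random.expovariate(service_rate)
        residence.append(server_free_at - t)    # time until the depository
    print(f"mean token residence time: {sum(residence)/len(residence):.2f} "
          f"(M/M/1 theory: {1/(service_rate - arrival_rate):.2f})")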
[49] Max E. Kramer. Mapping reusable aspect models to aspect-oriented code. Study thesis, Karlsruhe Institute of Technology (KIT), Germany, 2010. [ bib | .pdf ]
[50] Klaus Krogmann. Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis. PhD thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. [ bib | http | .pdf | Abstract ]
Model-based performance prediction systematically deals with the evaluation of software performance to avoid, for example, bottlenecks, to estimate execution environment sizing, or to identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. Still, no automated reverse engineering approach for software performance models at an architectural level exists. This book describes a new integrated reverse engineering approach for the reconstruction of software component architectures and software component behaviour models which are parameterised over hardware, component assembly, and control and data flow, and as such can serve as software performance models due to the execution semantics of the target meta-model.
[51] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Using Genetic Search for Reverse Engineering of Parametric Behaviour Models for Performance Prediction. IEEE Transactions on Software Engineering, 36(6):865-877, 2010, IEEE. [ bib | DOI | .pdf | Abstract ]
In component-based software engineering, existing components are often re-used in new applications. Correspondingly, the response time of an entire component-based application can be predicted from the execution durations of individual component services. These execution durations depend on the runtime behaviour of a component, which itself is influenced by three factors: the execution platform, the usage profile, and the component wiring. To cover all relevant combinations of these influencing factors, conventional prediction of response times requires repeated deployment and measurements of component services for all such combinations, incurring a substantial effort. This paper presents a novel comprehensive approach for reverse engineering and performance prediction of components. In it, genetic programming is utilised for reconstructing a behaviour model from monitoring data, runtime bytecode counts and static bytecode analysis. The resulting behaviour model is parametrised over all three performance-influencing factors, which are specified separately. This results in significantly fewer measurements: the behaviour model is reconstructed only once per component service, and one application-independent bytecode benchmark run is sufficient to characterise an execution platform. To predict the execution durations for a concrete platform, our approach combines the behaviour model with platform-specific benchmarking results. We validate our approach by predicting the performance of a file sharing application.
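A heavily simplified sketch of the genetic-search idea: evolve the parameters of a candidate behaviour model against monitoring observations. The paper evolves whole model structures (parametric control and data flow) via genetic programming; the toy below only tunes the coefficients of a fixed linear form:

    import random

    random.seed(0)
    observations = [(n, 2.0 + 0.5 * n) for n in range(1, 20)]  # monitored data

    def fitness(individual):
        a, b = individual
        return -sum((a + b * n - t) ** 2 for n, t in observations)

    population = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(30)]
    for _ in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                     # selection
        population = parents + [                      # mutation
            (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
            for p in random.choices(parents, k=20)
        ]
    a, b = max(population, key=fitness)
    print(f"reconstructed model: time(n) = {a:.2f} + {b:.2f}*n")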
[52] Anne Martens, Danilo Ardagna, Heiko Koziolek, Raffaela Mirandola, and Ralf Reussner. A hybrid approach for multi-attribute QoS optimisation in component based software systems. In Research into Practice - Reality and Gaps (Proceedings of the 6th International Conference on the Quality of Software Architectures, QoSA 2010), George Heineman, Jan Kofron, and Frantisek Plasil, editors, 2010, volume 6093 of Lecture Notes in Computer Science, pages 84-101. Springer-Verlag Berlin Heidelberg. 2010. [ bib | DOI | .pdf | Abstract ]
Multiple, often conflicting quality of service (QoS) requirements arise when evaluating design decisions and selecting design alternatives of complex component-based software systems. In this scenario, selecting a good solution with respect to a single quality attribute can lead to unacceptable results with respect to the other quality attributes. A promising way to deal with this problem is to exploit multi-objective optimization where the objectives represent different quality attributes. The aim of these techniques is to devise a set of solutions, each of which assures a trade-off between the conflicting qualities. To automate this task, this paper proposes a combined use of analytical optimization techniques and evolutionary algorithms to efficiently identify a significant set of design alternatives, from which an architecture that best fits the different quality objectives can be selected. The proposed approach can lead both to a reduction of development costs and to an improvement of the quality of the final system. We demonstrate the use of this approach on a simple case study.
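The multi-objective core of such approaches can be illustrated by filtering a candidate set down to its non-dominated (Pareto-optimal) members. The design names and quality tuples below are invented; lower is taken to be better on every axis:

    # Each candidate: (response time in s, probability of failure, cost).
    candidates = {
        "cache+fast_cpu": (0.8, 1e-4, 900.0),
        "baseline":       (2.1, 1e-4, 400.0),
        "replicated":     (1.2, 1e-5, 700.0),
        "slow_and_dear":  (2.5, 1e-3, 800.0),  # dominated by 'baseline'
    }

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    pareto = {name for name, q in candidates.items()
              if not any(dominates(other, q) for other in candidates.values())}
    print(sorted(pareto))  # 'slow_and_dear' is filtered out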
[53] Anne Martens, Heiko Koziolek, Steffen Becker, and Ralf H. Reussner. Automatically improve software models for performance, reliability and cost using genetic algorithms. In Proceedings of the first joint WOSP/SIPEW international conference on Performance engineering, Alan Adamson, Andre B. Bondi, Carlos Juiz, and Mark S. Squillante, editors, San Jose, California, USA, 2010, WOSP/SIPEW '10, pages 105-116. ACM, New York, NY, USA. 2010. [ bib | DOI | slides | http | .pdf | Abstract ]
Quantitative prediction of quality properties (i.e. extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.
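As a crude stand-in for the paper's multi-criteria genetic algorithm, the sketch below runs a mutation-based search over two invented architectural degrees of freedom (a CPU speed factor and a replica count) against toy performance and cost models; the real approach evaluates full PCM instances and keeps a Pareto front rather than a scalarised objective:

    import random

    random.seed(2)

    def evaluate(speed, replicas):
        response = 1.6 / (speed * replicas)      # toy performance model
        cost = 200 * speed + 150 * replicas      # toy cost model
        return response, cost

    candidate = (1.0, 1)
    for _ in range(100):
        speed = max(0.5, candidate[0] + random.gauss(0, 0.3))        # mutate
        replicas = max(1, candidate[1] + random.choice((-1, 0, 1)))
        r, c = evaluate(speed, replicas)
        best_r, best_c = evaluate(*candidate)
        if r + c / 1000 < best_r + best_c / 1000:  # scalarised trade-off
            candidate = (speed, replicas)
    print(f"best design found: speed={candidate[0]:.2f}, replicas={candidate[1]}")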
[54] Philipp Meier. Automated Transformation of Palladio Component Models to Queueing Petri Nets. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. FZI Prize "Best Diploma Thesis". [ bib | .pdf ]
[55] Qais Noorshams. Focusing the optimization of software architecture models using non-functional requirements. Master's thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany, 2010. [ bib | .pdf ]
[56] Qais Noorshams, Anne Martens, and Ralf Reussner. Using quality of service bounds for effective multi-objective software architecture optimization. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS '10), Oslo, Norway, October 4, 2010, 2010, pages 1:1-1:6. ACM, New York, NY, USA. 2010. [ bib | DOI | http | .pdf ]
[57] Marcel von Quast. Automatisierte Performance-Analyse von Virtualisierungsplattformen. Master's thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. [ bib ]
[58] Arnd Schröter, Gero Mühl, Samuel Kounev, Helge Parzyjegla, and Jan Richling. Stochastic Performance Analysis and Capacity Planning of Publish/Subscribe Systems. In 4th ACM International Conference on Distributed Event-Based Systems (DEBS 2010), July 12-15, Cambridge, United Kingdom, 2010. ACM, New York, USA. 2010, Acceptance Rate: 25%. [ bib | .pdf ]
[59] Mircea Trifu. Tool-Supported Identification of Functional Concerns in Object-Oriented Code. PhD thesis, Karlsruhe Institute of Technology, 2010. [ bib ]
[60] Roland Weiss, Heiko Koziolek, Johannes Stammel, and Zoya Durdik. Evolution problems in the context of sustainable industrial software systems. In Proceedings of 2nd Workshop of GI Working Group "Long-living Software Systems" (L2S2), 2010. [ bib | Abstract ]
We identified a set of open issues in the context of software aging and long-living systems with respect to the application domain of industrial automation systems, e.g. process control [7] and SCADA (supervisory control and data acquisition) systems. Existing systems in the automation domain suffer from expensive evolution and maintenance as well as from long release cycles. One of the root causes for this is that longevity was not considered during their construction. Most of the solutions that can be found today are not domain-specific and tend to focus on symptoms rather than on causes. Therefore, we initiated a research project whose goals are to define more clearly what domain-specific longevity means, to survey existing approaches, and to derive methods and techniques for addressing the mentioned problem in the industrial automation domain. In this contribution we present the objectives of this project and outline our state of progress.
[61] Dennis Westermann, Jens Happe, Michael Hauck, and Christian Heupel. The performance cockpit approach: A framework for systematic performance evaluations. In Proceedings of the 36th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2010), 2010, pages 31-38. IEEE Computer Society. 2010. [ bib | .pdf | Abstract ]
Evaluating the performance (timing behavior, throughput, and resource utilization) of a software system becomes more and more challenging as today's enterprise applications are built on a large basis of existing software (e.g. middleware, legacy applications, and third party services). As the performance of a system is affected by multiple factors on each layer of the system, performance analysts require detailed knowledge about the system under test and have to deal with a huge number of tools for benchmarking, monitoring, and analyzing. In practice, performance analysts try to handle the complexity by focusing on certain aspects, tools, or technologies. However, these isolated solutions are inefficient due to limited reuse and knowledge sharing. The Performance Cockpit presented in this paper is a framework that encapsulates knowledge about performance engineering, the system under test, and analyses in a single application by providing a flexible, plug-in based architecture. We demonstrate the value of the framework by means of two different case studies.
[62] Dennis Westermann and Christof Momm. Using software performance curves for dependable and cost-efficient service hosting. In Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems, Oslo, Norway, 2010, QUASOSS '10, pages 3:1-3:6. ACM, New York, NY, USA. 2010. [ bib | DOI | http | .pdf | Abstract ]
The upcoming business model of providing software as a service (SaaS) poses many challenges to a service provider. On the one hand, service providers have to guarantee a certain quality of service (QoS) and ensure that they adhere to these guarantees at runtime. On the other hand, they have to minimize the total cost of ownership (TCO) of their IT landscape in order to offer competitive prices. The performance of a system is a critical attribute that affects QoS as well as TCO. However, the evaluation of performance characteristics is a complex task. Many existing solutions do not provide the accuracy required for offering dependable guarantees. One major reason for this is that the dependencies between the usage profile (provided by the service consumers) and the performance of the actual system are rarely described sufficiently. Software Performance Curves are performance models that are derived by goal-oriented, systematic measurements of the actual software service. In this paper, we describe how Software Performance Curves can be derived by a service provider that hosts a multi-tenant system. Moreover, we illustrate how Software Performance Curves can be used to derive feasible performance guarantees, develop pricing functions, and minimize hardware resources.
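The kind of decision such performance curves support can be sketched as follows: given a fitted curve of response time over load, find the largest load that still honours an SLA bound. The curve shape, SLA value, and units are invented:

    def response_ms(load):
        if load >= 100:                      # saturation of the toy system
            return float("inf")
        return 20 + 3000 / (100 - load)      # fitted curve, steep near saturation

    sla_ms = 120.0
    feasible = [x for x in range(1, 100) if response_ms(x) <= sla_ms]
    print(f"max sustainable load under a {sla_ms} ms SLA: {max(feasible)} req/s")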
[63] Victor Pankratius and Samuel Kounev, editors. Emerging Research Directions in Computer Science. Contributions from the Young Informatics Faculty in Karlsruhe, Karlsruhe, Germany, 2010. KIT Scientific Publishing. ISBN: 978-3-86644-508-6. [ bib | http ]
[64] Viliam Simko, Petr Hnetynka, and Tomas Bures. From textual use-cases to component-based applications. In Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing 2010, Roger Lee, Jixin Ma, Liz Bacon, Wencai Du, and Miltos Petridis, editors, 2010, Studies in Computational Intelligence, pages 23-37. Springer Berlin Heidelberg. 2010. [ bib | DOI | http | .pdf | Abstract ]
A common practice to capture functional requirements of a software system is to utilize use-cases, which are textual descriptions of system usage scenarios written in a natural language. Since the substantial information about the system is captured by the use-cases, it comes as a natural idea to generate from these descriptions the implementation of the system (at least partially). However, the fact that the use-cases are in a natural language makes this task extremely difficult. In this paper, we describe a model-driven tool allowing code of a system to be generated from use-cases in plain English. The tool is based on the model-driven development paradigm, which makes it modular and extensible, so as to allow for use-cases in multiple language styles and generation for different component frameworks.
[65] Robert Heinrich and Barbara Paech. Defining the quality of business processes. In Modellierung, 2010, pages 133-148. [ bib | .pdf ]


Publications 2009

[1] Martin Küster. Modularization of text-to-model mapping specifications - a feasibility study using scannerless parsing. Master's thesis, Karlsruhe Institute of Technology, November 2009. [ bib ]
[2] Gregor Engels, Michael Goedicke, Ursula Goltz, Andreas Rausch, and Ralf Reussner. Design for Future - Legacy-Probleme von morgen vermeidbar? Informatik-Spektrum, 32(5):393-397, October 2009. [ bib ]
[3] Christoph Rathfelder and Henning Groenda. The Architecture Documentation Maturity Model ADM2. In Proceedings of the 3rd Workshop MDD, SOA und IT-Management (MSI 2009), Oldenburg, Germany, October 6-7, 2009, pages 65-80. GiTO-Verlag, Berlin, Germany. October 2009. [ bib | .pdf | Abstract ]
Today, the architectures of software systems are not stable for their whole lifetime but are often adapted driven by business needs. Preserving their quality characteristics beyond each of these changes requires deep knowledge of the requirements and the systems themselves. Proper documentation reduces the risk that knowledge is lost and hence is a basis for the system's maintenance in the long run. However, the influence of architectural documentation on the maintainability of software systems is neglected in current quality assessment methods. They are limited to documentation for anticipated change scenarios and do not provide a general assessment approach. In this paper, we propose a maturity model for architecture documentation. It is shaped relative to growing quality preservation maturity and independent of specific technologies or products. It supports weighing the necessary effort against reducing long-term risks in the maintenance phase. This makes it possible to take product maintainability requirements into account when selecting an appropriate documentation maturity level.
[4] Axel Busch. Performance Assessment of State-of-the-Art Computing Servers for Scientific Applications. Bachelor's thesis, Fachhochschule Kaiserslautern, Campus Zweibruecken, Amerikastrasse 1, 66482 Zweibruecken, Germany, October 2009. [ bib ]
[5] Axel Busch and Julien Leduc. Evaluation of energy consumption and performance of Intel's Nehalem architecture. openlab report, CERN, October 2009. [ bib | .pdf ]
[6] N. Geoffray, G. Thomas, G. Muller, P. Parrend, S. Frenot, and B. Folliot. I-JVM: une machine virtuelle Java pour l'isolation de composants dans OSGi. In Conférence Française sur les Systèmes d'Exploitation, September 2009. Toulouse, France. [ bib ]
[7] Samuel Kounev and Kai Sachs. Benchmarking and Performance Modeling of Event-Based Systems. it - Information Technology, 51(5), September 2009, Oldenbourg Wissenschaftsverlag, Munich, Germany. [ bib | Abstract ]
Event-based systems are used increasingly often to build loosely-coupled distributed applications. With their growing popularity and gradual adoption in mission critical areas, the need for novel techniques for benchmarking and performance modeling of event-based systems is increasing. In this article, we provide an overview of the state-of-the-art in this area considering both centralized systems based on message-oriented middleware as well as large-scale distributed publish/subscribe systems. We consider a number of specific techniques for benchmarking and performance modeling, discuss their advantages and disadvantages, and provide references for further information. The techniques we review help to ensure that systems are designed and sized to meet their quality-of-service requirements.
[8] Christoph Rathfelder and Samuel Kounev. Modeling Event-Driven Service-Oriented Systems using the Palladio Component Model. In Proceedings of the 1st International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS 2009), Amsterdam, The Netherlands, August 24-28, 2009, pages 33-38. ACM, New York, USA. August 2009. [ bib | DOI | .pdf | Abstract ]
The use of event-based communication within a Service-Oriented Architecture promises several benefits including more loosely-coupled services and better scalability. However, the loose coupling of services makes it difficult for system developers to estimate the behavior and performance of systems composed of multiple services. Most existing performance prediction techniques for systems using event-based communication require specialized knowledge to build the necessary prediction models. Furthermore, general purpose design-oriented performance models for component-based systems provide limited support for modeling event-based communication. In this paper, we propose an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication. We show how this extension can be exploited to model event-driven service-oriented systems with the aim to evaluate their performance and scalability.
[9] Christoph Rathfelder and Samuel Kounev. Model-based performance prediction for event-driven systems. In Proceedings of the Third ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, Tennessee, July 6-9, 2009, pages 33:1-33:2. ACM, New York, NY, USA. July 2009. [ bib | DOI | http | .pdf | Abstract ]
The event-driven communication paradigm provides a number of advantages for building loosely coupled distributed systems. However, the loose coupling of components in such systems makes it hard for developers to estimate their behavior and performance under load. Most existing performance prediction techniques for systems using event-driven communication require specialized knowledge to build the necessary prediction models. In this paper, we propose an extension of the Palladio Component Model (PCM) that provides natural support for modeling event-based communication and supports different performance prediction techniques.
[10] Fabian Brosig. Automated Extraction of Palladio Component Models from Running Enterprise Java Applications. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, June 2009. FZI Prize "Best Diploma Thesis". [ bib ]
[11] Henning Groenda. Certification of software component performance specifications. In Proceedings of the Fourteenth International Workshop on Component-Oriented Programming (WCOP) 2009, East Stroudsburg, PA, USA, June 25, 2009, pages 13-21. [ bib | Abstract ]
In software engineering, performance specifications of components support the successful evolution of complex software systems. Having trustworthy specifications is important to reliably detect unwanted effects of modifications on the performance using prediction techniques before they are experienced in live systems. This is especially important if there is no test system available and a system cannot be taken down or replaced in its entirety. Existing approaches do not state the quality of specifications at all, and hence the quality of the prediction is lowered if the assumption that all used specifications are suitable does not hold. In this paper, we propose a test-based approach to validate performance specifications against deployed component implementations. The validation is used to certify specifications, which in turn allows assessing the suitability of specifications for predicting the performance of a software system. A small example shows that the certification approach is applicable and creates trustworthy performance specifications.
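A minimal sketch of the validation step, with invented operation, measurements, and tolerance: compare measured response times against what a parameterised specification predicts, and certify the specification only if the relative error stays within bounds:

    def specified_response_ms(items):        # specification, parameterised over usage
        return 4.0 + 0.25 * items

    measurements = [(10, 6.7), (100, 29.5), (500, 131.0)]  # (items, measured ms)
    tolerance = 0.10
    certified = all(abs(measured - specified_response_ms(items)) / measured <= tolerance
                    for items, measured in measurements)
    print("specification certified" if certified else "specification rejected")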
[12] Kai Sachs, Samuel Kounev, Stefan Appel, and Alejandro Buchmann. A Performance Test Harness For Publish/Subscribe Middleware. In SIGMETRICS/Performance 2009 International Conference, Seattle, WA, USA, June 15-19, 2009, June 2009. (Demo Paper). [ bib | http | .pdf | Abstract ]
Publish/subscribe is becoming increasingly popular as communication paradigm for loosely-coupled message exchange. It is used as a building block in major new software architectures and technology domains such as Enterprise Service Bus, Enterprise Application Integration, Service-Oriented Architecture and Event-Driven Architecture. The growing adoption of these technologies leads to a strong need for benchmarks and performance evaluation tools in this area. In this demonstration, we present jms2009-PS, a benchmark for publish/subscribe middleware based on the Java Message Service standard interface.
[13] Klaus Krogmann and Ralf Reussner. Reverse Engineering von Software-Komponentenverhalten mittels Genetischer Programmierung. Softwaretechnik-Trends, 29(2):22-24, May 2009. [ bib | .pdf | Abstract ]
The use of components is an accepted principle in software development. Software components are usually regarded as black boxes whose internals are hidden from the component user. Architecture analysis methods for predicting non-functional properties make it possible, at the architectural level, to answer sizing questions for hardware/software environments and to carry out scalability analyses and what-if scenarios for extending legacy systems. To do so, however, they require information about the internals of components (e.g., the number of executed loops or calls to external services). To obtain such information, existing software components have to be analysed. The required information about the inside of the components must be reconstructed in such a way that it can be used by subsequent analysis methods for non-functional properties. A manual reconstruction of such models often fails because of the size of the systems and is very error-prone, since consistent abstractions over potentially thousands of lines of code have to be found. Existing approaches do not deliver the data-flow and control-flow abstractions required for analyses and simulations. The contribution of this paper is a reverse-engineering method for component behaviour. The resulting models (Palladio Component Model) are suitable for predicting performance properties (response time, throughput) and thus for the questions raised above. The models reconstructed from source code comprise parameterised control and data flow for software components and constitute an abstraction of the real relationships in the source code. The reverse-engineering method combines static and dynamic analysis techniques via genetic programming (a form of machine learning).
[14] Nikolaus Huber. Performance Modeling of Storage Virtualization. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany, April 2009. GFFT Prize. [ bib ]
[15] Hui Li, Wolfgang Theilmann, and Jens Happe. SLA translation in multi-layered service oriented architectures: Status and challenges. Technical Report 2009-8, Universität Karlsruhe (TH), April 2009. [ bib | Abstract ]
Service-Oriented Architecture (SOA) represents an architectural shift for building business applications based on loosely-coupled services. In a multi-layered SOA environment the exact conditions under which services are to be delivered can be formally specified by Service Level Agreements (SLAs). However, typical SLAs are just specified at the top-level and do not allow service providers to manage their IT stack accordingly as they have no insight on how top-level SLAs translate to metrics or parameters at the various layers of the IT stack. This paper addresses the research problems in the area of SLA translation, namely, the correlation and mapping of SLA-related metrics and parameters within and across IT layers. We introduce a conceptual framework for precise definition and classification of SLA translations in SOA. With such a framework, an in-depth review and analysis of the state of the art is carried out by the category, maturity, and applicability of approaches and methodologies. Furthermore, we discuss the fundamental research challenges to be addressed for turning the vision of holistic and transparent SLA translation into reality.
[16] Franz Brosch and Barbora Zimmerova. Design-Time Reliability Prediction for Software Systems. In International Workshop on Software Quality and Maintainability (SQM), March 2009, pages 70-74. [ bib | .pdf | Abstract ]
Reliability is one of the most critical extra-functional properties of a software system, which needs to be evaluated early in the development process when formal methods and tools can be applied. Though many approaches for reliability prediction exist, not much work has been done in combining different types of failures and system views that influence the reliability. This paper presents an integrated approach to reliability prediction, reflecting failures triggered by both software faults and physical-resource breakdowns, and incorporating detailed information about system control flow governed by user inputs.
[17] Henning Groenda, Christoph Rathfelder, and Ralph Mueller. Best of Eclipse DemoCamps - Ein Erfahrungsbericht vom dritten Karlsruher Eclipse DemoCamp. Eclipse Magazine, 3:8-10, March 2009. [ bib ]
[18] Ramon Nou, Samuel Kounev, Ferran Julia, and Jordi Torres. Autonomic QoS control in enterprise Grid environments using online simulation. Journal of Systems and Software, 82(3):486-502, March 2009, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI | http | .pdf | Abstract ]
As Grid Computing increasingly enters the commercial domain, performance and Quality of Service (QoS) issues are becoming a major concern. The inherent complexity, heterogeneity and dynamics of Grid computing environments pose some challenges in managing their capacity to ensure that QoS requirements are continuously met. In this paper, a comprehensive framework for autonomic QoS control in enterprise Grid environments using online simulation is proposed. The paper presents a novel methodology for designing autonomic QoS-aware resource managers that have the capability to predict the performance of the Grid components they manage and allocate resources in such a way that service level agreements are honored. Support for advanced features such as autonomic workload characterization on-the-fly, dynamic deployment of Grid servers on demand, as well as dynamic system reconfiguration after a server failure is provided. The goal is to make the Grid middleware self-configurable and adaptable to changes in the system environment and workload. The approach is subjected to an extensive experimental evaluation in the context of a real-world Grid environment and its effectiveness, practicality and performance are demonstrated.
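The predict-then-reconfigure loop at the heart of such autonomic controllers can be sketched with a crude analytical model standing in for the paper's online simulation: keep adding servers until the predicted response time honours the SLA (all numbers invented):

    arrival_rate = 45.0              # jobs/s expected in the next interval
    service_rate_per_server = 10.0   # jobs/s one server can handle
    sla_response_s = 0.5
    servers = 1
    while True:
        utilization = arrival_rate / (servers * service_rate_per_server)
        if utilization < 1.0:
            # crude M/M/1-style approximation of the response time
            predicted = (1.0 / service_rate_per_server) / (1.0 - utilization)
            if predicted <= sla_response_s:
                break
        servers += 1
    print(f"allocate {servers} servers (predicted response {predicted:.2f} s)")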
[19] Pierre Parrend. Enhancing automated detection of vulnerabilities in java components. In Fourth International Conference on Availability, Reliability and Security (AReS 2009), March 2009. Fukuoka, Japan. [ bib | Abstract ]
Java-based systems are built from components from various providers that are integrated together. Generic coding best practices are gaining momentum, but no tool is available so far that guarantees that the interactions between these components are performed in a secure manner. We propose the 'Weak Component Analysis' (WCA) tool, which performs static analysis of the component code to identify exploitable vulnerabilities. Three types of classes can be identified in Java components, each of which can be exploited through specific vulnerabilities. Internal classes which are not available to other components can be abused in an indirect manner. Shared classes which are provided by libraries can be abused through class-level vulnerabilities. Shared objects, i.e. instantiated classes, which are made available as local services in service-oriented programming platforms such as OSGi, Spring and Guice, can be abused through object-level vulnerabilities in addition to class-level vulnerabilities.
[20] Mircea Trifu. Improving the dataflow-based concern identification approach. In Proceedings of the 13-th European Conference on Software Maintenance and Reengineering, March 2009. IEEE. March 2009. [ bib ]
[21] G. Bosch, Robert Vaupel, and S. Wirag. Autonomous Management of System Throughput. Patent No. 7487506, United States, February 2009. [ bib ]
[22] Michael Hauck. Extending Performance-Oriented Resource Modelling in the Palladio Component Model. Diploma thesis, University of Karlsruhe (TH), Germany, February 2009. [ bib | .pdf | Abstract ]
The performance of a software system is strongly influenced by the execution environment the software runs in. In the Palladio Component Model (PCM), a domain-specific language for modelling component-based software systems, the execution environment must be modelled explicitly as it is needed for performance predictions. However, the current version of the PCM offers only rudimentary support for hardware resource modelling: for instance, it is not possible to distinguish between read and write accesses to a hard disk resource. This thesis develops an enhancement of the PCM meta-model that allows for better predictions based on more sophisticated resource models. The enhancement includes support for accessing resources through explicit interfaces with distinct services and the integration of resource controllers in the meta-model. To support modelling of infrastructure components such as application servers, this thesis introduces the separation of business interfaces and interfaces for accessing resources or the execution environment. Existing PCM tools have been adapted to support the simulation of PCM instances based on the enhanced meta-model. Additionally, the adapted meta-model has been successfully evaluated in two case studies to show that the extended meta-model has no side effects on preexisting predictions and also enables scenarios not supported before, such as the modelling of a Java Virtual Machine which processes higher-level resource demands.
[23] Steffen Becker, Heiko Koziolek, and Ralf Reussner. The Palladio component model for model-driven performance prediction. Journal of Systems and Software, 82:3-22, 2009, Elsevier Science Inc. [ bib | DOI | http | Abstract ]
One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independently from their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influencing factors like the hardware platform or the usage profile into account. Our approach uses the Palladio component model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. This paper presents our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy is sufficient to support the evaluation of architectural design decisions.
[24] Franz Brosch, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Pierre Parrend, Ralf Reussner, Johannes Stammel, and Emre Taspolatoglu. Software-industrialisierung. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2009. Interner Bericht. [ bib | http | Abstract ]
The industrialisation of software development is currently a heavily discussed topic. It is mainly concerned with increasing efficiency by raising the degree of standardisation, the degree of automation, and the division of labour. This affects the architectures underlying software systems as well as the development processes. Service-oriented architectures, for instance, are an example of increased standardisation within software systems. It has to be kept in mind that the software industry differs from the classical manufacturing industries in that software is an immaterial product which can be duplicated arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is little structured and meets no elevated requirements regarding reproducibility or quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes division of labour, the introduction of development processes with predictable properties (cost, time, ...), and, as a consequence, the creation of products with guaranteeable properties. The topics of the seminar included, among others: component-based software architectures; model-driven software development, its concepts and technologies; and industrial software development processes and their assessment. The seminar was organised like a scientific conference: submissions were evaluated in a two-stage peer-review process. In the first stage, the student papers were reviewed by fellow students; in the second stage, by the supervisors. The articles were presented in sessions, as at a conference. The best contributions were honoured with two Best Paper Awards, which went to Tom Beyer for his paper "Realoptionen für Entscheidungen in der Software-Entwicklung" and to Philipp Meier for his paper "Assessment Methods for Software Product Lines". The participants' talks were complemented by two invited talks: Collin Rogowski of 1&1 Internet AG presented the agile software development process used for the mail product GMX.COM, and Heiko Koziolek, Wolfgang Mahnke, and Michaela Saeftel of ABB spoke about software product line engineering in the context of the robotics applications developed at ABB.
[25] Fabian Brosig, Samuel Kounev, and Klaus Krogmann. Automated Extraction of Palladio Component Models from Running Enterprise Java Applications. In Proceedings of the 1st International Workshop on Run-time mOdels for Self-managing Systems and Applications (ROSSA 2009). In conjunction with the Fourth International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2009), Pisa, Italy, 2009, pages 10:1-10:10. ACM, New York, NY, USA. 2009. [ bib | .pdf | Abstract ]
Nowadays, software systems have to fulfill increasingly stringent requirements for performance and scalability. To ensure that a system meets its performance requirements during operation, the ability to predict its performance under different configurations and workloads is essential. Most performance analysis tools currently used in industry focus on monitoring the current system state. They provide low-level monitoring data without any performance prediction capabilities. For performance prediction, performance models are normally required. However, building predictive performance models manually requires a lot of time and effort. In this paper, we present a method for automated extraction of performance models of Java EE applications, based on monitoring data collected during operation. We extract instances of the Palladio Component Model (PCM) - a performance meta-model targeted at component-based systems. We evaluate the model extraction method in the context of a case study with a real-world enterprise application. Even though the extraction requires some manual intervention, the case study demonstrates that the existing gap between low-level monitoring data and high-level performance models can be closed.
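One extraction step any such approach needs is estimating per-operation resource demands from monitoring data. A common back-of-the-envelope route, sketched here with invented numbers, is the service demand law D = U / X (utilization attributable to an operation divided by its throughput); this is an illustration, not necessarily the paper's exact estimation technique:

    monitoring = {
        # operation: (CPU utilization share it caused, completed calls/s)
        "placeOrder":    (0.30, 120.0),
        "browseCatalog": (0.45, 900.0),
    }
    for op, (utilization, throughput) in monitoring.items():
        demand_ms = utilization / throughput * 1000
        print(f"{op}: estimated CPU demand {demand_ms:.2f} ms per call")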
[26] Fabian Brosig, Samuel Kounev, and Charles Paclat. Using WebLogic Diagnostics Framework to Enable Performance Prediction for Java EE Applications. Oracle Technology Network (OTN) Article, 2009. [ bib | .html | Abstract ]
Throughout the system life cycle, the ability to predict a software system's performance under different configurations and workloads is highly valuable to ensure that the system meets its performance requirements. During the design phase, performance prediction helps to evaluate different design alternatives. At deployment time, it facilitates system sizing and capacity planning. During operation, predicting the effect of changes in the workload or in the system configuration is beneficial for run-time performance management. The alternative to performance prediction is to deploy the system in an environment reflecting the configuration of interest and conduct experiments measuring the system performance under the respective workloads. Such experiments, however, are normally very expensive and time-consuming and therefore often considered not to be economically viable. To enable performance prediction we need an abstraction of the real system that incorporates performance-relevant data, i.e., a performance model. Based on such a model, performance analysis can be carried out. Unfortunately, building predictive performance models manually requires a lot of time and effort. The model must be designed to reflect the abstract system structure and capture its performance-relevant aspects. In addition, model parameters like resource demands or system configuration parameters have to be determined. Given the costs of building performance models, techniques for automatic extraction of models based on observation of the system at run-time are highly desirable. During system development, such models can be exploited to evaluate the performance of system prototypes. During operation, an automatically extracted performance model can be applied for efficient and performance-aware resource management. For example, if one observes an increased user workload and assumes a steady workload growth rate, performance predictions help to determine when the system would reach its saturation point. This way, system operators can react to the changing workload before the system has failed to meet its performance objectives thus avoiding a violation of service level agreements (SLAs). Current performance analysis tools used in industry mostly focus on profiling and monitoring transaction response times and resource consumption. The tools often provide large amounts of low level data while important information needed for building performance models is missing, e.g., the resource demands of individual components. In this article, we present a method for automated extraction of performance models for Java EE applications during operation. We implemented the method in a tool prototype and evaluated its effectiveness in the context of a case study with an early prototype of the SPECjEnterprise2009 benchmark application which in the following we will refer to as SPECjEnterprise2009_pre. (SPECjEnterprise2009 is the successor benchmark of the SPECjAppServer2004 benchmark developed by the Standard Performance Evaluation Corp. [SPEC]; SPECjEnterprise is a trademark of SPEC. The SPECjEnterprise2009 results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result.) The target Java EE platform we consider is Oracle WebLogic Server (WLS). The extraction is based on monitoring data that is collected during operation using the WebLogic Diagnostics Framework (WLDF). 
As a performance model, we selected the Palladio Component Model (PCM). PCM is a sophisticated performance modeling framework with mature tool support. In contrast to low-level mathematical models such as queueing networks, PCM is a high-level, UML-like, design-oriented model that captures the performance-relevant aspects of the system architecture. This makes PCM models easy for software developers to understand and use. We begin by providing some background on the technologies we use, focusing on the WLDF monitoring framework and the PCM models. We then describe the model extraction method in more detail. Finally, we present the case study we conducted and conclude with a summary.
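One operational law commonly used to turn such monitoring data into model parameters (a textbook relation, stated here for orientation; it is not specific to WLDF or PCM) is the Service Demand Law:

    D_i = U_i / X

where D_i is the total service demand a request places on resource i, U_i the measured utilization of resource i, and X the measured system throughput. Measuring U_i and X per request class during operation thus yields the per-resource demands that a predictive model needs, without profiling every component in isolation.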
[27] Nicolas Geoffray, Gael Thomas, Gilles Muller, Pierre Parrend, Stephane Frenot, and Bertil Folliot. I-JVM: A Java Virtual Machine for Component Isolation in OSGi. In 39th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2009. Lisbon, Portugal. [ bib ]
[28] Thomas Goldschmidt, Steffen Becker, and Axel Uhl. FURCAS: Framework for UUID-Retaining Concrete to Abstract Syntax Mappings. In Proceedings of the 5th European Conference on Model Driven Architecture - Foundations and Applications (ECMDA 2009) - Tools and Consultancy Track, 2009. CTIT. 2009. [ bib | Abstract ]
Textual concrete syntaxes for models are beneficial for many reasons. They foster usability and productivity because of their fast editing style, their usage of error markers, autocompletion and quick fixes. Several frameworks and tools from different communities for creating concrete textual syntaxes for models emerged during recent years. However, these approaches have failed to provide a general solution. Open issues are incremental parsing and model updating as well as partial and federated views. Building views on abstract models is one of the key concepts of model-driven engineering. Different views help to present concepts behind a model in a way that they can be understood and edited by different stakeholders or developers in different roles. Within graphical modelling, several approaches exist that allow the definition of explicit holistic, partial or combined graphical views for models. On the other hand, several frameworks that provide textual editing support for models have been presented over recent years. However, the combination of both principles, meaning textual, editable and decorating views, is lacking in all of these approaches. In this presentation, we show FURCAS (Framework for UUID Retaining Concrete to Abstract Syntax Mappings), a textual decorator approach that allows the textual concrete syntax to be stored and managed separately from the actual abstract model elements. This makes it possible to define textual views on models that may be partial and/or overlapping with respect to other (graphical and/or textual) views.
[29] Thomas Goldschmidt, Steffen Becker, and Axel Uhl. Textual views in model driven engineering. In Proceedings of the 35th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), 2009. IEEE. 2009. [ bib | Abstract ]
Building views on abstract models is one of the key concepts of model-driven engineering. Different views help to present concepts behind a model in a way that they can be understood and edited by different stakeholders or developers in different roles. Within graphical modelling, several approaches exist that allow the definition of explicit holistic, partial or combined graphical views for models. On the other hand, several frameworks that provide textual editing support for models have been presented over recent years. However, the combination of both principles, meaning textual, editable and decorating views, is lacking in all of these approaches. In this paper, we introduce a textual decorator approach that allows the textual concrete syntax to be stored and managed separately from the actual abstract model elements. This makes it possible to define textual views on models that may be partial and/or overlapping with respect to other (graphical and/or textual) views.
[30] Jens Happe, Henning Groenda, and Ralf H. Reussner. Performance Evaluation of Scheduling Policies in Symmetric Multiprocessing Environments. In Proceedings of the 17th IEEE International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'09), 2009. [ bib | .pdf | Abstract ]
The shift of hardware architecture towards parallel execution led to a broad usage of multi-core processors in desktop systems and in server systems. The benefit of additional processor cores for software performance depends on the software's parallelism as well as the operating system scheduler's capabilities. In particular, the load on the available processors (or cores) strongly influences response times and throughput of software applications. Hence, a sophisticated understanding of the mutual influence of software behaviour and operating system schedulers is essential for accurate performance evaluations. Multi-core systems pose new challenges for performance analysis and developers of operating systems. For example, an optimal scheduling policy for multi-server systems, such as shortest remaining processing time (SRPT) for single-server systems, is not yet known in queueing theory. In this paper, we present a detailed experimental evaluation of general purpose operating system (GPOS) schedulers in symmetric multiprocessing (SMP) environments. In particular, we are interested in the influence of multiprocessor load balancing on software performance. Additionally, the evaluation includes effects of GPOS schedulers that can also occur in single-processor environments, such as I/O-boundedness of tasks and different prioritisation strategies. The results presented in this paper provide the basis for the future development of more accurate performance models of today's software systems.
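The flavour of such an experiment can be sketched in a few lines (my own illustration, not the authors' experimental harness): saturate the machine with CPU-bound tasks and observe how task response times spread once the task count exceeds the number of cores, which is exactly where load-balancing behaviour becomes visible:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Hypothetical micro-experiment: run n CPU-bound tasks concurrently and
    // report each task's wall-clock time; with n above the core count, the
    // scheduler's load balancing shows up in the spread of the times.
    public class SchedulerProbe {
        static volatile long sink; // keeps the JIT from removing the busy loop

        static long spin(long iterations) {
            long x = 0;
            for (long i = 0; i < iterations; i++) x += i ^ (x << 1);
            return x;
        }

        public static void main(String[] args) throws Exception {
            int n = args.length > 0 ? Integer.parseInt(args[0]) : 8;
            ExecutorService pool = Executors.newFixedThreadPool(n);
            List<Future<Long>> times = new ArrayList<>();
            for (int t = 0; t < n; t++) {
                times.add(pool.submit(() -> {
                    long start = System.nanoTime();
                    sink = spin(500_000_000L);
                    return System.nanoTime() - start;
                }));
            }
            for (Future<Long> f : times)
                System.out.printf("task time: %.2f s%n", f.get() / 1e9);
            pool.shutdown();
        }
    }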
[31] Jens Happe, Hui Li, and Wolfgang Theilmann. Black-box Performance Models: Prediction based on Observation. In Proceedings of the 1st International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS), 2009, pages 19-24. ACM, New York, NY, USA. 2009. [ bib | DOI | Abstract ]
Software performance engineering enables software architects to find potential performance problems, such as bottlenecks and long delays, prior to implementation and testing. Such early feedback on the system's performance is essential to develop and maintain efficient and scalable applications. However, the unavailability of data necessary to design performance models often hinders its application in practice. During system maintenance, the existing system has to be included into the performance model. For large, heterogeneous, and complex systems that have grown over time, modelling becomes infeasible due to the sheer size and complexity of the systems. Re-engineering approaches also fail due to the large and heterogeneous technology stack. Especially for such systems, performance prediction is essential. In this position statement, we propose goal-oriented abstractions of large parts of a software system based on systematic measurements. The measurements provide the information necessary to determine Black-box Performance Models that directly capture the influence of a system's usage and workload on performance (response time, throughput, and resource utilisation). We outline the research challenges that need to be addressed in order to apply Black-box Performance Models.
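As a toy illustration of observation-based prediction (my sketch; the position paper itself only outlines the research agenda): fit measured response times against request rate and extrapolate. A linear fit is of course only valid far from saturation, which is precisely why systematic, goal-oriented measurements are called for:

    // Minimal black-box sketch: least-squares fit of response time vs. load.
    // All measurement values below are made up for illustration.
    public class BlackBoxFit {
        public static void main(String[] args) {
            double[] load = {10, 20, 30, 40};         // requests/s (hypothetical)
            double[] resp = {0.11, 0.13, 0.18, 0.26}; // seconds (hypothetical)
            int n = load.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += load[i]; sy += resp[i];
                sxx += load[i] * load[i]; sxy += load[i] * resp[i];
            }
            double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double intercept = (sy - slope * sx) / n;
            System.out.printf("predicted at 50 req/s: %.3f s%n", intercept + slope * 50);
        }
    }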
[32] Jens Happe. Predicting Software Performance in Symmetric Multi-core and Multiprocessor Environments, volume 3 of The Karlsruhe Series on Software Design and Quality. Universitätsverlag Karlsruhe, 07 2009. [ bib | DOI | Abstract ]
With today's rise of multi-core processors, concurrency becomes a ubiquitous challenge in software development. Performance prediction methods have to reflect the influence of multiprocessing environments on software performance in order to help software architects to find potential performance problems during early development phases. In this thesis, we address the influence of the operating system scheduler on software performance in symmetric multiprocessing environments.
[33] Michael Hauck, Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Modelling Layered Component Execution Environments for Performance Prediction. In Proceedings of the 12th International Symposium on Component Based Software Engineering (CBSE 2009), 2009, number 5582 in LNCS, pages 191-208. Springer. 2009. [ bib | DOI | .html | .pdf | Abstract ]
Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly popular in reality and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools which allow modelling and predicting state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.
[34] Christian Henrich, Matthias Huber, Carmen Kempka, Jörn Müller-Quade, and Mario Strefler. Brief announcement: Towards secure cloud computing. In Stabilization, Safety, and Security of Distributed Systems, 11th International Symposium, SSS 2009, Lyon, France, November 3-6, 2009. Proceedings, 2009, pages 785-786. [ bib ]
[35] Jörg Henss and Joachim Kleb. Protégé 4 Backend for Native OWL Persistence. In 11th Intl. Protégé Conference - June 23-26, 2009 - Amsterdam, Netherlands, 2009. [ bib | .pdf ]
[36] Jörg Henss, Joachim Kleb, Stephan Grimm, and Jürgen Bock. A Database Backend for OWL. In Proceedings of the 5th International Workshop on OWL: Experiences and Directions (OWLED 2009), Chantilly, VA, United States, October 23-24, 2009, Rinke Hoekstra and Peter F. Patel-Schneider, editors, 2009, volume 529 of CEUR Workshop Proceedings. CEUR-WS. 2009. [ bib | .pdf | Abstract ]
Most Semantic Web applications are built on top of technology based on the Semantic Web layer cake and the W3C ontology languages RDF(S) and OWL. However, RDF(S) embodies a graph abstraction model and is thus represented by triple-based artifacts. When using OWL as a language for Semantic Web knowledge bases, this abstraction no longer holds: OWL is built upon an axiomatic model representation. Consequently, storage systems focusing on the triple-based representation of ontologies seem to be no longer adequate as a persistence layer for OWL ontologies. Our proposed system allows for a native mapping of OWL constructs to a database schema without an unnecessarily complex transformation into triples. Our evaluation shows that our system performs comparably to current OWL storage systems.
[37] Matthias Huber. Approximatives und diskriminatives Mining von gewichteten Graphen. Master's thesis, Universität Karlsruhe (TH), 2009. [ bib | .pdf | Abstract ]
Edge-weighted graphs occur in many application domains, such as image processing, transport logistics, or software engineering. Analysing such graphs with graph mining techniques is a worthwhile task. However, no graph mining algorithm so far has been able to analyse edge-weighted graphs directly. Until now, edge weights were either discretised so that weighted graphs could be analysed at all, or they were only considered in a post-processing step. This thesis presents an extension of the graph mining algorithms gSpan and CloseGraph based on constraints on edge weights, which makes it possible to consider and evaluate edge weights directly during mining. This opens up new pruning opportunities that can lead to runtime gains. Several methods for evaluating edge weights are presented. Furthermore, these options are evaluated and compared with respect to runtime and result quality on real data from the domains of transport logistics and software engineering. It is shown that the extensions presented in this thesis improve the runtime of the graph mining algorithm while achieving similar result quality.
[38] Jens Kübler and Thomas Goldschmidt. A Pattern Mining Approach Using QVT. In Proceedings of the 5th European Conference on Model Driven Architecture - Foundations and Applications (ECMDA), 2009, Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg. 2009. [ bib | .pdf | Abstract ]
Model Driven Software Development (MDSD) has matured over the last few years and is now becoming an established technology. Models are used in various contexts, and the ability to perform different kinds of analyses on the modelled applications is one of their potentials. In several use cases during these analyses, it is necessary to detect patterns within large models. A general analysis technique that deals with large amounts of data is pattern mining. Different algorithms for different purposes have been developed over time. However, current approaches were not designed to operate on models. By employing QVT for matching and transforming patterns, we present an approach that deals with this problem. Furthermore, we present an idea for using our pattern mining approach to estimate the maintainability of modelled artifacts.
[39] Lucia Kapova and Thomas Goldschmidt. Automated feature model-based generation of refinement transformations. In Proceedings of the 35th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), 2009. IEEE. 2009. [ bib | .pdf ]
[40] Samuel Kounev. Wiley Encyclopedia of Computer Science and Engineering, edited by Benjamin W. Wah, chapter Software Performance Evaluation. Wiley-Interscience, John Wiley & Sons Inc., 2009. [ bib | http | .pdf | Abstract ]
Modern software systems are expected to satisfy increasingly stringent requirements for performance and scalability. To avoid the pitfalls of inadequate quality of service, it is important to evaluate the expected performance and scalability characteristics of systems during all phases of their life cycle. At every stage, performance evaluation is carried out with a specific set of goals and constraints. In this article, we present an overview of the major methods and techniques for software performance evaluation. We start by considering the different types of workload models that are typically used in performance evaluation studies. We then discuss performance measurement techniques including platform benchmarking, application profiling and system load testing. Following this, we survey the most common methods and techniques for performance modeling of software systems. We consider the major types of performance models used in practice and discuss their advantages and disadvantages. Finally, we briefly discuss operational analysis as an alternative to queueing theoretic methods.
[41] Samuel Kounev and Christofer Dutz. QPME - A Performance Modeling Tool Based on Queueing Petri Nets. ACM SIGMETRICS Performance Evaluation Review (PER), Special Issue on Tools for Computer Performance Modeling and Reliability Analysis, 36(4):46-51, 2009, ACM, New York, NY, USA. [ bib | .pdf | Abstract ]
Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present QPME (Queueing Petri net Modeling Environment) - a tool that supports the modeling and analysis of systems using queueing Petri nets. QPME provides an Eclipse-based editor for designing queueing Petri net models and a powerful simulation engine for analyzing the models. After presenting the tool, we discuss the ongoing work on the QPME project and the planned future enhancements of the tool.
[42] Heiko Koziolek and Franz Brosch. Parameter dependencies for component reliability specifications. In Proceedings of the 6th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA), 2009, volume 253(1) of ENTCS, pages 23 - 38. Elsevier. 2009. [ bib | DOI | .pdf | Abstract ]
Predicting the reliability of a software system at an architectural level during early design stages can help to make systems more dependable and avoid costs for fixing the implementation. Existing reliability prediction methods for component-based systems use Markov models and assume that the software architect can provide the transition probabilities between individual components. This is, however, not possible if the components are black boxes, exist only at the design stage, or are not available for testing. We propose a new modelling formalism that includes parameter dependencies into software component reliability specifications. It allows the software architect to only model a system-level usage profile, which a tool then propagates to individual components to determine the transition probabilities of the Markov model. We demonstrate the applicability of our approach by modelling the reliability of a retail management system and conducting reliability predictions.
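For orientation, the Markov formalism behind such predictions can be reduced to a textbook approximation (the classical architecture-based reliability model, not the paper's exact formalism): if the propagated usage profile says that component i is visited v_i times on average and succeeds with probability r_i per visit, the reliability of a sequential system is roughly

    R_{sys} \approx \prod_i r_i^{v_i}

which is why propagating the system-level usage profile down to per-component visit counts is exactly what makes the transition probabilities, and hence R_sys, computable without manual estimates.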
[43] Klaus Krogmann, Christian M. Schweda, Sabine Buckl, Michael Kuperberg, Anne Martens, and Florian Matthes. Improved Feedback for Architectural Performance Prediction using Software Cartography Visualizations. In Architectures for Adaptive Systems (Proceedings of QoSA 2009), Raffaela Mirandola, Ian Gorton, and Christine Hofmeister, editors, 2009, volume 5581 of Lecture Notes in Computer Science, pages 52-69. Springer. 2009, Best Paper Award. [ bib | DOI | http | Abstract ]
Software performance engineering provides techniques to analyze and predict the performance (e.g., response time or resource utilization) of software systems to avoid implementations with insufficient performance. These techniques operate on models of software, often at an architectural level, to enable early, design-time predictions for evaluating design alternatives. Current software performance engineering approaches allow the prediction of performance at design time, but often provide cryptic results (e.g., lengths of queues). These prediction results can hardly be mapped back to the software architecture by humans, making it hard to derive the right design decisions. In this paper, we integrate software cartography (a map technique) with software performance engineering to overcome the limited interpretability of raw performance prediction results. Our approach is based on model transformations and a general software visualization approach. It provides an intuitive mapping of prediction results to the software architecture which simplifies design decisions. We successfully evaluated our approach in a quasi-experiment involving 41 participants by comparing the correctness of performance-improving design decisions and participants' time effort using our novel approach to an existing software performance visualization.
[44] Michael Kuperberg. FOBIC: A Platform-Independent Performance Metric based on Dynamic Java Bytecode Counts. In Proceedings of the 2008 Dependability Metrics Research Workshop, Technical Report TR-2009-002, Felix C. Freiling, Irene Eusgeld, and Ralf Reussner, editors, November 10, 2008, Mannheim, Germany, 2009, pages 7-11. Department of Computer Science, University of Mannheim. [ bib | .pdf | Abstract ]
ICS include supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as skid-mounted Programmable Logic Controllers (PLC) as are often found in the industrial control sector. In contrast to traditional information processing systems, logic executing in ICS has a direct effect on the physical world. These control systems are critical for the operation of complex infrastructures that are often highly interconnected and thus mutually dependent systems. Numerous methodical approaches aim at modeling, analysis and simulation of single systems' behavior. However, modeling the interdependencies between different systems and describing their complex behavior by simulation is still an open issue. Although different modeling approaches, from classic network theory to bio-inspired methods, can be found in the scientific literature, a comprehensive method for modeling and simulation of interdependencies among complex systems has still not been established. An overall model is needed to provide security and reliability assessment taking into account various kinds of threats and failures. These metrics are essential for a vulnerability analysis. Vulnerability of a critical infrastructure is defined as the presence of flaws or weaknesses in its design, implementation, operation and/or management that render it susceptible to destruction or incapacitation by a threat, in spite of its capacity to absorb and recover ("resilience"). A significant challenge associated with this model may be to create "what-if" scenarios for the analysis of interdependencies. Interdependencies affect the consequences of single or multiple failures or disruptions in interconnected systems. The different types of interdependencies can induce feedback loops which have accelerating or retarding effects on a system's response, as observed in system dynamics. Threats to control systems can come from numerous sources, including hostile governments, terrorist groups, disgruntled employees, malicious intruders, complexities, accidents, natural disasters and malicious or accidental actions by insiders. The threats and failures can impact ICS themselves as well as underlying (controlled) systems. In previous work, seven evaluation criteria have been defined and eight good practice methods have been selected and briefly described. An analysis of these techniques is undertaken and their suitability for modeling and simulation of interdependent critical infrastructures in general is hypothesized.
[45] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. TimerMeter: Quantifying Accuracy of Software Times for System Analysis. In Proceedings of the 6th International Conference on Quantitative Evaluation of SysTems (QEST) 2009, 2009. [ bib | .pdf ]
[46] Michael Kuperberg, Fouad Omri, and Ralf Reussner. Using Heuristics to Automate Parameter Generation for Benchmarking of Java Methods. In Proceedings of the 6th International Workshop on Formal Engineering approaches to Software Components and Architectures, York, UK, 28th March 2009 (ETAPS 2009, 12th European Joint Conferences on Theory and Practice of Software), 2009. [ bib | .pdf | Abstract ]
Automated generation of method parameters is needed in benchmarking scenarios where manual or random generation of parameters is not suitable, does not scale, or is too costly. However, for a method to execute correctly, the generated input parameters must not violate implicit semantic constraints, such as ranges of numeric parameters or the maximum length of a collection. For most methods, such constraints have no formal documentation, and human-readable documentation of them is usually incomplete and ambiguous. Random search of appropriate parameter values is possible but extremely ineffective and does not respect such implicit constraints. Also, the role of polymorphism and of the method invocation targets is often not taken into account. Most existing approaches that claim automation focus on a single method and ignore the structure of the surrounding APIs where those exist. In this paper, we present HEURIGENJ, a novel heuristics-based approach for automatically finding legal and appropriate method input parameters and invocation targets, by approximating the implicit constraints imposed on them. Our approach is designed to support systematic benchmarking of API methods written in the Java language. We evaluate the presented approach by applying it to two frequently-used packages of the Java platform API, and demonstrating its coverage and effectiveness.
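A minimal sketch of the general idea (my own illustration; it does not reproduce HEURIGENJ's actual heuristics): probe a method via reflection with type-specific candidate values and keep those that do not trigger an exception, i.e. that satisfy the implicit preconditions:

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical parameter search: candidate values per parameter type;
    // a thrown exception is taken as a violated implicit precondition.
    public class ParamProbe {
        static final Object[] INT_CANDIDATES = {0, 1, -1, 16, 1024, Integer.MAX_VALUE};

        static List<Object> legalArgs(Method m, Object target) {
            List<Object> legal = new ArrayList<>();
            for (Object c : INT_CANDIDATES) {
                try {
                    m.invoke(target, c);
                    legal.add(c); // no exception: candidate respects the preconditions
                } catch (InvocationTargetException | IllegalAccessException e) {
                    // rejected: the method signalled an illegal argument
                }
            }
            return legal;
        }

        public static void main(String[] args) throws Exception {
            Method substring = String.class.getMethod("substring", int.class);
            System.out.println(legalArgs(substring, "hello")); // prints [0, 1]
        }
    }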
[47] Gero Mühl, Arnd Schröter, Helge Parzyjegla, Samuel Kounev, and Jan Richling. Stochastic Analysis of Hierarchical Publish/Subscribe Systems. In Proceedings of the 15th International European Conference on Parallel and Distributed Computing (Euro-Par 2009), Delft, The Netherlands, August 25-28, 2009., 2009. Springer Verlag. 2009, Acceptance Rate (Full Paper): 33%. [ bib | http | .pdf | Abstract ]
With the gradual adoption of publish/subscribe systems in mission-critical areas, it is essential that systems are subjected to rigorous performance analysis before they are put into production. However, existing approaches to performance modeling and analysis of publish/subscribe systems suffer from many limitations that seriously constrain their practical applicability. In this paper, we present a generalized method for stochastic analysis of publish/subscribe systems employing identity-based hierarchical routing. The method is based on an analytical model that addresses the major limitations underlying existing work in this area. In particular, it supports arbitrary broker overlay topologies and allows workload parameters, e.g., publication rates and subscription lifetimes, to be set individually for each broker. The analysis is illustrated by a running example that helps to gain a better understanding of the derived mathematical relationships.
[48] Anne Martens, Franz Brosch, and Ralf Reussner. Optimising multiple quality criteria of service-oriented software architectures. In Proceedings of the 1st International Workshop on the Quality of Service-Oriented Software Systems (QUASOSS), 2009, pages 25-32. ACM, New York, NY, USA. 2009. [ bib | DOI | .pdf | Abstract ]
Quantitative prediction of quality criteria (i.e. extra-functional properties such as performance, reliability, and cost) of service-oriented architectures supports a systematic software engineering approach. However, various degrees of freedom in building a software architecture span a large, discontinuous design space. Currently, solutions with a good trade-off between multiple quality criteria have to be found manually. We propose an automated approach to search the design space by modifying the architectural models, to improve the architecture with respect to multiple quality criteria, and to find optimal architectural models. The found optimal architectural models can be used as an input for trade-off analyses and thus allow systematic engineering of high-quality software architectures. Using this approach, the design of a high-quality component-based software system is eased for the software architect and thus saves cost and effort. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model (PCM). Currently, the method supports quantitative performance and reliability prediction, but it can be extended to other quality properties such as cost as well.
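To make the search loop concrete, here is a deliberately simplified skeleton (scalarised fitness and mutation only; the actual approach uses a true multi-criteria genetic algorithm over PCM architecture models):

    import java.util.*;

    // Simplified evolutionary search over architecture candidates. A candidate
    // is a vector of design decisions (e.g. which component variant fills which
    // slot); evaluate() stands in for the quality predictions (performance,
    // reliability, cost) run on the corresponding architecture model.
    public class ArchSearch {
        static final Random RND = new Random(42);

        static double evaluate(int[] candidate) {
            double f = 0; // placeholder fitness; lower is better
            for (int gene : candidate) f += Math.abs(gene - 3);
            return f;
        }

        static int[] mutate(int[] parent) {
            int[] child = parent.clone();
            child[RND.nextInt(child.length)] = RND.nextInt(10); // new design choice
            return child;
        }

        public static void main(String[] args) {
            List<int[]> pop = new ArrayList<>();
            for (int i = 0; i < 20; i++) pop.add(mutate(new int[5]));
            for (int gen = 0; gen < 100; gen++) {
                pop.sort(Comparator.comparingDouble(ArchSearch::evaluate));
                List<int[]> next = new ArrayList<>(pop.subList(0, 10)); // keep the best
                for (int i = 0; i < 10; i++) next.add(mutate(pop.get(RND.nextInt(10))));
                pop = next;
            }
            pop.sort(Comparator.comparingDouble(ArchSearch::evaluate));
            System.out.println(Arrays.toString(pop.get(0)) + ", fitness " + evaluate(pop.get(0)));
        }
    }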
[49] Anne Martens and Heiko Koziolek. Automatic, model-based software performance improvement for component-based software designs. In Proceedings of the Sixth International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA 2009), 2009, volume 253(1) of Electronic Notes in Theoretical Computer Science, pages 77 - 93. Elsevier. 2009. [ bib | DOI | .pdf | Abstract ]
Formal performance prediction methods, based on queueing network models, allow evaluating software architectural designs for performance. Existing methods provide prediction results such as response times and throughputs, but do not guide the software architect on how to improve the design. We propose a novel approach to optimise the expected performance of component-based software designs by automatically generating and evaluating design alternatives. The design space spanned by different design options (e.g. available components and configuration options) is systematically explored using metaheuristic search techniques and performance-domain heuristics. The gap between applying formal performance predictions and actually improving the design of a system can thus be closed. This paper presents a formal description and a prototypical implementation of our approach with a proof-of-concept case study.
[50] Christof Momm, Michael Gebhart, and Sebastian Abeck. A model-driven approach for monitoring business performance in web service compositions. In Fourth International Conference on Internet and Web Applications and Services (ICIW 2009), 2009. IEEE Computer Society Press, Venice, Italy. 2009. [ bib ]
[51] Barbara Paech, Andreas Oberweis, and Ralf Reussner. Qualität von Geschäftsprozessen und Unternehmenssoftware - Eine Thesensammlung. In Software Engineering (Workshops), Jürgen Münch and Peter Liggesmeyer, editors, 2009, volume 150 of LNI, pages 223-228. GI. 2009. [ bib ]
[52] Pierre Parrend. Java Software, chapter Security for Java Platforms. Nova Publishers, New York, 2009. [ bib | http | Abstract ]
The Java environment is composed of two main parts: the Java language and the Java virtual machine. It is designed with the assumption that no software entity is to be trusted, and therefore that each needs to be checked. The first success of Java was the inception of Java Applets, which enabled fully untrusted code provided by unknown web sites to be executed in a browser. This feature demonstrated the isolation the Java Virtual Machine (JVM) provides between the applications and the underlying operating system. However, the evolution of Java systems from mono-application to multi-component systems induces new vulnerabilities that developers are not aware of, which requires that additional security mechanisms be used to support secure Java environments. This survey presents an overview of the security issues for the Java language and Virtual Machine. The default security model is defined and explained. Its three main components are the Java language itself, Bytecode validation at load time, and modularity supports such as the class loaders and permission domains. Known vulnerabilities are presented. They originate either in the JVM or in the code of applications. Two approaches exist for describing code vulnerabilities: source code and Bytecode. This duality makes it possible to identify them both during development, through manual code review and tools, and in an automated manner during code deployment or installation. Security extensions for the Java Execution Environment and tools for writing secure Java code are presented. They are of three types: platform extensions, static analysis approaches and behavior injection. Platform extensions consist in strengthening the isolation between components (beans, etc.) and providing support for resource consumption accounting and control. Static analysis is often performed through generic tools that improve the code quality and thus reduce the number of exploitable bugs in the Java code. Some of these tools, such as FindBugs, encompass security-specific bugs, and some, such as JSLint, are dedicated to security analysis. Bytecode injection enables security checks to be introduced in the core of the code. It can be performed with the developers involved, for instance through aspect-oriented programming, or transparently, through Bytecode injection or meta-programming. An overview of the existing protection mechanisms for Java systems, according to the life-cycle moment at which they are enforced and the development overhead they imply, concludes this work.
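The permission-domain mechanism the survey refers to is exposed through the standard java.security API; a minimal programmatic check looks as follows (plain JDK usage, nothing specific to the surveyed extensions):

    import java.io.FilePermission;
    import java.security.AccessController;

    // Throws AccessControlException unless every protection domain on the
    // call stack has been granted read access to /etc/passwd, e.g. via a
    // policy file; this stack-based check is the core of the Java sandbox.
    public class PermissionCheck {
        public static void main(String[] args) {
            AccessController.checkPermission(new FilePermission("/etc/passwd", "read"));
            System.out.println("read permission granted");
        }
    }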
[53] Pierre Parrend and Stéphane Frénot. Security benchmarks of OSGi platforms: Toward hardened OSGi. Software: Practice & Experience, 2009. [ bib | http | Abstract ]
OSGi Platforms are Extensible Component Platforms, i.e., they support the dynamic and transparent installation of components that are provided by third-party providers at runtime. This feature makes systems built using OSGi extensible and adaptable, but opens a dangerous attack vector that has not been considered as such until recently. Performing a security benchmark of the OSGi platform is therefore necessary to gather knowledge related to the weaknesses it introduces, as well as to propose enhancements. A suitable Vulnerability Pattern is defined. The attacks that can be performed through malicious OSGi components are identified. Quantitative analysis is then performed so as to characterize the origin of the vulnerabilities and the target and consequences of attacks. The assessment of the security status of the various implementations of the OSGi Platform and of existing security mechanisms is done through a metric we introduce, the Protection Rate. Based on these benchmarks, OSGi-specific security enhancements are identified and evaluated. First, recommendations are given; then, an evaluation is performed through the Protection Rate metric and performance analysis. Lastly, further requirements for building secure OSGi Platforms are identified.
[54] Kai Sachs, Samuel Kounev, Stefan Appel, and Alejandro Buchmann. Benchmarking of Message-Oriented Middleware (Poster Paper). In Proceedings of the 3rd ACM International Conference on Distributed Event-Based Systems (DEBS-2009), Nashville, TN, USA, July 6-9, 2009, 2009. ACM, New York, NY, USA. 2009. [ bib | http | .pdf | Abstract ]
In this poster, we provide an overview of our past and current research in the area of Message-Oriented Middleware (MOM) performance benchmarks. Our main research motivation is a) to gain a better understanding of the performance of MOM, b) to show how to use benchmarks for the evaluation of performance aspects, and c) to establish performance modeling techniques. For a better understanding, we first introduce the Java Message Service (JMS) standard. Afterwards, we provide an overview of our work on MOM benchmark development, i.e., we present the SPECjms2007 benchmark and the new jms2009-PS, a test harness designed specifically for JMS-based pub/sub. We outline a new case study with jms2009-PS and present first results of our work in progress.
[55] Kai Sachs, Samuel Kounev, Jean Bacon, and Alejandro Buchmann. Benchmarking message-oriented middleware using the SPECjms2007 benchmark. Performance Evaluation, 66(8):410-434, 2009, Elsevier Science Publishers B. V., Amsterdam, The Netherlands. [ bib | DOI | http | .pdf | Abstract ]
Message-oriented middleware (MOM) is at the core of a vast number of financial services and telco applications, and is gaining increasing traction in other industries, such as manufacturing, transportation, health-care and supply chain management. Novel messaging applications, however, pose some serious performance and scalability challenges. In this paper, we present a methodology for performance evaluation of MOM platforms using the SPECjms2007 benchmark which is the world's first industry-standard benchmark specialized for MOM. SPECjms2007 is based on a novel application in the supply chain management domain designed to stress MOM infrastructures in a manner representative of real-world applications. In addition to providing a standard workload and metrics for MOM performance, the benchmark provides a flexible performance analysis framework that allows users to tailor the workload to their requirements. The contributions of this paper are: i) we present a detailed workload characterization of SPECjms2007 with the goal to help users understand the internal components of the workload and the way they are scaled, ii) we show how the workload can be customized to exercise and evaluate selected aspects of MOM performance, iii) we present a case study of a leading JMS platform, the BEA WebLogic server, conducting an in-depth performance analysis of the platform under a number of different workload and configuration scenarios. The methodology we propose is the first one that uses an industry-standard benchmark providing both a representative workload as well as the ability to customize it to evaluate the features of MOM platforms selectively.
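For readers unfamiliar with the API such benchmarks stress: the core JMS interaction is only a few calls (standard javax.jms usage; the JNDI names below are installation-specific placeholders):

    import javax.jms.*;
    import javax.naming.InitialContext;

    // Minimal JMS sender: look up a connection factory and queue via JNDI
    // (names differ per installation) and send one text message.
    public class JmsSend {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/ExampleQueue");
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("order #4711"));
            } finally {
                conn.close();
            }
        }
    }

SPECjms2007 scales exactly this kind of point-to-point and pub/sub traffic up to representative supply-chain workloads.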
[56] Johannes Stammel and Ralf Reussner. KAMP: Karlsruhe Architectural Maintainability Prediction. In Proceedings of the 1. Workshop des GI-Arbeitskreises Langlebige Softwaresysteme (L2S2): "Design for Future - Langlebige Softwaresysteme", Gregor Engels, Ralf Reussner, Christof Momm, and Stefan Sauer, editors, 2009, pages 87-98. [ bib | http | Abstract ]
In their lifetime, software systems usually need to be adapted in order to fit into a changing environment or to cover new required functionality. The effort necessary for implementing changes is related to the maintainability of the software system. Therefore, maintainability is an important quality aspect of software systems. Today, software architecture plays an important role in achieving software quality goals. Therefore, it is useful to evaluate software architectures regarding their impact on the quality of the program. However, unlike for other quality attributes, such as performance or reliability, there is relatively little work on quantifying the impact of the software architecture on maintainability. In particular, the cost of software evolution not only stems from software-development activities, such as reimplementation, but also from software-management activities, such as re-deployment, upgrade installation, etc. Most metrics for software maintainability are based on code of object-oriented designs, not on architectures, and do not consider costs from software-management activities. Likewise, existing architectural maintainability evaluation techniques yield only qualitative (and often subjective) results manually and concentrate on software (re-)development costs. In this paper, we present KAMP, the Karlsruhe Architectural Maintainability Prediction method, a quantitative approach to evaluate the maintainability of software architectures. Our approach estimates the costs of change requests for a given architecture and takes into account re-implementation costs as well as re-deployment and upgrade activities. We combine several strengths of existing approaches. First, our method evaluates maintainability for concrete change requests and makes use of explicit architecture models. Second, it estimates change efforts using semi-automatic derivation of work plans, bottom-up effort estimation, and guidance in the investigation of estimation support (e.g. design and code properties, team organization, development environment, and other influence factors).


Publications 2008

[1] Achim Baier, Steffen Becker, Martin Jung, Klaus Krogmann, Carsten Röttgers, Niels Streekmann, Karsten Thoms, and Steffen Zschaler. Handbuch der Software-Architektur, chapter Modellgetriebene Software-Entwicklung, pages 93-122. dPunkt.verlag Heidelberg, 2 edition, December 2008. [ bib ]
[2] Heiko Koziolek, Steffen Becker, Jens Happe, and Ralf Reussner. Evaluating performance of software architecture models with the Palladio Component Model. In Model-Driven Software Development: Integrating Quality Assurance, Jörg Rech and Christian Bunse, editors, December 2008, pages 95-118. IDEA Group Inc. December 2008. [ bib ]
[3] Pierre Parrend. Software Security Models for Service-Oriented Programming (SOP) Platforms. PhD thesis, Institut National des Sciences Appliquées de Lyon, France, December 2008. [ bib ]
[4] Ralf H. Reussner and Wilhelm Hasselbring. Handbuch der Software-Architektur. dPunkt.verlag Heidelberg, 2 edition, December 2008. [ bib ]
[5] Christoph Rathfelder and Henning Groenda. Towards an Architecture Maintainability Maturity Model (AM3). Softwaretechnik-Trends, 28(4):3-7, November 2008, GI (Gesellschaft fuer Informatik), Bonn, Germany. [ bib | .pdf ]
[6] Christoph Rathfelder, Henning Groenda, and Ralf Reussner. Software Industrialization and Architecture Certification. In Industrialisierung des Software-Managements: Fachtagung des GI-Fachausschusses Management der Anwendungsentwicklung und -Wartung im Fachbereich Wirtschaftsinformatik (WI-MAW), Georg Herzwurm and Martin Mikusz, editors, volume 139 of Lecture Notes in Informatics (LNI), pages 169-180. November 2008. [ bib ]
[7] Heiko Koziolek, Steffen Becker, Jens Happe, and Ralf Reussner. Life-Cycle Aware Modelling of Software Components. In Proceedings of the 11th International Symposium on Component-Based Software Engineering (CBSE), October 2008, Lecture Notes in Computer Science, pages 278-285. Springer-Verlag Berlin Heidelberg, Universität Karlsruhe (TH), Karlsruhe, Germany. October 2008. [ bib | Abstract ]
Current software component models insufficiently reflect the different stages of the component life-cycle, which involves design, implementation, deployment, and runtime. Therefore, reasoning techniques for component-based models (e.g., protocol checking, QoS predictions, etc.) are often limited to a particular life-cycle stage. We propose modelling software components in different design stages, after implementation, and during deployment. Abstract models for newly designed components can be combined with refined models for already implemented components. As a proof-of-concept, we have implemented the new modelling techniques as part of our Palladio Component Model (PCM).
[8] Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Performance Prediction for Black-Box Components using Reengineered Parametric Behaviour Models. In Proceedings of the 11th International Symposium on Component Based Software Engineering (CBSE 2008), Karlsruhe, Germany, 14th-17th October 2008, October 2008, volume 5282 of Lecture Notes in Computer Science, pages 48-63. Springer-Verlag Berlin Heidelberg. October 2008. [ bib | .pdf | Abstract ]
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
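The translation step can be summarised compactly (a simplification of the paper's parameterised models): if the reconstructed behaviour model says that a service execution issues n_j instances of bytecode instruction or API call j for a given input, and bytecode benchmarking of platform P yields a cost c_j^P per instance, then the predicted execution duration on P is

    T^P \approx \sum_j n_j \cdot c_j^P

with the counts n_j expressed as functions of the input parameters, so the same behaviour model can be reused across platforms by re-benchmarking only the costs c_j^P.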
[9] Pierre Parrend and Stéphane Frénot. Classification of component vulnerabilities in Java service oriented programming (SOP) platforms. In Conference on Component-based Software Engineering (CBSE'2008), October 2008, volume 5282/2008 of LNCS. Springer Berlin / Heidelberg, Karlsruhe, Germany. October 2008. [ bib | http | Abstract ]
Java-based systems have evolved from stand-alone applications to multi-component to Service Oriented Programming (SOP) platforms. Each step of this evolution makes a set of Java vulnerabilities directly exploitable by malicious code: access to classes in multi-component platforms, and access to objects in SOP, is granted to it often with no control. This paper defines two taxonomies that characterize vulnerabilities in Java components: the vulnerability categories, and the goals of the attacks that are based on these vulnerabilities. The 'vulnerability category' taxonomy is based on three application types: stand-alone, class sharing, and SOP. Entries express the absence of proper security features at places they are required to build secure component-based systems. The 'goal' taxonomy is based on the distinction between undue access, which encompasses the traditional integrity and confidentiality security properties, and denial-of-service. It provides a matching between the vulnerability categories and their consequences. The exploitability of each vulnerability is validated through the development of a pair of malicious and vulnerable components. Experiments are conducted in the context of the OSGi Platform. Based on the vulnerability taxonomies, recommendations for writing hardened component code are issued.
[10] Frank Eichinger, Klemens Böhm, and Matthias Huber. Mining Edge-Weighted Call Graphs to Localise Software Bugs. In Proceedings of the 8th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Walter Daelemans, Bart Goethals, and Katharina Morik, editors, September 2008, volume 5211 of Lecture Notes in Computer Science, pages 333-348. Springer-Verlag Berlin Heidelberg, Germany. September 2008, Part I. [ bib | DOI | .pdf | Abstract ]
An important problem in software engineering is the automated discovery of non-crashing occasional bugs. In this work we address this problem and show that mining of weighted call graphs of program executions is a promising technique. We mine weighted graphs with a combination of structural and numerical techniques. More specifically, we propose a novel reduction technique for call graphs which introduces edge weights. Then we present an analysis technique for such weighted call graphs based on graph mining and on traditional feature selection schemes. The technique generalises previous graph mining approaches as it allows for an analysis of weights. Our evaluation shows that our approach finds bugs which previous approaches could not detect. Our technique also doubles the precision of finding bugs which existing techniques can already localise in principle.
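The reduction step is easy to picture (my sketch, not the authors' instrumentation): repeated calls between the same pair of methods are folded into one edge whose weight is the call frequency, and these weights become the numerical features handed to feature selection:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical call-graph reduction: a raw trace of caller->callee events
    // becomes one weighted edge per method pair.
    public class CallGraphReducer {
        public static Map<String, Integer> reduce(String[][] trace) {
            Map<String, Integer> edges = new LinkedHashMap<>();
            for (String[] call : trace)
                edges.merge(call[0] + "->" + call[1], 1, Integer::sum);
            return edges;
        }

        public static void main(String[] args) {
            String[][] trace = {
                {"main", "parse"}, {"parse", "readToken"}, {"parse", "readToken"},
                {"parse", "readToken"}, {"main", "report"}
            };
            System.out.println(reduce(trace));
            // prints {main->parse=1, parse->readToken=3, main->report=1}
        }
    }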
[11] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Reverse Engineering of Parametric Behavioural Service Performance Models from Black-Box Components. In MDD, SOA und IT-Management (MSI 2008), Ulrike Steffens, Jan Stefan Addicks, and Niels Streekmann, editors, September 2008, pages 57-71. GITO Verlag, Oldenburg. September 2008. [ bib | .pdf | Abstract ]
Integrating heterogeneous software systems becomes increasingly important. It requires combining existing components to form new applications. Such new applications are required to satisfy non-functional properties, such as performance. Design-time performance prediction of new applications built from existing components helps to compare design decisions before actually implementing them in full, avoiding costly prototype and glue code creation. But design-time performance prediction requires understanding and modeling of data flow and control flow across component boundaries, which is not given for most black-box components. If, for example, one component processes and forwards files to other components, this effect should be an explicit model parameter to correctly capture its performance impact. This impact should also be parameterised over data, but no reverse engineering approach exists to recover such dependencies. In this paper, we present an approach that allows reverse engineering of such behavioural models and is applicable to black-box components. By runtime monitoring and application of genetic programming, we recover functional dependencies in code, which are then expressed as parameterisation in the output model. We successfully validated our approach in a case study on a file sharing application, showing that all dependencies could correctly be reverse engineered from black-box components.
[12] Christof Momm and Christoph Rathfelder. Model-based Management of Web Service Compositions in Service-Oriented Architectures. In MDD, SOA und IT-Management (MSI 2008), Ulrike Steffens, Jan Stefan Addicks, and Niels Streekmann, editors, Oldenburg, Germany, September 24, 2008, pages 25-40. GITO-Verlag, Berlin, Germany. September 2008. [ bib | .pdf | Abstract ]
Web service compositions (WSC), as part of a service-oriented architecture (SOA), have to be managed to ensure compliance with guaranteed service levels. In this context, a high degree of automation is desired, which can be achieved by applying autonomic computing concepts. This paper particularly focuses on the autonomic management of semi-dynamic compositions. Here, for each included service, several variants are available that differ with regard to the service level they offer. Given this scenario, we first show how to instrument WSCs in order to allow the service level to be controlled by switching the employed service variant. Second, we show how the desired self-manageability can be designed and implemented by means of a WSC manageability infrastructure. The presented approach is based on widely accepted methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[13] Pierre Parrend and Stéphane Frénot. More vulnerabilities in the Java/OSGi platform: A focus on bundle interactions. Technical Report RR-6649, INRIA, September 2008. [ bib | http | Abstract ]
Extensible Component Platforms can discover and install code during runtime. Although this feature introduces flexibility, it also brings new security threats: malicious components can quite easily be installed and exploit the rich programming environment and interactions with other components to perform attacks against the system. One example of such environments is the Java/OSGi Platform, which is widespread in the industrial world. Attacks from one component against another cannot be prevented through conventional security mechanisms, since they exploit the lack of proper isolation between them: components often share classes and objects. This report intends to list the vulnerabilities that a component can contain, both from the literature and from our own experience. The Vulnerable Bundle catalog gathers this knowledge. It provides information related to the characteristics of the vulnerabilities, their consequences, and the security mechanisms that would help prevent their exploitation, as well as the implementation state of the proof-of-concept bundles that are developed to prove that the vulnerability is actually exploitable. The objective of vulnerability classification is of course to provide tools for identifying and preventing them. A first assessment is performed with existing tools, such as Java Permissions and FindBugs, a specific prototype we have developed, WBA (Weak Bundle Analysis), and manual code review.
[14] Lucia Kapova, Tomas Bures, and Petr Hnetynka. Software Engineering Research, Management and Applications, volume 150 of Studies in Computational Intelligence, chapter Preserving Intentions in SOA Business Process development, pages 59-72. Springer-Verlag Berlin Heidelberg, Prague, August 20-22 2008. [ bib | http ]
[15] Jens Happe. Predicting Software Performance in Symmetric Multi-core and Multiprocessor Environments. Dissertation, University of Oldenburg, Germany, August 2008. [ bib | .pdf | Abstract ]
With today's rise of multi-core processors, concurrency becomes a ubiquitous challenge in software development. Concurrency allows the improvement of software performance by exploiting available processor cores. Performance prediction methods have to reflect the influence of multiprocessing environments on software performance in order to help software architects to find potential performance problems during early development phases. In this thesis, we address the influence of the operating system scheduler on software performance in symmetric multiprocessing environments. We propose a performance modelling framework for operating system schedulers such as Windows and Linux. Furthermore, the influence of the middleware on software performance is addressed by a performance modelling approach to message-oriented middleware. A series of case studies demonstrates that both techniques reduce the prediction error to less than 5% to 10% in most cases.
[16] Frank Eichinger, Klemens Böhm, and Matthias Huber. Improved Software Fault Detection with Graph Mining. In Proceedings of the 6th International Workshop on Mining and Learning with Graphs (MLG) at ICML, Samuel Kaski, S.V.N. Vishwanathan, and Stefan Wrobel, editors, July 2008. Helsinki, Finland. [ bib | .pdf | Abstract ]
This work addresses the problem of discovering bugs in software development. We investigate the utilisation of call graphs of program executions and graph mining algorithms to approach this problem. We propose a novel reduction technique for call graphs which introduces edge weights. Then, we present an analysis technique for such weighted call graphs based on graph mining and on traditional feature selection. Our new approach finds bugs which could not be detected so far. With regard to bugs which can already be localised, our technique also doubles the precision of finding them.
[17] Christoph Rathfelder and Henning Groenda. iSOAMM: An independent SOA Maturity Model. In Proceedings of the 8th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS 2008), Oslo, Norway, June 4-6, 2008, volume 5053/2008 of Lecture Notes in Computer Science (LNCS), pages 1-15. Springer-Verlag, Berlin, Heidelberg. June 2008. [ bib | http | .pdf | Abstract ]
The implementation of an enterprise-wide Service Oriented Architecture (SOA) is a complex task. In most cases, evolutionary approaches are used to handle this complexity. Maturity models are a possibility to plan and control such an evolution, as they allow evaluating the current maturity and identifying current shortcomings. In order to support an SOA implementation, maturity models should also support the selection of the most adequate maturity level and the deduction of a roadmap to this level. Existing SOA maturity models provide only weak assistance with the selection of an adequate maturity level. Most of them are developed by vendors of SOA products and often used to promote their products. In this paper, we introduce our independent SOA Maturity Model (iSOAMM), which is independent of the technologies and products used. In addition to the impacts on IT systems, it reflects the implications on organizational structures and governance. Furthermore, the iSOAMM lists the challenges, benefits and risks associated with each maturity level. This enables enterprises to select the most adequate maturity level for them, which is not necessarily the highest one.
[18] Landry Chouambe, Benjamin Klatt, and Klaus Krogmann. Reverse Engineering Software-Models of Component-Based Systems. In 12th European Conference on Software Maintenance and Reengineering, Kostas Kontogiannis, Christos Tjortjis, and Andreas Winter, editors, April 1-4 2008, pages 93-102. IEEE Computer Society, Athens, Greece. April 1-4 2008. [ bib | .pdf | Abstract ]
An increasing number of software systems is developed using component technologies such as COM, CORBA, or EJB. Still, there is a lack of support to reverse engineer such systems. Existing approaches claim reverse engineering of components, but do not support composite components. Also, external dependencies such as required interfaces are not made explicit. Furthermore, relaxed component definitions are used, and obtained components are thus indistinguishable from modules or classes. We present an iterative reverse engineering approach that follows the widely used definition of components by Szyperski. It enables third-party reuse of components by explicitly stating their interfaces and supports composition of components. Additionally, components that are reverse engineered with the approach allow reasoning on properties of software architectures at the model level. For the approach, source code metrics are combined to recognize components. We discuss the selection of source code metrics and their interdependencies, which were explicitly taken into account. An implementation of the approach was successfully validated within four case studies. Additionally, a fifth case study shows the scalability of the approach for an industrial-size system.
[19] Christof Momm, Christoph Rathfelder, Ignacio Pérez Hallerbach, and Sebastian Abeck. Manageability Design for an Autonomic Management of Semi-Dynamic Web Service Compositions. In Proceedings of the Network Operations and Management Symposium (NOMS 2008), Salvador, Bahia, Brazil, April 7-11, 2008, pages 839-842. IEEE. April 2008. [ bib | DOI | .pdf | Abstract ]
Web service compositions (WSC), as part of a service-oriented architecture (SOA), have to be managed to ensure compliance with guaranteed service levels. In this context, a high degree of automation is desired, which can be achieved by applying autonomic computing concepts. This paper particularly focuses on the autonomic management of semi-dynamic compositions, where several variants are available for each included service that differ with regard to the service level they offer. Given this scenario, we first show how to instrument a WSC in order to allow controlling the service level by switching the employed service variant. Second, we show how the desired self-manageability can be designed and implemented by means of a WSC manageability infrastructure. The presented approach is based on widely accepted methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[20] Mircea Trifu. Using dataflow information for concern identification in object-oriented software systems. In Proceedings of the 12th European Conference on Software Maintenance and Reengineering, April 2008, pages 193-202. IEEE. April 2008. [ bib ]
[21] Pierre Parrend and Stéphane Frénot. Component-based access control: Secure software composition through static analysis. In Software Composition (SC'2008), March 2008, volume 4954/2008 of LNCS, pages 68-83. Springer Berlin / Heidelberg. March 2008. [ bib | http | Abstract ]
Extensible Component Platforms support the discovery, installation, starting, and uninstallation of components at runtime. Since they are often targeted at mobile, resource-constrained devices, they have both strong performance and security requirements. The current security model for Java systems, Permissions, is based on call stack analysis. It proves to be very time-consuming, which makes it difficult to use in production environments. We therefore define the Component-Based Access Control (CBAC) Security Model, which aims at emulating Java Permissions through static analysis at the installation phase of the components. CBAC is based on a fully declarative approach that makes it possible to tag arbitrary methods as sensitive. A formal model is defined to guarantee that a given component has sufficient access rights and that dependencies between components are taken into account. A first implementation of the model is provided for the OSGi Platform, using the ASM library for code analysis. Performance tests show that the cost of CBAC at install time is negligible, because it is executed together with digital signature verification, which is much more costly. Moreover, contrary to Java Permissions, the CBAC security model does not imply any runtime overhead.
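Since the paper names the ASM library for its code analysis, the install-time check can be pictured as a plain bytecode scan. The following Java fragment is a minimal sketch in the spirit of CBAC, not its actual implementation; the SensitiveCallScanner class and the hard-coded SENSITIVE policy set are illustrative stand-ins for the paper's declarative tagging of sensitive methods.

import org.objectweb.asm.*;
import java.util.*;

public class SensitiveCallScanner {
    // Hypothetical policy: "owner.method" pairs declared sensitive.
    private static final Set<String> SENSITIVE = Set.of(
            "java/lang/Runtime.exec",
            "java/lang/System.exit");

    // Returns the sensitive invocations found in one class file of a bundle;
    // an installer would reject the bundle if its declared access rights do
    // not cover the returned calls.
    public static List<String> scan(byte[] classFile) {
        List<String> found = new ArrayList<>();
        new ClassReader(classFile).accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int access, String name,
                    String desc, String sig, String[] exc) {
                return new MethodVisitor(Opcodes.ASM9) {
                    @Override
                    public void visitMethodInsn(int op, String owner,
                            String callee, String d, boolean itf) {
                        if (SENSITIVE.contains(owner + "." + callee)) {
                            found.add(owner + "." + callee);
                        }
                    }
                };
            }
        }, ClassReader.SKIP_DEBUG);
        return found;
    }
}

Because such a scan runs once at install time and walks each class file linearly, its cost can plausibly be hidden behind the much more expensive signature verification mentioned in the abstract.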
[22] Thomas Kappler, Heiko Koziolek, Klaus Krogmann, and Ralf H. Reussner. Towards Automatic Construction of Reusable Prediction Models for Component-Based Performance Engineering. In Software Engineering 2008, February 18-22 2008, volume 121 of Lecture Notes in Informatics, pages 140-154. Bonner Köllen Verlag, Munich, Germany. February 18-22 2008. [ bib | .pdf | Abstract ]
Performance predictions for software architectures can reveal performance bottlenecks and quantitatively support design decisions for different architectural alternatives. As software architects aim at reusing existing software components, their performance properties should be included into performance predictions without the need for manual modelling. However, most prediction approaches do not include automated support for modelling implemented components. Therefore, we propose a new reverse engineering approach, which generates Palladio performance models from Java code. In this paper, we focus on the static analysis of Java code, which we have implemented as an Eclipse plugin called Java2PCM. We evaluated our approach on a larger component-based software architecture, and show that a similar prediction accuracy can be achieved with generated models compared to completely manually specified ones.
[23] Rainer Böhme and Ralf Reussner. Validation of Predictions with Measurements. In Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter 3, pages 14-18. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter discusses ways to validate metrics and raises awareness for possible caveats if metrics are used in a social environment.
[24] Rainer Böhme and Ralf Reussner. On metrics and measurements. In Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter 2, pages 7-13. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
The following chapter attempts to define the notions of metric and measurement which underlie this book. It further elaborates on general properties of metrics and introduces useful terms and concepts from measurement theory, without being overly formal.
[25] Steffen Becker. Coupled Model Transformations. In WOSP '08: Proceedings of the 7th International Workshop on Software and performance, Princeton, NJ, USA, 2008, pages 103-114. ACM, New York, NY, USA. 2008. [ bib | DOI | Abstract ]
Model-driven performance prediction methods use abstract design models to predict the performance of the modelled system during early development stages. However, performance is an attribute of the running system and not of its model. The system contains many implementation details that are not part of its model but still affect the performance at run-time. Existing approaches neglect details of the implementation due to the abstraction underlying the design model. Completion components [26] deal with this problem; however, they have to be added manually to the prediction model. In this work, we assume that the system's implementation is generated by a chain of model transformations. In this scenario, the transformation rules determine the transformation result. By analysing these transformation rules, a second transformation can be derived which automatically adds details to the prediction model according to the encoded rules. We call this transformation a coupled transformation, as it is coupled to a corresponding model-to-code transformation. It uses the knowledge of the output of the model-to-code transformation to increase performance prediction accuracy. The introduced coupled transformations method is validated in a case study in which a parametrised transformation maps abstract component connectors to realisations in different RPC calls. In this study, the corresponding coupled transformation captures the RPC's details with a prediction error of less than 5%.
[26] Steffen Becker. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Quality of Service Modeling Language, pages 43-47. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter gives an overview over the Quality of Service Modeling Language (QML), a language which can be used to describe QoS offerings or needs of specified services.
[27] Steffen Becker. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Performance-Related Metrics in the ISO 9126 Standard, pages 204-206. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
ISO 9126 [243, 244, 245] is a standard which can be used to describe the quality of software systems. It is based on a quality model that is illustrated in part one of the standard [243]. This model distinguishes quality attributes into internal and external attributes. Internal metrics depend on knowledge of the internal details of the respective software. External metrics can be measured without knowing internal details. Furthermore, the quality model introduces characteristics and sub-characteristics, which are abstractions of the actual attributes. For example, Usability is an abstraction of Learnability, Understandability, and Operability, each of which in turn abstracts from the different attributes. The ISO 9126 standard has no characteristic Performance. The closest characteristic to our definition of performance is Efficiency. It is divided into two sub-characteristics: time behaviour and resource behaviour. Some people say this distinction is artificial, as time is also a (scarce) resource. Nevertheless, timing behaviour is separated in ISO 9126. The important attributes of Efficiency in ISO 9126 are described in the external metrics specification.
[28] Steffen Becker. Coupled Model Transformations for QoS Enabled Component-Based Software Design. PhD thesis, University of Oldenburg, Germany, 2008. [ bib ]
[29] Steffen Becker. Coupled Model Transformations for QoS Enabled Component-Based Software Design, volume 1 of The Karlsruhe Series on Software Design and Quality. Universitätsverlag Karlsruhe, 2008. [ bib ]
[30] Steffen Becker, Tobias Dencker, and Jens Happe. Model-Driven Generation of Performance Prototypes. In Performance Evaluation: Metrics, Models and Benchmarks (SIPEW 2008), 2008, volume 5119 of Lecture Notes in Computer Science, pages 79-98. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Early, model-based performance predictions help to understand the consequences of design decisions on the performance of the resulting system before the system's implementation becomes available. While this helps to reduce the costs of redesigning systems that do not meet their extra-functional requirements, performance prediction models have to abstract from the full complexity of modern hard- and software environments, potentially leading to imprecise predictions. As a solution, the construction and execution of prototypes on the target execution environment gives early insights into the behaviour of the system under realistic conditions. In the literature, several approaches exist to generate prototypes from models, which either generate code skeletons or require detailed models for the prototype. In this paper, we present an approach which aims at the automated generation of a performance prototype based solely on a design model with performance annotations. For the concrete realisation, we used the Palladio Component Model (PCM), which is a component-based architecture modelling language supporting early performance analyses. For a typical three-tier business application, the resulting Java EE code shows how the prototype can be used to evaluate the influence of complex parts of the execution environment like memory interactions or the operating system's scheduler.
[31] Steffen Becker, Mircea Trifu, and Ralf Reussner. Towards Supporting Evolution of Service Oriented Architectures through Quality Impact Prediction. In 1st International Workshop on Automated engineeRing of Autonomous and run-time evolving Systems (ARAMIS 2008), L'Aquila, Italy, 2008. [ bib ]
[32] Jakob Blomer, Fabian Brosig, Andreas Kreidler, Jens Küttel, Achim Kuwertz, Grischa Liebel, Daniel Popovic, Michael Stübs, Alexander M. Turek, Christian Vogel, Thomas Weinstein, and Thomas Wurth. Software Zertifizierung. Technical Report 4/2008, Universität Karlsruhe, Fakultät für Informatik, 2008. [ bib | Abstract ]
Systematic quality assurance is gaining importance in the software development industry as part of global competition. Especially on the way towards software industrialisation and engineering-style software development, end-to-end quality assurance is indispensable. Certifications offer the possibility of having compliance with certain standards and criteria checked and attested by independent third parties in order to demonstrate the quality of a product or development process. Certifications can refer to products and processes as well as to the training and knowledge of individual persons. Since certificates are issued by independent examination bodies, certificates and their verifiable statements are generally trusted considerably more than quality promises made by software vendors themselves. Companies that have their processes certified, for example according to CMMI, can thereby demonstrate their ability to complete projects successfully and with predictable quality. Besides presenting certificates as a differentiator from competitors, compliance with standards can also be mandated by legislation; certificates from high-security domains such as nuclear power plants are one example. The seminar was organised like a scientific conference: the submissions were reviewed in a two-stage peer-review process, in the first stage by fellow students and in the second stage by the supervisors. The articles were presented in several sessions on two conference days. The best contributions were honoured with best paper awards: these went to Fabian Brosig for his work on the Cost Benefit Analysis Method (CBAM), to Jakob Blomer for his work on the certification of software benchmarks, and to Grischa Liebel for his work on SWT, the Standard Widget Toolkit, who are hereby once again warmly congratulated on this outstanding achievement. Complementing the participants' presentations, an invited talk was given: Dr. Dirk Feuerhelm of 1&1 Internet AG kindly gave insights into his duties as head of software development in his talk "Softskills - Ist das objektorientiert oder modellgetrieben?".
[33] Franz Brosch, Thomas Goldschmidt, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Christoph Rathfelder, Ralf Reussner, and Johannes Stammel. Software-Industrialisierung. Interner Bericht, Universität Karlsruhe, Fakultät für Informatik, Institut für Programmstrukturen und Datenorganisation, Karlsruhe, 2008. [ bib | http | Abstract ]
The industrialisation of software development is currently a much-discussed topic. It is primarily about increasing efficiency by raising the degree of standardisation and automation and by an increased division of labour. This affects the architectures underlying software systems as well as the development processes; service-oriented architectures, for instance, are an example of increased standardisation within software systems. It has to be taken into account that the software industry differs from the classical manufacturing industries in that software is an immaterial product which can be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is poorly structured and does not meet elevated requirements regarding reproducibility or quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes division of labour, the introduction of development processes with predictable properties (costs, time required, ...) and, in consequence, the creation of products with guaranteeable properties. The range of seminar topics included, among others: software architectures, component-based software development, model-driven development, and the consideration of quality attributes in development processes. The seminar was organised like a scientific conference: the submissions were reviewed in a two-stage peer-review process, in the first stage by fellow students and in the second stage by the supervisors. The articles were presented in several sessions on two conference days. The best contribution was honoured with a best paper award; it went to Benjamin Klatt for his work "Software Extension Mechanisms", who is hereby once again warmly congratulated on this outstanding achievement. Complementing the participants' presentations, an invited talk was given: Florian Kaltner and Tobias Pohl from the IBM development laboratory kindly gave insights into the development of plugins for Eclipse as well as into the build environment of the firmware for the zSeries mainframe servers.
[34] Erik Burger. Metamodel Evolution in the Context of a MOF-Based Metamodeling Infrastructure. Diplomarbeit, Universität Karlsruhe (TH), 2008. [ bib | .pdf | Abstract ]
The evolution of software systems can produce incompatibilities with existing data and applications. For this reason, changes have to be well-planned, and developers should know the impact of changes on a software system. This also affects the branch of model-driven development, where changes occur as modifications of the metamodels that the system is based on. Models that are instantiated from an earlier metamodel version may not be valid instances of the new version of the metamodel. Also, changes in the interface definition may require adaptations to the modeling tools. In this thesis, the impact of metamodel changes is evaluated for the modeling standard Meta Object Facility (MOF) and the interface definition Java Metadata Interface (JMI), based on the Modeling Infrastructure (MOIN) project at SAP, which includes a MOF-based repository and implements the JMI standard. For the formalisation of changes to MOF-based metamodels, a Change Metamodel is introduced to describe the transformation of one version of a metamodel to another by means of modeling itself. The changes are then classified by their impact on the compatibility of existing model data and the generated JMI interfaces. The description techniques and change classifications presented in this thesis can be used to implement a mechanism that allows metamodel editors to estimate the impact of metamodel changes with the help of modeling tools that can adapt existing data semi-automatically.
[35] Irene Eusgeld, Jens Happe, Philipp Limbourg, Matthias Rohr, and Felix Salfner. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Performability, pages 245-254. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
Performability combines performance and reliability analysis in order to estimate the quality of service characteristics of a system in the presence of faults. This chapter provides an introduction to performability, discusses its relation to reliability and performance metrics, and presents common models used in performability analysis, such as Markov reward models or Stochastic Petri Nets.
[36] Stéphane Frénot, Yvan Royon, Pierre Parrend, and Denis Beras. Monitoring scheduling for home gateways. In IEEE/IFIP Network Operations and Management Symposium (NOMS), Salvador de Bahia, Brazil, 7-11 April 2008, 2008. [ bib | http | Abstract ]
In simple and monolithic systems such as our current home gateways, monitoring is often overlooked: the home user can only reboot the gateway when there is a problem. In next-generation home gateways, more services will be available (pay-per-view TV, games...) and different actors will provide them. When one service fails, it will be impossible to reboot the gateway without disturbing the other services. We propose a management framework that monitors remote gateways. The framework tests response times for various management activities on the gateway, and provides reference time/performance ratios. The values can be used to establish a management schedule that balances the rate at which queries can be performed with the resulting load that the query will induce locally on the gateway. This allows the manager to tune the ratio between the reactivity of monitoring and its intrusiveness on performance.
[37] Thomas Goldschmidt. Towards an incremental update approach for concrete textual syntaxes for UUID-based model repositories. In Proceedings of the 1st International Conference on Software Language Engineering (SLE), Dragan Gasevic, Ralf Lämmel, and Eric van Wyk, editors, 2008, volume 5452 of Lecture Notes in Computer Science, pages 168-177. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | Abstract ]
Textual concrete syntaxes for models are beneficial for many reasons. They foster usability and productivity because of their fast editing style, their usage of error markers, autocompletion and quick fixes. Several frameworks and tools for creating concrete textual syntaxes for models emerged from different communities during recent years. However, these approaches have failed to provide a general solution. Open issues are incremental parsing and model updating as well as partial and federated views. On the other hand, incremental parsing and the handling of abstract syntaxes as leading entities were solved within the compiler construction community many years ago. In this short paper we envision an approach for the mapping of concrete textual syntaxes that makes use of the incremental parsing techniques from the compiler construction world. Thus, we circumvent problems that occur when dealing with concrete textual syntaxes in a UUID-based environment.
[38] Thomas Goldschmidt, Steffen Becker, and Axel Uhl. Classification of Concrete Textual Syntax Mapping Approaches. In Proceedings of the 4th European Conference on Model Driven Architecture - Foundations and Applications, 2008, volume 5059 of Lecture Notes in Computer Science, pages 169-184. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Textual concrete syntaxes for models are beneficial for many reasons. They foster usability and productivity because of their fast editing style, their usage of error markers, autocompletion and quick fixes. Furthermore, they can easily be integrated into existing tools such as diff/merge or information interchange through e-mail, wikis or blogs. Several frameworks and tools from different communities for creating concrete textual syntaxes for models emerged during recent years. However, these approaches failed to provide a solution in general. Open issues are incremental parsing and model updating as well as partial and federated views. To determine the capabilities of existing approaches, we provide a classification schema, apply it to these approaches, and identify their deficiencies.
[39] Thomas Goldschmidt and Jens Kuebler. Towards Evaluating Maintainability Within Model-Driven Environments. In Software Engineering 2008, Workshop Modellgetriebene Softwarearchitektur - Evolution, Integration und Migration, 2008. [ bib | .pdf | Abstract ]
Model Driven Software Development (MDSD) has matured over the last few years and is now becoming an established technology. One advantage promoted by the MDSD community is improved maintainability during a system's evolution compared to conventional development approaches. Compared to code-based development, (meta-)models and transformations need to be handled differently when it comes to maintainability assessments. However, a comprehensive analysis of the impact of the model-driven development approach on the maintainability of a software system is still lacking. This paper presents work towards finding appropriate approaches and metrics for measuring the maintainability and evolution capabilities of artefacts within model-driven environments. We present our first steps and further ideas on how to tackle this problem.
[40] Thomas Goldschmidt, Ralf Reussner, and Jochen Winzen. A Case Study Evaluation of Maintainability and Performance of Persistency Techniques. In ICSE '08: Proceedings of the 30th international conference on Software engineering, Leipzig, Germany, 2008, pages 401-410. ACM, New York, NY, USA. 2008. [ bib | .pdf | Abstract ]
Efforts for software evolution exceed those for any other part of the software life cycle. Technological decisions have a major impact on maintainability, but are not well reflected by existing code- or architecture-based metrics. The way the persistency of object structures in relational databases is solved affects the maintainability of the overall system. Besides maintainability, other quality attributes of the software are of interest, in particular performance metrics. However, a systematic evaluation of the benefits and drawbacks of different persistency frameworks is lacking. In this paper we systematically evaluate the maintainability and performance of different technological approaches for this mapping. The paper presents a testbed and an evaluation process with specifically designed metrics to evaluate persistency techniques regarding their maintainability and performance. In the second part we present and discuss the results of the case study.
[41] Thomas Goldschmidt and Axel Uhl. Retainment Rules for Model Transformations. In Proceedings of the 1st International Workshop on Model Co-Evolution and Consistency Management, 2008. [ bib | Abstract ]
The goal of the workshop was to exchange ideas and experiences related to Model (Co-)evolution and Consistency Management (MCCM) in the context of Model-Driven Engineering (MDE). Contemporary MDE practices typically include the manipulation and transformation of a large and heterogeneous set of models. This heterogeneity exhibits itself in different guises ranging from notational differences to semantic content-wise variations. These differences need to be carefully managed in order to arrive at a consistent specification that is adaptable to change. This requires a dedicated activity in the development process and a rigourous adoption of techniques such as model differencing, model comparison, model refactoring, model (in)consistency management, model versioning, and model merging. The workshop invited submissions from both academia and industry on these topics, as well as experience reports on the effective management of models, metamodels, and model transformations. We selected ten high-quality contributions out of which we included two as best-papers in the workshop reader. As a result of the high number of participants and the nice mix of backgrounds we were able to debate lively over a number of pertinent questions that challenge our field.
[42] Thomas Goldschmidt and Guido Wachsmuth. Refinement transformation support for QVT Relational transformations. In Proceedings of the 3rd Workshop on Model Driven Software Engineering (MDSE 2008), 2008. [ bib | Abstract ]
Model transformations are a central concept in Model-Driven Engineering; they are defined in model transformation languages. This paper addresses QVT Relations, a high-level declarative model transformation language standardised by the Object Management Group. QVT Relations lacks support for default copy rules. Thus, transformation developers need to define copy rules explicitly. Particularly for refinement transformations, which copy large parts of a model, this is a tremendous task. In this paper, we propose generic patterns for copy rules in QVT Relations. Based on these patterns, we provide a higher-order transformation to generate copy rules for a given metamodel. Finally, we explore several ways to derive a refinement transformation from a generated copy transformation.
[43] Wolfgang Hürst and Philipp Merkle. One-handed mobile video browsing. In Proceedings of the 1st international conference on Designing interactive user experiences for TV and video, Silicon Valley, California, USA, 2008, UXTV '08, pages 169-178. ACM, New York, NY, USA. 2008. [ bib | DOI | http ]
[44] Jens Happe. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Analytical Performance Metrics, pages 207-218. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf ]
[45] Jens Happe, Holger Friedrich, Steffen Becker, and Ralf H. Reussner. A Pattern-Based Performance Completion for Message-Oriented Middleware. In Proceedings of the 7th International Workshop on Software and Performance (WOSP '08), Princeton, NJ, USA, 2008, pages 165-176. ACM, New York, NY, USA. 2008. [ bib | Abstract ]
Details about the underlying Message-oriented Middleware (MOM) are essential for accurate performance predictions of software systems using message-based communication. The MOM's configuration and usage strongly influence its throughput, resource utilisation and timing behaviour. Prediction models need to reflect these effects and allow software architects to evaluate the performance influence of a MOM configured for their needs. Performance completions [31, 32] provide the general concept to include low-level details of execution environments in abstract performance models. In this paper, we extend the Palladio Component Model (PCM) [4] by a performance completion for Message-oriented Middleware. With our extension to the model, software architects can specify and configure message-based communication using a language based on messaging patterns. For performance evaluation, a model-to-model transformation integrates the low-level details of a MOM into the high-level software architecture model. A case study based on the SPECjms2007 Benchmark [1] predicts the performance of message-based communication with an error of less than 20 percent.
[46] Sebastian Herold, Holger Klus, Yannick Welsch, Constanze Deiters, Andreas Rausch, Ralf Reussner, Klaus Krogmann, Heiko Koziolek, Raffaela Mirandola, Benjamin Hummel, Michael Meisinger, and Christian Pfaller. The Common Component Modeling Example, volume 5153 of Lecture Notes in Computer Science, chapter CoCoME - The Common Component Modeling Example, pages 16-53. Springer-Verlag Berlin Heidelberg, 2008. [ bib | http | Abstract ]
The example of use which was chosen as the Common Component Modeling Example (CoCoME), and to which the several methods presented in this book are applied, was designed according to the example described by Larman in [1]. The description of this example and its use cases in the current chapter shall be considered under the assumption that this information was delivered by a business company, as it could be in reality. Therefore the specified requirements are potentially incomplete or imprecise.
[47] Benjamin Klatt and Klaus Krogmann. Software Extension Mechanisms. In Proceedings of the Thirteenth International Workshop on Component-Oriented Programming (WCOP'08), Karlsruhe, Germany, Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, 2008, number 2008-12 in Interner Bereich Universität Karlsruhe (TH), pages 11-18. [ bib | .pdf | Abstract ]
Industrial software projects not only have to deal with the number of features in the software system; issues like quality, flexibility, reusability, extensibility, and developer and user acceptance are key factors these days. An architecture paradigm targeting these issues is extension mechanisms, which are, for example, used by component frameworks. The main contribution of this paper is to identify characteristics of software extension mechanisms derived from state-of-the-art software frameworks. These identified characteristics will help developers with selecting and creating extension mechanisms.
[48] Heiko Koziolek. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Goal, Question, Metric, pages 39-42. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter gives an overview over the Goal-Question-Metric (GQM) approach, a way to derive and select metrics for a particular task in a top-down and goal-oriented fashion.
[49] Heiko Koziolek. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Introduction to Performance Metrics, pages 199-203. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter defines simple performance metrics and gives an outlook over the remaining chapters of Part IV.
[50] Heiko Koziolek. Parameter Dependencies for Reusable Performance Specifications of Software Components. PhD thesis, University of Oldenburg, 2008. [ bib | .pdf | Abstract ]
The following introduction will motivate the need for a new modelling method for component-based performance engineering (Chapter 1.1) and then describe the specific problem tackled in this thesis in detail (Chapter 1.2). Afterwards, it will point out the deficits of existing solution approaches to this problem (Chapter 1.3), before it lists the scientific contributions of this thesis (Chapter 1.4). Finally, the introduction will sketch the experimental validation conducted for this thesis (Chapter 1.5).
[51] Heiko Koziolek. Parameter Dependencies for Reusable Performance Specifications of Software Components, volume 2 of The Karlsruhe Series on Software Design and Quality. Universitätsverlag Karlsruhe, 2008. [ bib ]
[52] Heiko Koziolek and Jens Happe. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Performance Metrics for Specific Domains, pages 233-240. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
Some performance metrics are specific for certain domains or are used differently under different circumstances. In the following, performance metrics for Internet-based systems and embedded systems will be described.
[53] Heiko Koziolek and Ralf Reussner. A Model Transformation from the Palladio Component Model to Layered Queueing Networks. In Performance Evaluation: Metrics, Models and Benchmarks, SIPEW 2008, 2008, volume 5119 of Lecture Notes in Computer Science, pages 58-78. Springer-Verlag Berlin Heidelberg. 2008. [ bib | .pdf | Abstract ]
For component-based performance engineering, software component developers individually create performance specifications of their components. Software architects compose these specifications to architectural models. This enables assessing the possible fulfilment of performance requirements without the need to purchase and deploy the component implementations. Many existing performance models do not support component-based performance engineering but offer efficient solvers. On the other hand, component-based performance engineering approaches often lack tool support. We present a model transformation combining the advanced component concepts of the Palladio Component Model (PCM) with the efficient performance solvers of Layered Queueing Networks (LQN). Joining the tool-set for PCM specifications with the tool-set for LQN solution is an important step to carry component-based performance engineering into industrial practice. We validate the correctness of the transformation by mapping the PCM model of a component-based architecture to an LQN and conduct performance predictions.
[54] Klaus Krogmann and Ralf H. Reussner. The Common Component Modeling Example, volume 5153 of Lecture Notes in Computer Science, chapter Palladio: Prediction of Performance Properties, pages 297-326. Springer-Verlag Berlin Heidelberg, 2008. [ bib | http | Abstract ]
Palladio is a component modelling approach with a focus on performance (i.e. response time, throughput, resource utilisation) analysis to enable early design-time evaluation of software architectures. It targets modelling business information systems. The Palladio approach includes a meta-model called Palladio Component Model for structural views, component behaviour specifications, resource environment, component allocation and the modelling of system usage and multiple analysis techniques ranging from process algebra analysis to discrete event simulation. Additionally, the Palladio approach is aligned with a development process model tailored for component-based software systems. Early design-time predictions avoid costly redesigns and reimplementation. Palladio enables software architects to analyse different architectural design alternatives supporting their design decisions with quantitative performance predictions, provided with the Palladio approach.
[55] Michael Kuperberg. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Markov Models, pages 48-55. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter gives a brief overview over Markov models, a useful formalism to analyse stochastic systems.
[56] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. ByCounter: Portable Runtime Counting of Bytecode Instructions and Method Invocations. In Proceedings of the 3rd International Workshop on Bytecode Semantics, Verification, Analysis and Transformation, Budapest, Hungary, 5th April 2008 (ETAPS 2008, 11th European Joint Conferences on Theory and Practice of Software), 2008. [ bib | .pdf | Abstract ]
For bytecode-based applications, runtime instruction counts can be used as a platform- independent application execution metric, and also can serve as the basis for bytecode-based performance prediction. However, different instruction types have different execution durations, so they must be counted separately, and method invocations should be identified and counted because of their substantial contribution to the total application performance. For Java bytecode, most JVMs and profilers do not provide such functionality at all, and existing bytecode analysis frameworks require expensive JVM instrumentation for instruction-level counting. In this paper, we present ByCounter, a lightweight approach for exact runtime counting of executed bytecode instructions and method invocations. ByCounter significantly reduces total counting costs by instrumenting only the application bytecode and not the JVM, and it can be used without modifications on any JVM. We evaluate the presented approach by successfully applying it to multiple Java applications on different JVMs, and discuss the runtime costs of applying ByCounter to these cases.
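One possible realisation of the counting idea can be sketched with the ASM bytecode library: the application's classes are rewritten so that each instruction additionally reports itself to a counter before executing, while the JVM stays untouched. This is a minimal illustration, not the actual ByCounter code; the CountRegistry class with its count/countInvocation methods is a hypothetical collector.

import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class CountingClassVisitor extends ClassVisitor {
    public CountingClassVisitor(ClassVisitor next) {
        super(Opcodes.ASM9, next);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String sig, String[] exceptions) {
        MethodVisitor mv = super.visitMethod(access, name, desc, sig, exceptions);
        return new MethodVisitor(Opcodes.ASM9, mv) {
            @Override
            public void visitInsn(int opcode) {
                // Before each no-operand instruction, emit a call that
                // increments the runtime counter for this opcode.
                // (A full counter also overrides visitVarInsn,
                // visitJumpInsn, and the other instruction callbacks.)
                super.visitLdcInsn(opcode);
                super.visitMethodInsn(Opcodes.INVOKESTATIC, "CountRegistry",
                        "count", "(I)V", false); // hypothetical collector
                super.visitInsn(opcode);
            }

            @Override
            public void visitMethodInsn(int opcode, String owner, String callee,
                                        String d, boolean itf) {
                // Method invocations are recorded separately, since their
                // durations contribute substantially to total performance.
                super.visitLdcInsn(owner + "." + callee + d);
                super.visitMethodInsn(Opcodes.INVOKESTATIC, "CountRegistry",
                        "countInvocation", "(Ljava/lang/String;)V", false);
                super.visitMethodInsn(opcode, owner, callee, d, itf);
            }
        };
    }
}

Because only application bytecode is rewritten, such instrumentation is portable across JVMs, which matches the abstract's claim that no JVM modification is required.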
[57] Volker Kuttruff, Mircea Trifu, and Peter Szulman. Von der Problemerkennung zur Problembehebung: 12 Jahre Softwaresanierung am FZI. In GI Lecture Notes in Informatics, 2008, volume 126, pages 35-50. GI, Köllen Verlag, Bonn. 2008. [ bib ]
[58] Anne Martens, Steffen Becker, Heiko Koziolek, and Ralf Reussner. An empirical investigation of the applicability of a component-based performance prediction method. In Proceedings of the 5th European Performance Engineering Workshop (EPEW'08), Palma de Mallorca, Spain, N. Thomas and C. Juiz, editors, 2008, volume 5261 of Lecture Notes in Computer Science, pages 17-31. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Component-based software performance engineering (CBSPE) methods shall enable software architects to assess the expected response times, throughputs, and resource utilization of their systems already during design. This avoids the violation of performance requirements. Existing approaches for CBSPE either lack tool support or rely on prototypical tools, which have only been applied by their authors. Therefore, the industrial applicability of these methods is unknown. To this end, we have conducted a controlled experiment involving 19 computer science students, who analysed the performance of two component-based designs using our Palladio performance prediction approach, as an example of a CBSPE method. Our study is the first of its type in this area and shall help to mature CBSPE to industrial applicability. In this paper, we report on results concerning the prediction accuracy achieved by the students and list several lessons learned, which are also relevant for methods other than Palladio.
[59] Anne Martens, Steffen Becker, Heiko Koziolek, and Ralf Reussner. An empirical investigation of the effort of creating reusable models for performance prediction. In Proceedings of the 11th International Symposium on Component-Based Software Engineering (CBSE'08), Karlsruhe, Germany, 2008, volume 5282 of Lecture Notes in Computer Science, pages 16-31. Springer-Verlag Berlin Heidelberg. 2008. [ bib | DOI | .pdf | Abstract ]
Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that the development of reusable models requires more effort, the actual higher amount of effort has not been quantified or analysed systematically yet. To study the effort, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems applying an established, monolithic method (Software Performance Engineering) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified, if they are reused at least once.
[60] Anne Martens and Heiko Koziolek. Performance-oriented design space exploration. In Proceedings of the Thirteenth International Workshop on Component-Oriented Programming (WCOP'08), Karlsruhe, Germany, 2008, Interner Bericht / Universität Karlsruhe, Fakultät für Informatik ; 2008,12, pages 25-32. [ bib | .pdf | Abstract ]
Architectural models of component-based software systems are evaluated for functional and/or extra-functional properties (e.g. by doing performance predictions). However, after getting the results of the evaluations and recognising that requirements are not met, most existing approaches leave the software architect alone with finding new alternatives to her current design (e.g. by changing the selection of components, the configuration of components and containers, or the sizing). We propose a novel approach to automatically generate and assess performance-improving design alternatives for component-based software systems based on performance analyses of the software architecture. First, the design space spanned by different design options (e.g. available components, configuration options) is systematically explored using metaheuristic search techniques. Second, new architecture candidates are generated based on detecting anti-patterns in the initial architecture. Using this approach, the design of a high-quality component-based software system is eased for the software architect: she needs less manual effort to find good design alternatives, and good design alternatives can be uncovered that the software architect herself would have overlooked.
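As a rough illustration of the first step, the exploration loop can be reduced to a simple metaheuristic skeleton. The Java sketch below is generic hill climbing under assumed interfaces: evaluate stands in for a performance prediction run and mutate for changing one design option; neither is part of the paper, which moreover targets richer search techniques than a single climber.

import java.util.function.Function;
import java.util.function.UnaryOperator;

public class DesignSpaceSearch<C> {
    private final Function<C, Double> evaluate; // e.g. predicted response time
    private final UnaryOperator<C> mutate;      // e.g. swap a component, resize a server

    public DesignSpaceSearch(Function<C, Double> evaluate, UnaryOperator<C> mutate) {
        this.evaluate = evaluate;
        this.mutate = mutate;
    }

    // Keep a mutated candidate whenever it predicts a better (lower) value.
    public C explore(C initial, int iterations) {
        C best = initial;
        double bestScore = evaluate.apply(best);
        for (int i = 0; i < iterations; i++) {
            C candidate = mutate.apply(best);
            double score = evaluate.apply(candidate);
            if (score < bestScore) {
                best = candidate;
                bestScore = score;
            }
        }
        return best;
    }
}

In the approach itself, the evaluation function would be a performance analysis of the architecture candidate, and the anti-pattern detection of the second step supplies mutations that are more informed than random changes.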
[61] Philipp Merkle. Einhändiges, Finger-basiertes Video-Browsing auf mobilen Handcomputern. Bachelor thesis, Albert-Ludwigs-Universität Freiburg, 2008. [ bib | .pdf ]
[62] Christof Momm, Thomas Detsch, and Sebastian Abeck. Model-driven instrumentation for monitoring the quality of web service compositions. In EDOC 2008 Workshop on Advances in Quality of Service Management (AQuSerM 08), 2008. IEEE Computer Society Press, Munich, Germany. 2008. [ bib | Abstract ]
Supporting business services through Web service compositions (WSC) as part of service-oriented architectures involves various runtime monitoring requirements. The implementation of these requirements results in additional development activities. Due to the lack of standards for treating such WSC monitoring concerns, a corresponding development approach has to deal with a variety of specific technologies. This paper therefore introduces a platform-independent approach to the instrumentation of WSC and the generation of an effective monitoring infrastructure based on the principles of model-driven software development (MDSD).
[63] Christof Momm, Thomas Detsch, Michael Gebhart, and Sebastian Abeck. Model-driven development of monitored web service compositions. In 15th HP-SUA Workshop, 2008. Marrakesh, Morocco. [ bib ]
[64] Pierre Parrend and Stéphane Frénot. Vérification automatique pour l'exécution sécurisée de composants java. Numéro spécial de la revue L'Objet - Composants, Services et Aspects : techniques et outils pour la vérification, 2008. [ bib ]
[65] Ralf Reussner and Viktoria Firus. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Basic and Dependent Metrics, pages 37-38. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf ]
[66] Ralf Reussner and Viktoria Firus. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Introduction to Overlapping Attributes, pages 243-244. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf ]
[67] Ralf H. Reussner. Gefördert ab dem Ersten Semester: die Fakultät für Informatik am KIT bietet Stipendien für Begabte aus einkommensschwachen Familien, 2008. [ bib ]
[68] Antonino Sabetta and Heiko Koziolek. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Performance Metrics in Software Design Models, pages 219-225. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf ]
[69] Antonino Sabetta and Heiko Koziolek. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Measuring Performance Metrics: Techniques and Tools, pages 226-232. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter describes techniques for characterising workloads, which is a prerequisite for obtaining performance measurements in realistic settings, and presents an overview on performance measurement tools such as benchmarks, monitors, and load drivers.
[70] Wolfgang Weck, Ralf H. Reussner, and Clemens Szyperski. Component-Oriented Programming: Report on the 12th Workshop WCOP at ECOOP 2007. Technical Report Volume 4906/2008, University of Karlsruhe, 2008. [ bib | DOI | http | Abstract ]
This report covers the twelfth Workshop on Component-Oriented Programming (WCOP). WCOP has been affiliated with ECOOP since its inception in 1996. The report summarizes the contributions made by authors of accepted position papers as well as those made by all attendees of the workshop sessions.
[71] Steffen Becker, Frantisek Plasil, and Ralf Reussner, editors. Quality of Software Architectures. Models and Architectures, 4th International Conference on the Quality of Software-Architectures, QoSA 2008, Karlsruhe, Germany, October 14-17, 2008. Proceedings, volume 5281 of Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2008. [ bib ]
[72] Irene Eusgeld, Felix C. Freiling, and Ralf Reussner, editors. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2008. [ bib | DOI | http ]
[73] Andreas Rausch, Ralf Reussner, Raffaela Mirandola, and Frantisek Plasil, editors. The Common Component Modeling Example: Comparing Software Component Models, volume 5153 of Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2008. [ bib | http ]


Publications 2007

[1] Yvan Royon, Pierre Parrend, Stéphane Frénot, Serafeim Papastefano, Humberto Abdelnur, and Dirk Van de Poel. Multi-service, multi-protocol management for residential gateways. In BroadBand Europe, December 2007. [ bib | http ]
[2] Christoph Rathfelder and Henning Groenda. Geschäftsprozessorientierte Kategorisierung von SOA. In 2. Workshop Bewertungsaspekte serviceorientierter Architekturen, Karlsruhe, Germany, November 13, 2007, pages 11-22. SHAKER Verlag. November 2007. [ bib | .pdf | Abstract ]
Service-oriented architectures (SOAs) promise better support for business processes. However, there are different interpretations of what a service-oriented architecture (SOA) is. Since improved business process support is one of the most frequent arguments for SOAs, it makes sense to categorise the different SOA variants according to the process support they enable. Existing categorisation approaches are in many cases limited to particular technologies or standards and touch on the provided process support only marginally. This article presents such a business-process-oriented categorisation of SOAs.
[3] Pierre Parrend, Samuel Galice, Stéphane Frénot, and Stéphane Ubeda. Identity-based cryptosystems for enhanced deployment of OSGi bundles. In IARIA International Conference on Emerging Security Information, Systemsand Technologies (SecurWare), October 2007. [ bib | http ]
[4] Klaus Krogmann. Reengineering of Software Component Models to Enable Architectural Quality of Service Predictions. In Proceedings of the 12th International Workshop on Component Oriented Programming (WCOP 2007), Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 31 2007, volume 2007-13 of Interne Berichte, pages 23-29. Universität Karlsruhe (TH), Karlsruhe, Germany. July 31 2007. [ bib | http | Abstract ]
In this paper, we propose to relate model-based adaptation approaches with the Windows Workflow Foundation (WF) implementation platform, through a simple case study. We successively introduce a client/server system with mismatching components implemented in WF, our formal approach to work mismatch cases out, and the resulting WF adaptor. We end with some conclusions and a list of open issues.
[5] Heiko Koziolek, Steffen Becker, and Jens Happe. Predicting the Performance of Component-based Software Architectures with different Usage Profiles. In Proc. 3rd International Conference on the Quality of Software Architectures (QoSA'07), July 2007, volume 4880 of Lecture Notes in Computer Science, pages 145-163. Springer-Verlag Berlin Heidelberg. July 2007. [ bib | .pdf | Abstract ]
Performance predictions aim at increasing the quality of software architectures during design time. To enable such predictions, specifications of the performance properties of individual components within the architecture are required. However, the response times of a component might depend on its configuration in a specific setting and the data sent to or retrieved from it. Many existing prediction approaches for component-based systems neglect these influences. This paper introduces extensions to a performance specification language for components, the Palladio Component Model, to model these influences. The model enables predicting response times of different architectural alternatives. A case study on a component-based architecture for a web portal validates the approach and shows that it is capable of supporting a design decision in this scenario.
[6] Michael Kuperberg and Steffen Becker. Predicting Software Component Performance: On the Relevance of Parameters for Benchmarking Bytecode and APIs. In Proceedings of the 12th International Workshop on Component Oriented Programming (WCOP 2007), Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 2007. [ bib | .pdf | Abstract ]
Performance prediction of component-based software systems is needed for systematic evaluation of design decisions, but also when an application's execution system is changed. Often, the entire application cannot be benchmarked in advance on its new execution system due to high costs or because some required services cannot be provided there. In this case, performance of bytecode instructions or other atomic building blocks of components can be used for performance prediction. However, the performance of bytecode instructions depends not only on the execution system they use, but also on their parameters, which are not considered by most existing research. In this paper, we demonstrate that parameters cannot be ignored when considering Java bytecode. Consequently, we outline a suitable benchmarking approach and the accompanying challenges.
[7] Christof Momm, Christian Mayerl, Christoph Rathfelder, and Sebastian Abeck. A Manageability Infrastructure for the Monitoring of Web Service. In Proceedings of the 14th Annual Workshop of HP Software University Association, H. G. Hegering, H. Reiser, M. Schiffers, and Th. Nebe, editors, Leibniz Computing Center and Munich Network Management Team, Germany, July 8-11, 2007, pages 103-114. Infonomies Consulting, Stuttgart, Germany. July 2007. [ bib | .pdf | Abstract ]
The management of web service compositions, where the employed atomic web services as well as the compositions themselves are offered on the basis of Service Level Agreements (SLAs), implies new requirements for the management infrastructure. In this paper we introduce the conceptual design and implementation of a standards-based and flexible manageability infrastructure offering comprehensive management information for an SLA-driven management of web service compositions. Our solution is thereby based on well-understood methodologies and standards from the area of application and web service management, in particular the WBEM standards.
[8] Pierre Parrend, Stéphane Frénot, and Sebastian Hoehn. Privacy-aware service integration. In Second IEEE International Workshop on Services Integration in Pervasive Environments (SIPE'2007), July 2007. [ bib | http | Abstract ]
Privacy mechanisms exist for monolithic systems. However, pervasive environments that gather user data to support advanced services provide little control over the data an individual releases. This is a strong inhibitor for the development of pervasive systems, since most users do not accept that their personal information is sent out into the wild and potentially passed on to third-party systems. We therefore propose a framework to support user control over the data made available to service providers in the context of an OSGi-based Extensible Service System. A formal privacy model is defined, and service and policy descriptions are deduced from it. Technical system requirements to support these policies are identified. Since guaranteeing privacy inside the system is of little help if any malicious entity can break into it, a security architecture for OSGi-based Extensible Service Systems is also defined.
[9] Lucia Kapova and Petr Hnetynka. Model-driven Development of Service Oriented Architectures. In Proceedings of the 16th Annual Conference of Doctoral Students - WDS 2007, June 5-8 2007. MATFYZPRESS, Prague. June 5-8 2007. [ bib | http ]
[10] Pierre Parrend and Stéphane Frénot. Java components vulnerabilities - an experimental classification targeted at the OSGi platform. Research Report RR-6231, INRIA, June 2007. [ bib | http | Abstract ]
The OSGi Platform finds growing interest in two different application domains: embedded systems and application servers. However, the security properties of this platform are hardly studied, which is likely to hinder its use in production systems. This is all the more important since the dynamic aspect of OSGi-based applications, which can be extended at runtime, makes them vulnerable to malicious code injection. We therefore perform a systematic audit of the OSGi platform so as to build a vulnerability catalog that intends to reference OSGi vulnerabilities originating in the Core Specification and in behaviors related to the use of the Java language. Standard Services are not considered. To support this audit, a semi-formal vulnerability pattern is defined, which makes it possible to uniquely characterize the fundamental properties of each vulnerability, to include a verbose description in the pattern, to reference known security protections, and to track the implementation status of the proof-of-concept OSGi bundles that exploit the vulnerability. Based on the analysis of the catalog, a robust OSGi platform is built, and recommendations are made to enhance the OSGi Specifications.
[11] Pierre Parrend and Stéphane Frénot. Supporting the secure deployment of OSGi bundles. In First IEEE WoWMoM Workshop on Adaptive and DependAble Mission- and bUsiness-critical mobile Systems, Helsinki, Finland, June 2007. [ bib | http | Abstract ]
The OSGi platform is a lightweight management layer over a Java virtual machine that makes runtime extensibility and multi-application support possible in mobile and constrained environments. This powerful capability opens a particular attack vector against mobile platforms: the installation of malicious OSGi bundles. The first countermeasure is the digital signature of the bundles. We developed a tool suite that supports the signature, the publication and the validation of bundles in an OSGi framework. Our tools support the publication of bundles onto a remote bundle repository as well as the validation of the signature according to the OSGi R4 specifications. A comparison of existing validation mechanisms shows that our security layer is the only one that is compliant with the specification.
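As a minimal illustration of the load-time validation step described above (a sketch of the general technique, not the authors' tool suite; the class and method names are hypothetical), the following Java snippet checks that every non-metadata entry of a bundle jar carries at least one code signer, using only the standard java.util.jar API. A real OSGi R4 validator would additionally verify the signers' certificate chains against a trust store and enforce the specification's signing rules.

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class BundleSignatureCheck {
        // Returns true if every non-metadata entry of the jar is signed.
        // Each entry must be read to its end first: getCodeSigners() only
        // yields a result after the entry has been fully verified, and a
        // tampered entry causes the read to throw a SecurityException.
        public static boolean allEntriesSigned(String path) throws IOException {
            try (JarFile jar = new JarFile(path, true)) { // true = verify
                byte[] buf = new byte[8192];
                Enumeration<JarEntry> entries = jar.entries();
                while (entries.hasMoreElements()) {
                    JarEntry e = entries.nextElement();
                    if (e.isDirectory() || e.getName().startsWith("META-INF/")) {
                        continue; // manifest and signature files are exempt
                    }
                    try (InputStream in = jar.getInputStream(e)) {
                        while (in.read(buf) != -1) { /* consume fully */ }
                    }
                    if (e.getCodeSigners() == null) {
                        return false; // unsigned entry found
                    }
                }
                return true;
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(allEntriesSigned(args[0]) ? "signed" : "unsigned");
        }
    }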
[12] Klaus Krogmann. Reengineering von Software-Komponenten zur Vorhersage von Dienstgüte-Eigenschaften. In WSR2007, Rainer Gimnich and Andreas Winter, editors, May 2007, number 1/2007 in Mainzer Informatik-Berichte. Bad Honnef. [ bib | .pdf | Abstract ]
The use of components is an accepted principle in software development. In this context, software components are mostly regarded as black boxes whose internals are hidden from the component user. However, numerous architecture analysis methods, in particular those for predicting non-functional properties, require information about component internals (e.g., the number of executed loop iterations or calls to external services) that many component models do not provide. Researchers currently working on the analysis of non-functional properties of component-based software architectures therefore face the question of how to obtain this knowledge about component internals. Existing software components must be analysed in order to reconstruct the required information about the inside of the components in such a way that it can be used by subsequent analysis methods for non-functional properties. Existing approaches concentrate on the detection of components or, for example, the reengineering of sequence diagrams of given components, but do not focus on the information required by prediction methods for non-functional properties. The contribution of this paper is a close examination of the information that the reengineering of component internals must deliver in order to be useful for the prediction of the non-functional property performance (in the sense of response time). To this end, the Palladio Component Model [?] is presented, which is prepared for exactly this information. Finally, a reengineering approach suited to obtaining the required information is presented.
[13] Klaus Krogmann. Reengineering von Software-Komponenten zur Vorhersage von Dienstgüte-Eigenschaften. Softwaretechnik-Trends, 27(2):44-45, May 2007. [ bib | .pdf | Abstract ]
The use of components is an accepted principle in software development. In this context, software components are mostly regarded as black boxes whose internals are hidden from the component user. However, numerous architecture analysis methods, in particular those for predicting non-functional properties, require information about component internals (e.g., the number of executed loop iterations or calls to external services) that many component models do not provide. Researchers currently working on the analysis of non-functional properties of component-based software architectures therefore face the question of how to obtain this knowledge about component internals. Existing software components must be analysed in order to reconstruct the required information about the inside of the components in such a way that it can be used by subsequent analysis methods for non-functional properties. Existing approaches concentrate on the detection of components or, for example, the reengineering of sequence diagrams of given components, but do not focus on the information required by prediction methods for non-functional properties. The contribution of this paper is a close examination of the information that the reengineering of component internals must deliver in order to be useful for the prediction of the non-functional property performance (in the sense of response time). To this end, the Palladio Component Model [?] is presented, which is prepared for exactly this information. Finally, a reengineering approach suited to obtaining the required information is presented.
[14] Christoph Rathfelder. Management in serviceorientierten Architekturen: Eine Managementinfrastruktur für die Überwachung komponierter Webservices. VDM Verlag Dr. Müller, Saarbrücken, Germany, April 2007. [ bib ]
[15] Klaus Krogmann and Steffen Becker. A Case Study on Model-Driven and Conventional Software Development: The Palladio Editor. In Software Engineering 2007 - Beiträge zu den Workshops, Wolf-Gideon Bleek, Henning Schwentner, and Heinz Züllighoven, editors, March 27 2007, volume 106 of Lecture Notes in Informatics, pages 169-176. Series of the Gesellschaft für Informatik (GI). March 27 2007. [ bib | .pdf | Abstract ]
The actual benefits of model-driven approaches compared to code-centric development have not been systematically investigated. This paper presents a case study in which functionally identical software was developed once in a code-centric, conventional style and once using Eclipse-based model-driven development tools. In our specific case, the model-driven approach could be carried out in 11% of the time of the conventional approach, while simultaneously improving code quality.
[16] Steffen Becker, Heiko Koziolek, and Ralf H. Reussner. Model-based Performance Prediction with the Palladio Component Model. In WOSP '07: Proceedings of the 6th International Workshop on Software and Performance, Buenos Aires, Argentina, February 5-8 2007, pages 54-65. ACM, New York, NY, USA. February 5-8 2007. [ bib | DOI | .pdf | Abstract ]
One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independently of their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influence factors like the hardware platform or the usage profile into account. In our approach, we use the Palladio Component Model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. In this paper, we present our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy can be sufficient to support the evaluation of architectural design decisions.
[17] Andreas Rentschler. Integration eines übergeordneten Entwurfswerkzeugs für eingebettete HW/SW-Systeme. Pre-diploma thesis, University of Karlsruhe, Germany, February 2007. [ bib | .pdf ]
[18] Steffen Becker, Tobias Dencker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Martin Krogmann, Michael Kuperberg, Ralf Reussner, Martin Sygo, and Nikola Veber. Software-Entwicklung mit Eclipse. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany, 2007. Interner Bericht. [ bib | http | Abstract ]
Developing software with Eclipse is nowadays one of the standard tasks of a software developer. The articles in this technical report deal with the extensive capabilities of the Eclipse framework, which are made possible not least by its numerous plugin-based extension mechanisms. This technical report resulted from a proseminar held in the winter semester 2006/2007.
[19] Steffen Becker, Thomas Goldschmidt, Henning Groenda, Jens Happe, Henning Jacobs, Christian Janz, Konrad Jünemann, Benjamin Klatt, Christopher Köker, and Heiko Koziolek. Transformationen in der modellgetriebenen Software-Entwicklung. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2007. Interner Bericht. [ bib | http | Abstract ]
Software architectures can be described by models. They are restricted neither to one description language nor to a particular domain. In the course of the efforts towards model-driven development, trends can be observed both towards standardized description languages such as UML and towards the introduction of domain-specific languages (DSLs). Transformations can then be applied to these formalized models, yielding either a further model ("model-to-model") or a textual representation ("model-to-text"). Transformations thereby encapsulate repeatedly applicable design knowledge ("patterns") in parameterized templates. Languages such as QVT can be used to define transformations. With AndroMDA and openArchitectureWare, tools exist that support developers in executing transformations.
[20] Steffen Becker, Thomas Goldschmidt, Boris Gruschko, and Heiko Koziolek. A Process Model and Classification Scheme for Semi-Automatic Meta-Model Evolution. In Proc. 1st Workshop MDD, SOA und IT-Management (MSI'07), 2007, pages 35-46. GiTO-Verlag. 2007. [ bib | .pdf | Abstract ]
Model Driven Software Development (MDSD) has matured over the last few years and is now becoming an established technology. As a consequence, dealing with evolving meta-models and the necessary migration activities for instances of these meta-models is becoming increasingly important. Approaches from database schema migration tackle a similar problem, but cannot be adapted easily to MDSD. This paper presents work towards a solution in the model-driven domain. Firstly, we introduce a process model which defines the necessary steps to migrate model instances upon an evolving meta-model. Secondly, we have created an initial classification of meta-model changes in EMF/Ecore utilised by our process model.
[21] Steffen Becker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Michael Kuperberg, Ralf Reussner, Sebastian Reichelt, Erik Burger, Igor Goussev, and Dimitar Hodzhev. Software-Komponentenmodelle. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2007. Interner Bericht. [ bib | http | Abstract ]
In the world of component-based software development, component models are used, among other things, to build software systems with predictable properties. They range from research models to industrial models. Depending on the goals of a model, different aspects of software are mapped into a component model. This technical report provides an overview of the software component models available today.
[22] C. Emig, K. Krutz, S. Link, C. Momm, and S. Abeck. Model-driven development of SOA services. C&M Research Report, Universität Karlsruhe (TH), 2007. [ bib | Abstract ]
Service-oriented architectures (SOA) will form the basis of future information systems. Basic web services are being assembled into composite web services in order to directly support business processes. As some basic web services can be used in several composite web services, different business processes are affected if, for example, a web service is unavailable or if its signature changes. Yet the range of such a change is often ambiguous due to a missing overall SOA service model pointing out the influence of services on business processes. In this paper we present a SOA service model defined as a UML-based metamodel and its integration into a model-driven service development approach. In contrast to existing approaches, we explicitly address deployment issues.
[23] T. Goldschmidt, J. Winzen, and R. Reussner. Evaluation of Maintainability of Model-driven Persistency Techniques. In IEEE CSMR 07 - Workshop on Model-Driven Software Evolution (MoDSE2007), 2007, pages 17-24. [ bib | .pdf | Abstract ]
Although the original OMG Model-Driven Architecture approach is not concerned with software evolution, model-driven techniques may be good candidates to ease software evolution. However, a systematic evaluation of the benefits and drawbacks of model-driven approaches compared to other approaches is lacking. Besides maintainability, other quality attributes of the software are of interest, in particular performance metrics. One specific area of software evolution where model-driven approaches are established is the generation of adapters to persist modern object-oriented business models with legacy software and databases. This paper presents a testbed and an evaluation process with specifically designed metrics to evaluate model-driven techniques regarding their maintainability and performance against established persistency frameworks.
[24] Henning Groenda. Entwicklung und Nutzung von Scheduling-Modellen für die Performance-Vorhersage komponentenbasierter Software-Architekturen. Master's thesis, Universität Karlsruhe (TH), 2007. [ bib ]
[25] Jens Happe. Towards a Model of Fair and Unfair Semaphores in MoDeST. In Proceedings of the 6th Workshop on Process Algebra and Stochastically Timed Activities, 2007, pages 51-55. [ bib | .pdf | Abstract ]
Synchronisation and communication of concurrent processes can have a strong influence on their performance, e.g. throughput and response time. The selection policy of waiting processes is usually not described in performance prediction methods such as stochastic process algebras and stochastic Petri nets, but plays a major role for the response time of real software systems. In this paper, we demonstrate how different selection policies of Java semaphores can be modelled using the specification language MoDeST. In a case study, we first compare the performance of Java's fair and unfair semaphores. Based on the results, we create an initial behavioural model of semaphores in MoDeST. Finally, a comparison of measurements and predictions shows that the operating system's scheduler strongly influences performance and thus has to be modelled as well.
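The selection policies compared in this paper can be reproduced directly with Java's java.util.concurrent API, whose Semaphore constructor takes a fairness flag. The following self-contained sketch (illustrative only; the class name and thread counts are chosen for this example, not taken from the paper) lets two threads compete for one permit under both policies:

    import java.util.concurrent.Semaphore;

    public class SemaphoreFairnessDemo {
        // Two threads repeatedly acquire and release the semaphore. With
        // fair = true, waiting threads obtain the permit in FIFO order;
        // with fair = false, a releasing thread may immediately re-acquire
        // the permit ("barging"), which changes the observable response
        // times, as the paper's measurements discuss.
        static void contend(Semaphore s, String label) throws InterruptedException {
            Runnable worker = () -> {
                for (int i = 0; i < 3; i++) {
                    try {
                        s.acquire();
                        System.out.println(label + ": permit held by "
                                + Thread.currentThread().getName());
                        s.release();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            Thread a = new Thread(worker, "A");
            Thread b = new Thread(worker, "B");
            a.start(); b.start();
            a.join(); b.join();
        }

        public static void main(String[] args) throws InterruptedException {
            contend(new Semaphore(1, true), "fair");    // FIFO selection policy
            contend(new Semaphore(1, false), "unfair"); // barging allowed
        }
    }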
[26] Anne Martens. Empirical Validation of the Model-driven Performance Prediction Approach Palladio. Master's thesis, Carl-von-Ossietzky Universität Oldenburg, 2007. One version with appendix, one short version under the '.pdf' links. [ bib | .pdf | .pdf | Abstract ]
To estimate the consequences of design decisions is a crucial element of an engineering discipline. Model-based performance prediction approaches target the estimation of a system's performance at design time. Next to accuracy, the approaches also need to be validated for their applicability to be usable in practice. The applicability of the model-based performance prediction approach Palladio had never been validated before. Previous case studies validating Palladio were concerned with the accuracy of the predictions in comparison to measurements. In this thesis, I empirically validated the applicability of Palladio and, for comparison, of the well-known performance prediction approach SPE. While Palladio has the notion of a component, which leads to reusable prediction models, SPE makes no use of any componentisation of the system to be analysed. For the empirical validation, I conducted an empirical study with 19 computer science students. The study was designed as a controlled experiment to achieve a high validity of the results. The results showed that both approaches were applicable, although both have specific problems. Furthermore, it was found that the duration of conducting a prediction using Palladio was significantly higher than the duration using SPE; however, the influence of potential reuse of the Palladio models was excluded by the experiment design.
[27] Christian Mayerl, Kai Moritz Hüner, Jens-Uwe Gaspar, Christof Momm, and Sebastian Abeck. Definition of metric dependencies for monitoring the impact of quality of services on quality of processes. In 2nd IEEE / IFIP International Workshop on Business-Driven IT Management (BDIM 2007), 2007. Munich, Germany. [ bib ]
[28] Christof Momm, Robert Malec, and Sebastian Abeck. Towards a model-driven development of monitored processes. In 8. Internationale Tagung Wirtschaftsinformatik (WI2007), 2007. Karlsruhe, Germany. [ bib | Abstract ]
An integrated management of business processes demands a strictly process-oriented development of the supporting IT. Process-orientation is especially promoted by Service-Oriented Architectures (SOA), where loosely coupled business services are composed into executable processes. In this paper we present a model-driven methodology for a top-down development of a process-oriented IT support based on a SOA. In contrast to existing approaches, we also include the monitoring required for business process controlling and introduce metamodels for the specification of process performance indicators in conjunction with the necessary monitoring. Furthermore, we show how these models are transformed into executable process definitions extended by the required monitoring activities.
[29] Christoph Rathfelder. Eine Managementinfrastruktur für die Überwachung komponierter Webservices. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, 2007. [ bib | .pdf ]
[30] Richard Rhinelander. Components have no interfaces! In Proceedings of the 12th International Workshop on Component Oriented Programming (WCOP 2007), Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, 2007, volume 2007-13 of Interne Berichte. Universität Karlsruhe, Fakultät für Informatik, Karlsruhe, Germany. 2007. [ bib | http ]
[31] Ralf H. Reussner, Steffen Becker, Heiko Koziolek, Jens Happe, Michael Kuperberg, and Klaus Krogmann. The Palladio Component Model. Interner Bericht 2007-21, Universität Karlsruhe (TH), 2007. October 2007. [ bib | .pdf ]
[32] SPEC. SPECjms2007 - First industry-standard benchmark for enterprise messaging servers (JMS 1.1). Standard Performance Evaluation Corporation, 2007. SPECtacular Performance Award. [ bib | http ]
[33] Steffen Becker, Carlos Canal, Nikolay Diakov, Juan Manuel Murillo, Pascal Poizat, and Massimo Tivoli, editors. Proceedings of the Third International Workshop on Coordination and Adaptation Techniques for Software Entities (WCAT 2006), Nantes, France, July 2006, volume 189 of Electronic Notes in Theoretical Computer Science. 2007. [ bib | http ]


Publications 2006

[1] Lucia Kapova. SOFA as platform for SOA applications. In IRTGW 2006 workshop, November 2006. GITO-Verlag, Schloss Dagstuhl, Germany. November 2006. [ bib ]
[2] Pierre Parrend and Stéphane Frénot. A security analysis for home gateway architectures. In International Conference on Cryptography, Coding & Information Security, CCIS 2006, November 24-26, Venice, Italy, November 2006. [ bib | http | Abstract ]
Providing Services at Home has become, over the last few years, a very dynamic and promising technological domain. It is likely to enable wide dissemination of secure and automated living environments. We propose a methodology for identifying threats to Services at Home delivery systems, as well as a threat analysis of a multi-provider Home Gateway architecture. This methodology is based on a dichotomous positive/preventive study of the target system: it aims at identifying both what the system must do and what it must not do. This approach completes existing methods with a synthetic view of potential security flaws, thus enabling suitable measures to be taken. Security implications of the evolution of a given system become easier to deal with. A prototype has been built based on the conclusions of this analysis.
[3] Steffen Becker, Jens Happe, and Heiko Koziolek. Putting Components into Context: Supporting QoS-Predictions with an explicit Context Model. In Proc. 11th International Workshop on Component Oriented Programming (WCOP'06), Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 2006, pages 1-6. [ bib | .pdf | Abstract ]
The evaluation of Quality of Service (QoS) attributes in early development stages of a software product is an active research area. For component-based systems, this yields many challenges, since a component can be deployed and used by third parties in various environments, which influence the functional and extra-functional properties of a component. Current component models do not reflect these environmental dependencies sufficiently. In this position statement, we motivate an explicit context model for software components. A context model exists for each single component and contains its connections, its containment, the allocation to hardware and software resources, the usage profile, and the perceived functional and extra-functional properties in the actual environment.
[4] Heiko Koziolek, Jens Happe, and Steffen Becker. Parameter Dependent Performance Specification of Software Components. In Proc. 2nd Int. Conf. on the Quality of Software Architectures (QoSA'06), Christine Hofmeister, Ivica Crnkovic, Ralf H. Reussner, and Steffen Becker, editors, July 2006, volume 4214 of Lecture Notes in Computer Science, pages 163-179. Springer-Verlag Berlin Heidelberg. July 2006. [ bib | .pdf | Abstract ]
Performance predictions based on design documents aim at improving the quality of software architectures. In component-based architectures, it is difficult to specify the performance of individual components, because it depends on the deployment context of a component, which may be unknown to its developers. The way components are used influences the perceived performance, but most performance prediction approaches neglect this influence. In this paper, we present a specification notation based on annotated UML diagrams to explicitly model the influence of parameters on the performance of a software component. The UML specifications are transformed into a stochastic model that allows the prediction of response times as distribution functions. Furthermore, we report on a case study performed on an online store. The results indicate that more accurate predictions could be obtained with the newly introduced specification and that the method was able to support a design decision on the architectural level in our scenario.
[5] Jochen Anderer, Rainer Bloch, Thomas Mohaupt, Rainer Neumann, Alexa Schumacher, Olaf Seng, Frank Simon, Adrian Trifu, and Mircea Trifu. Methoden und Werkzeuge zur Sicherung der inneren Qualität bei der Evolution objektorientierter Softwaresysteme. Technical Report 1-6-6/06, FZI Forschungszentrum Informatik, June 2006. [ bib ]
[6] Pierre Parrend and Stéphane Frénot. Secure component deployment in the OSGi release 4 platform. Technical Report RT-0323, INRIA, June 2006. [ bib | http | Abstract ]
Recent years have seen a dramatic increase in the use of component platforms, not only in classical application servers, but also more and more in the domain of embedded systems. The OSGi(tm) platform is one of these platforms dedicated to lightweight execution environments, and one of the most prominent. However, new platforms also imply new security flaws, and a lack of both knowledge and tools for protecting the exposed systems. This technical report aims at fostering the understanding of security mechanisms in component deployment. It focuses on securing the deployment of components. It presents the cryptographic mechanisms necessary for signing OSGi(tm) bundles, as well as the detailed process of bundle signature and validation. We also present the SFelix platform, which is a secure extension to the Felix OSGi(tm) framework implementation. It includes our implementation of the bundle signature process, as specified by the OSGi(tm) Release 4 Security Layer. Moreover, a tool for signing and publishing bundles, SFelix JarSigner, has been developed to conveniently integrate bundle signature into the bundle deployment process.
[7] Pierre Parrend, Yvan Royon, and Noha Ibrahim. Service-oriented distributed communities in residential environments. In 1st IEEE International Workshop on Services Integration in Pervasive Environments, Lyon, France, June 2006. [ bib | http ]
[8] Klaus Krogmann. Entwicklung und Transformation eines EMF-Modells des Palladio Komponenten-Meta-Modells. Master's thesis, University of Oldenburg, Germany, May 2006. [ bib | .pdf | Abstract ]
Model-driven development [20] promises the ability to automatically generate compilable source code from abstract software models, given for instance in a notation such as the Unified Modeling Language. Changes to the software model could thus result in new program versions in very little time, and vice versa. By increasing the degree of automation in the generation of source code, errors could be minimized. Through the separation of software model and generated source code, platform independence of the software model is also expected to be achievable. Overall, this is intended to increase the efficiency of software development processes. The possibilities of model-driven development, also known under the acronym MDA (Model Driven Architecture), stand and fall with the power of the tools employed. The more programming effort is taken over by tools, the faster new versions can be developed. The strengths of MDA are seen above all in the permanent synchronization between software model and source code. The abstraction of source code in the form of a software model always remains consistent with its realization as source code and vice versa; access to the less complex abstraction of the source code thus remains permanently available. The starting point for model-driven development (in the forward direction) is a domain model that describes the model elements of the application domain. Since it does not describe instances of models of the application domain, but rather models of valid model instances, domain models are meta-models.
[9] Pierre Parrend and Stéphane Frénot. Dependability for component systems deployment. Poster, first EuroSys Conference, April 2006. [ bib | http | Abstract ]
Operating systems and server platforms tend to be increasingly organized as sets of components, which enables greater flexibility as well as easier system upgrades. Even though such architectures evolve more easily than monolithic ones, they are also more difficult to secure: bringing new components into these kinds of systems also means not always being able to assert component reliability and innocuousness. We conduct experiments on component system dependability validation using the OSGi platform in the frame of the IST Muse Project for wide-band Service at Home delivery. OSGi provides an execution and life cycle management environment for embedded applications in Java [OSGi05]. Deployment is the first step of the component management life cycle after development is completed. It is made up of component publishing, discovery, dependency resolution, downloading, installation, configuration, and starting [Hall99]. Dependability [Avizienis00] of a system is the conjunction of several properties: availability, reliability (continuity of correct service), safety (absence of catastrophic consequences), confidentiality, integrity, and maintainability. Dependability for component deployment can thus be defined as 'load and install the right component at the right time'. The right component is one that the user knows not to be malicious and which provides the desired service. Since users do not know each component a priori, two criteria can be applied in order to check that a component is not malicious: either the component is trusted, or it is possible to assert that the code cannot execute dangerous actions (or a combination of both). The first step towards a secure component system is to ensure that deployed components come from a trusted provider. This is done through cryptographic signature. Two different technologies exist: signature validation through a Public Key Certificate (X.509) path, often used in closed systems (where a Certification Authority provides trustworthy certificates), or through PGP and publicly available key servers, often used for open operating systems based on components (also named modules or packages). RPM and Gentoo Portage notably use this approach. The second step is to ensure safe execution of installed components. The extreme case is Java sandboxing, where all actions that could possibly be harmful are blocked. This enables untrusted applets to be executed safely. In most cases, however, it is necessary to provide access to the file system or to the network, but not to the whole system. This is achieved through execution permissions. The Spin OS, based on the Modula-3 language, and the VINO OS use this mechanism. OSGi also provides permissions for controlling component interoperability, which are used together with Java permissions. Our work targets the design and development of a dependable OSGi framework. Since, surprisingly enough, little work has been done on this subject, it is necessary to provide a means of validating OSGi components (named bundles) at load time. The OSGi specifications propose to sign bundles through X.509 signature certificates, which are included in the archive. This permits validating the integrity of the bundle and authenticating the signer(s). However, confidentiality is not possible, because the bundle signing specifications assume that one must be able to use a bundle without taking care of the signature.
This is not a problem in open source projects, where the code is available anyhow, but it constitutes a severe restriction in closed systems, enterprise systems, or Service at Home delivery. We propose two solutions: the first is to put the encrypted component in the signature file. This implies only minor changes to the OSGi specifications and prevents unauthorized users from installing the bundle. The other requires the existence of one (or several) centralized repository(ies). It is then possible to connect the client and the server through a secure communication protocol (such as HTTPS, SSH, or over Virtual Private Networks). In this configuration, the OSGi security specifications are too heavy. The second aspect of bundle validation at load time is to ensure that the permissions necessary for correct execution do not go against existing policies, and that the code does not contain security leaks. Guaranteeing that permissions are valid in the Java world is normally done at runtime and can cause misbehavior of the program. We propose a new permission validation protocol that enables permission checking before bundle download, thus preventing the performance downfall due to loading unsuitable code: first, a bundle description file containing the policy parameters required for its correct execution is downloaded by the client. If this policy is compatible with the actual client security policy, the bundle itself is downloaded. Then, the coherence of the code and the description file is checked. This can be extended with Proof Carrying Code (PCC) to guarantee code harmlessness. Realizing experiments on dependable component systems on the OSGi platform proves to be highly valuable both for the general analysis of middleware platforms and for the study of OSGi itself, whose specifications seem to need to adapt to real network systems, and not only to a closed world. This ongoing work is still to be completed by a performance analysis of the different solutions. Being specified as a stand-alone platform with some component life cycle management facilities (validation, installation, start, ...) but without any consideration of how these components are downloaded, the OSGi framework still seems to have good evolution potential. [OSGi05] OSGi Service Platform, Core Specification Release 4, OSGi Alliance, 2005. [Hall99] R.S. Hall, D. Heimbigner, & A.L. Wolf, A Cooperative Approach to Support Software Deployment Using the Software Dock, ICSE 1999. [Avizienis00] A. Avizienis, J.C. Laprie & B. Randell, Fundamental Concepts of Dependability, Technical Report LAAS (Toulouse, France), 2000.
[10] Heiko Koziolek and Viktoria Firus. Parametric Performance Contracts: Non-Markovian Loop Modelling and an Experimental Evaluation. In Proc. of the 5th Int. Workshop on Formal Foundations of Embedded Software and Component-Based Software Architectures (FESCA'06), Juliana Kuester-Filipe, Iman H. Poernomo, and Ralf H. Reussner, editors, March 2006, volume 176(2) of ENTCS, pages 69-87. Elsevier Science Inc. March 2006. [ bib | .pdf | Abstract ]
Even with today's hardware improvements, performance problems are still common in many software systems. An approach to tackle this problem for component-based software architectures is to predict the performance during early development stages by combining performance specifications of prefabricated components. Many existing methods in the area of component-based performance prediction neglect several influence factors on the performance of a component. In this paper, we present a method to calculate the performance of component services while including influences of external services and different usages. We use stochastic regular expressions with non-Markovian loop iterations to model the abstract control flow of a software component and probability mass functions to specify the time consumption of internal and external services in a fine-grained way. An experimental evaluation is reported comparing results of the approach with measurements on a component-based webserver. The evaluation yields that, using measured data as inputs, our approach can predict the mean response time of a service with less than 2 percent deviation from measurements taken when executing the service in our scenarios.
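The probability mass functions mentioned in the abstract admit a compact worked example (a sketch of the underlying arithmetic under assumed service times, not the authors' tooling; class name and pmf values are hypothetical): if two services run in sequence and their execution times are independent discrete random variables, the pmf of the total response time is the convolution of the two pmfs.

    import java.util.HashMap;
    import java.util.Map;

    public class PmfConvolution {
        // Convolution of two probability mass functions over integer time
        // values: each pair of outcomes (ta, tb) contributes probability
        // pa * pb to the total time ta + tb.
        static Map<Integer, Double> convolve(Map<Integer, Double> a,
                                             Map<Integer, Double> b) {
            Map<Integer, Double> out = new HashMap<>();
            a.forEach((ta, pa) ->
                b.forEach((tb, pb) ->
                    out.merge(ta + tb, pa * pb, Double::sum)));
            return out;
        }

        public static void main(String[] args) {
            // Hypothetical service-time pmfs in milliseconds.
            Map<Integer, Double> service1 = Map.of(10, 0.5, 20, 0.5);
            Map<Integer, Double> service2 = Map.of(5, 0.8, 50, 0.2);
            // Total-time pmf: {15=0.4, 25=0.4, 60=0.1, 70=0.1}
            // (map iteration order may vary).
            System.out.println(convolve(service1, service2));
        }
    }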
[11] Ralf H. Reussner and Wilhelm Hasselbring. Handbuch der Software-Architektur. dPunkt.verlag Heidelberg, 1st edition, March 2006. [ bib ]
[12] Steffen Becker, Antonio Brogi, Ian Gorton, Sven Overhage, Alexander Romanovsky, and Massimo Tivoli. Towards an Engineering Approach to Component Adaptation. In Architecting Systems with Trustworthy Components, 2006, volume 3938 of Lecture Notes in Computer Science, pages 193-215. Springer-Verlag Berlin Heidelberg. 2006. [ bib | Abstract ]
Component adaptation needs to be taken into account when developing trustworthy systems, where the properties of component assemblies have to be reliably obtained from the properties of its constituent components. Thus, a more systematic approach to component adaptation is required when building trustworthy systems. In this paper, we illustrate how (design and architectural) patterns can be used to achieve component adaptation and thus serve as the basis for such an approach. The paper proposes an adaptation model which is built upon a classification of component mismatches, and identifies a number of patterns to be used for eliminating them. We conclude by outlining an engineering approach to component adaptation that relies on the use of patterns and provides additional support for the development of trustworthy component-based systems.
[13] Steffen Becker, Carlos Canal, Nikolay Diakov, Juan Manuel Murillo, Pascal Poizat, and Massimo Tivoli. Coordination and Adaptation Techniques: Bridging the Gap Between Design and Implementation. In Object-Oriented Technology, ECOOP 2006 Workshop Reader, ECOOP 2006 Workshops, Nantes, France, July 3-7, 2006, Final Reports, Mario Südholt and Charles Consel, editors, 2006, volume 4379 of Lecture Notes in Computer Science, pages 72-86. Springer-Verlag Berlin Heidelberg. 2006. [ bib ]
[14] Steffen Becker, Aleksander Dikanski, Nils Drechsel, Aboubakr Achraf El Ghazi, Jens Happe, Ihssane El-Oudghiri, Heiko Koziolek, Michael Kuperberg, Andreas Rentschler, Ralf H. Reussner, Roman Sinawski, Matthias Thoma, and Marko Willsch. Modellgetriebene Software-Entwicklung - Architekturen, Muster und Eclipse-basierte MDA. Technical report, Universität Karlsruhe (TH), 2006. [ bib | http | Abstract ]
Model-driven software development has become a topic of general interest for the software industry in recent years, in particular under buzzwords such as MDA and MDD. A trend can be observed away from code-centric software development and towards the (architectural) model as the centre of software development. Model-driven software development promises a continuous, automated synchronization of software models at the most diverse levels. This goes hand in hand with potentially shorter development cycles and higher productivity. It is no longer primarily pure source code that is developed; instead, models and transformations, as a higher level of abstraction, take over the role of the development language for software products. Meanwhile, an evolution of tools for model-driven development can be observed, which are intended to enable an additional gain in productivity and efficiency. While the tools around the turn of the millennium were still severely limited in their power, because the transformation languages possessed only limited expressiveness and the available tools offered only little integration of model-driven development processes, today a clear advance can be felt with the Eclipse-based tools around EMF. The Eclipse platform unites the most diverse aspects of model-driven development in the form of plugins: modelling tools for the creation of software architectures, frameworks for software models, creation and editing of transformations, execution of transformations, and development of source code. The seminar title contains a series of buzzwords: "MDA, architectures, patterns, Eclipse". Under the umbrella of MDA, relationships between these buzzwords arise, which are briefly sketched in the following. Software architectures represent a general form of model for software. They are restricted neither to one description language nor to a particular domain. In the course of the efforts towards model-driven development, trends towards standard description languages such as UML, but also the introduction of domain-specific languages (DSLs), can be observed here. Transformations can then be applied to these further formalized descriptions of software, producing either a further model ("model-to-model") or a textual representation ("model-to-text"). In both cases patterns play an important role. Transformations encapsulate, in a certain sense, repeatedly applicable design knowledge ("patterns") in parameterizable templates. Finally, Eclipse represents a free platform that has recently been offering increasing support for model-driven development. These efforts also include the "Eclipse Modeling Project" announced in May 2006, which as a top-level project aims at the evolution and dissemination of model-driven development technologies in Eclipse. The seminar was organized like a scientific conference: the submissions were reviewed in a peer-to-peer process (before review by the supervisor), and the "articles" were presented in several sessions on two "conference days". There were best paper awards and an invited guest speaker, Achim Baier of itemis AG & Co KG, who kindly gave an insightful look into model-driven development projects in practice.
The best paper awards were given to Mr. El-Ghazi and Mr. Rentschler, who are hereby once again warmly congratulated on this outstanding achievement.
[15] Steffen Becker, Lars Grunske, Raffaela Mirandola, and Sven Overhage. Performance Prediction of Component-Based Systems: A Survey from an Engineering Perspective. In Architecting Systems with Trustworthy Components, Ralf Reussner, Judith Stafford, and Clemens Szyperski, editors, 2006, volume 3938 of Lecture Notes in Computer Science, pages 169-192. Springer-Verlag Berlin Heidelberg. 2006. [ bib | Abstract ]
More and more complex embedded systems use component-based development. Non-functional properties of component models are used to predict the performance of the final system. We analysed the thread properties of components in a network-processor-based system. The proposed method is based on the thread properties and provides a quantified performance of each component, so that the performance of the final system can be predicted at composition time. The experiments show that the difference between the theoretical and the simulation results is less than 10%.
[16] Steffen Becker, Wilhelm Hasselbring, Alexandra Paul, Marko Boskovic, Heiko Koziolek, Jan Ploski, Abhishek Dhama, Henrik Lipskoch, Matthias Rohr, Daniel Winteler, Simon Giesecke, Roland Meyer, Mani Swaminathan, Jens Happe, Margarete Muhle, and Timo Warns. Trustworthy software systems: a discussion of basic concepts and terminology. SIGSOFT Softw. Eng. Notes, 31(6):1-18, 2006, ACM, New York, NY, USA. [ bib | DOI | .pdf | Abstract ]
Basic concepts and terminology for trustworthy software systems are discussed. Our discussion of definitions for terms in the domain of trustworthy software systems is based on former achievements in dependable, trustworthy and survivable systems. We base our discussion on the established literature and on approved standards. These concepts are discussed in the context of our graduate school TrustSoft on trustworthy software systems. In TrustSoft, we consider trustworthiness of software systems as determined by correctness, safety, quality of service (performance, reliability, availability), security, and privacy. Particular means to achieve trustworthiness of component-based software systems, as investigated in TrustSoft, are formal verification, quality prediction and certification; complemented by fault diagnosis and fault tolerance for increased robustness.
[17] Steffen Becker and Ralf Reussner. The Impact of Software Component Adaptation on Quality of Service Properties. L'objet, 12(1):105-125, 2006, RSTI. [ bib | Abstract ]
Component adapters are used to bridge interoperability problems between the required interface of a component and the provided interface of another component. As bridging functional mismatches is frequently required, the use of adapters is unavoidable. In these cases an impact on the Quality of Service resulting from the adaptation is often undesired. Nevertheless, some adapters are deployed to change the Quality of Service on purpose when the interoperability problem results from mismatching Quality of Service. This emphasises the need for adequate prediction models for the impact of component adaptation on Quality of Service characteristics. We present research on the impact of adaptation on the Quality of Service and focus on unresolved issues hindering effective predictions nowadays.
[18] Erik Burger. Query Infrastructure and OCL within the SAP Project “Modeling Infrastructure”. Studienarbeit, Universität Karlsruhe (TH), 2006. [ bib | .pdf ]
[19] P. Freudenstein, W. Juling, L. Liu, F. Majer, A. Maurer, C. Momm, and D. Ried. Architektur für ein universitätsweit integriertes Informations- und Dienstmanagement. In INFORMATIK 2006, 2006, volume P-93 of Lecture Notes in Informatics, pages 50-54. Springer, Dresden. 2006. [ bib ]
[20] Thomas Goldschmidt. Grammar Based Code Transformation for the Model-Driven Architecture. Master's thesis, Hochschule Furtwangen University, Germany, 2006. [ bib | Abstract ]
Model-driven code generation has been investigated in traditional and object-oriented design paradigms; significant progress has been made. It offers many advantages including the rapid development of high quality code. Errors are reduced and the consistency between the design and the code is retained, in comparison with a purely manual approach. Here, a model-driven code generation approach based on graph transformations for aspect-oriented development is proposed. The approach has two main transformation activities. The first activity transforms a visual (graphical) model of the design into a formal, text-based notation that can be readily processed. The graphical model is created by the software designer and uses a UML profile for aspect-oriented software (i.e., FDAF) to represent aspects and their components. XML is the target notation for this step; the transformation uses the XML meta-model to ensure that the output complies with the language. The second activity transforms the XML model into AspectJ source code. The transformation uses the AspectJ meta-model to ensure the output complies with the language. The transformations from the extended UML model to XML and from XML to AspectJ code are fully automated. The transformation algorithms are based on graph transformations; tool support has been developed. Key technical issues in the approach are discussed, including performance, the amount of code generated, correctness, and adaptability, in addition to a comparison of the proposal with existing alternative approaches. The approach has been validated on three example systems: a banking system, classroom scheduling system, and an insurance system. The banking system example is presented in the paper.
[21] Jens Happe, Heiko Koziolek, and Ralf H. Reussner. Parametric Performance Contracts for Software Components with Concurrent Behaviour. In Proceedings of the 3rd International Workshop on Formal Aspects of Component Software (FACS), Frank S. de Boer and Vladimir Mencl, editors, 2006, volume 182 of Electronic Notes in Theoretical Computer Science, pages 91-106. [ bib | .pdf | Abstract ]
Performance prediction methods for component-based software systems aim at supporting design decisions of software architects during early development stages. With the increased availability of multicore processors, possible performance gains by distributing threads and processes across multiple cores should be predictable by these methods. Many existing prediction approaches model concurrent behaviour insufficiently and yield inaccurate results due to hard underlying assumptions. In this paper, we present a formal performance prediction approach for component-based systems, which is parameterisable for the number of CPUs or CPU cores. It is able to predict the response time of component services for generally distributed execution times. An initial, simple case study shows that this approach can accurately predict response times of multithreaded software components in specific cases. However, it is limited if threads change the CPU during their execution, if the effect of processor cache thrashing is present, and if the memory bus is heavily used.
[22] Wilhelm Hasselbring and Ralf H. Reussner. Toward Trustworthy Software Systems. IEEE Computer, 39(4):91-92, 2006. [ bib | .pdf ]
[23] Dirk Heuzeroth, Uwe Assmann, Mircea Trifu, and Volker Kuttruff. The COMPOST, COMPASS, Inject/J and RECODER tool suite for invasive software composition: Invasive composition with COMPASS aspect-oriented connectors. In Generative and Transformational Techniques in Software Engineering (GTTSE), International Summer School. Revised Papers, 2006, volume 4143 of Lecture Notes in Computer Science, pages 357-377. Springer. 2006. [ bib ]
[24] Lucia Kapova. Support for interactive experiments and demonstrations: Conception of Virtual Laboratory (in the frame of CNA VirtualLab project). Master's thesis, Technical University of Kosice, Kosice, 2006. [ bib ]
[25] Lucia Kapova and Frantisek Jakab. Progressive virtual laboratory solution. In The 7th international conference on Virtual University, 2006. E-Academia Slovaka, Bratislava. 2006. [ bib | Abstract ]
This paper deals with the remote laboratory VirtualLAB (VL, http://vl.cnl.tuke.sk) solution developed at the Computer Network Laboratory (Technical University of Košice). The aim of the project is to design and implement the conception of VirtualLAB, which provides remote access to specified laboratory network devices. This project will be used in the educational process as a part of CNAP (Cisco Networking Academy Program). Using this tool, students can access and configure network devices like routers and switches remotely from any place via the Internet.
[26] Lucia Kapova, Frantisek Jakab, Vladimir Andoga, and Michal Nagy. Virtual Laboratory: Component Based Architecture Implementation Experience. In 7th International Scientific Conference on Electronic Computers and Informatics, 2006. Herlany. [ bib | Abstract ]
In response to an increased demand for graduate-level Information Assurance (IA) education, the SPS/MSCIT (School for Professional Studies/Masters of Science in Computer Information Technology) at Regis University, Denver, developed a series of IA courses in late fall of 2003. In addition to the technical, policy and management course content, modern ethical decision-making techniques were integrated into the classroom courses. The courses were developed with the intent to deliver them to online students via the WebCT platform. The course development work was divided into three phases: phase 1, selection of appropriate ethical practices and decision-making techniques from content experts, professional organizations, and standards bodies; phase 2, design, development and construction of supporting instructional labs associated with the standards, using the MSCIT virtual laboratory at Regis University; and finally phase 3, implementation of the supporting classroom and online Vlabs.
[27] Heiko Koziolek, Viktoria Firus, Steffen Becker, and Ralf H. Reussner. Handbuch der Software-Architektur, chapter Bewertungstechniken für die Performanz, pages 311-326. dPunkt.verlag Heidelberg, 2006. [ bib ]
[28] Heiko Koziolek and Jens Happe. A QoS Driven Development Process Model for Component-Based Software Systems. In Proc. 9th Int. Symposium on Component-Based Software Engineering (CBSE'06), Ian Gorton, George T. Heineman, Ivica Crnkovic, Heinz W. Schmidt, Judith A. Stafford, Clemens A. Szyperski, and Kurt C. Wallnau, editors, 2006, volume 4063 of Lecture Notes in Computer Science, pages 336-343. Springer-Verlag Berlin Heidelberg. 2006. [ bib | .pdf | Abstract ]
Non-functional specifications of software components are considered an important asset in constructing dependable systems, since they enable early Quality of Service (QoS) evaluations. Several approaches for the QoS analysis of component-based software architectures have been introduced. However, most of these approaches do not consider the integration into the development process sufficiently. For example, they envision a pure bottom-up development or neglect that system architects do not have complete information for QoS analyses at their disposal. We extend an existing component-based development process model by Cheesman and Daniels to explicitly include early, model-based QoS analyses. Besides the system architect, we describe further involved roles. Using the performance domain as an example, we analyse what information these roles can provide to construct a performance model of a software architecture.
[29] Michael Kuperberg. Influence of Execution Environments on the Performance of Software Components. In Proceedings of the 2nd International Research Training Groups Workshop, Dagstuhl, Germany, November 6 - 8, 2006, Jens Happe, Heiko Koziolek, and Matthias Rohr, editors, 2006, volume 3 of Reihe Trustworthy Software Systems. [ bib | http | Abstract ]
Performance prediction of component-based software systems is needed for systematic evaluation of design decisions, but also when an application's execution system is changed. Often, the entire application cannot be benchmarked in advance on its new execution system due to high costs or because some required services cannot be provided there. In this case, performance of bytecode instructions or other atomic building blocks of components can be used for performance prediction. However, the performance of bytecode instructions depends not only on the execution system they use, but also on their parameters, which are not considered by most existing research. In this paper, we demonstrate that parameters cannot be ignored when considering Java bytecode. Consequently, we outline a suitable benchmarking approach and the accompanying challenges.
[30] Christof Momm, Christoph Rathfelder, and Sebastian Abeck. Towards a Manageability Infrastructure for a Management of Process-Based Service Compositions. C&M Research Report, Cooperation & Management, 2006. [ bib | .pdf | Abstract ]
The management of process-oriented service compositions within a dynamic environment, where the employed core services are offered on service marketplaces and dynamically included into the composition on the basis of Service Level Agreements (SLA), demands a service management application that takes into account the specifics of process-oriented compositions and supports their automated provisioning. As a first step towards such an application, in this paper we introduce the conceptual design for an architecture and implementation of an interoperable and flexible manageability infrastructure offering comprehensive monitoring and control functionality for the management of service compositions. To achieve this, our approach is based on well-understood methodologies and standards from the area of application and web service management.
[31] Olaf Seng, Johannes Stammel, and David Burkhart. Search-based determination of refactorings for improving the class structure of object-oriented systems. In 8th Annual Conference on Genetic and Evolutionary Computation, 2006, pages 1909-1916. ACM Press, Seattle, Washington, USA. 2006. [ bib | Abstract ]
A software system's structure degrades over time, a phenomenon that is known as software decay or design drift. Since the quality of the structure has a major impact on the maintainability of a system, the structure has to be reconditioned from time to time. Even if recent advances in the fields of automated detection of bad smells and refactorings have made life easier for software engineers, this is still a very complex and resource-consuming task. Search-based approaches have turned out to be helpful in aiding a software engineer to improve the subsystem structure of a software system. In this paper we show that such techniques are also applicable when reconditioning the class structure of a system. We describe a novel search-based approach that assists a software engineer who has to perform this task by suggesting a list of refactorings. Our approach uses an evolutionary algorithm and simulated refactorings that do not change the system's externally visible behavior. The approach is evaluated using the open-source case study JHotDraw.
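As a rough illustration of the kind of search loop such approaches use, consider the minimal Python sketch below; the refactoring catalogue, its metric effects, and the fitness function are invented placeholders, not the authors' actual implementation.

    import random

    EFFECT = {  # assumed (coupling delta, cohesion delta) per refactoring
        "move_method": (-0.4, 0.2),
        "move_field": (-0.1, 0.1),
        "extract_class": (0.3, 0.6),
    }

    def fitness(seq, start=(10.0, 5.0)):
        # Lower coupling and higher cohesion are better.
        coupling, cohesion = start
        for r in seq:
            coupling += EFFECT[r][0]
            cohesion += EFFECT[r][1]
        return cohesion - coupling

    def evolve(pop_size=20, seq_len=5, generations=50):
        # Population of candidate refactoring sequences.
        pop = [[random.choice(list(EFFECT)) for _ in range(seq_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]            # truncation selection
            children = [list(p) for p in parents]
            for child in children:                   # point mutation
                child[random.randrange(seq_len)] = random.choice(list(EFFECT))
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())  # best refactoring sequence found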
[32] Niels Streekmann and Steffen Becker. A Case Study for Using Generator Configuration to Support Performance Prediction of Software Component Adaptation. In Short Paper Proceedings of the Second International Conference on Quality of Software Architectures (QoSA2006), Västerås, Sweden, June 27 - 29, 2006, TR 2006-10, University of Karlsruhe (TH), Christine Hofmeister, Ivica Crnkovic, Ralf H. Reussner, and Steffen Becker, editors, 2006. [ bib | Abstract ]
In order to put component-based software engineering into practice we have to consider the effect of software component adaptation. Adaptation is used in existing systems to bridge interoperability problems between bound interfaces, e.g., to integrate existing legacy systems into new software architectures. In CBSE, one of the aims is to predict the properties of the assembled system from its basic parts. Adaptation is a special case of composition and can consequently be treated in a special way. The precision of the prediction methods can be increased by exploiting additional knowledge about the adapter. This work motivates the use of adapter generators which simultaneously produce prediction models.
[33] Robert Vaupel, Ulrich Hild, and Stefan Wirag. z/OS Workload Management - Überblick, Stärken und Ausblick (z/OS Workload Management - Overview, Strength and Outlook). it - Information Technology, 48(5):294-303, 2006, Oldenbourg Wissenschaftsverlag GmbH. [ bib | http ]
[34] Robert Vaupel and Chris Vignola. Managing Heterogeneous Workloads, chapter 10, pages 185-200. Larstan Publishing Inc., 2006. [ bib ]
[35] Daniel Winteler, Heiko Koziolek, Jens Happe, and Henrik Lipskoch. Die urheberrechtliche Problematik geschlossener Linux Kernelmodule aus Sicht des deutschen Rechts. In Proceedings of the 3rd Workshop Informationsysteme mit Open Source (ISOS2006), Dresden, Germany, Heinrich Jasper and Olaf Zukunft, editors, 2006, Lecture Notes in Informatics. [ bib | .pdf | Abstract ]
Developers of proprietary information systems face considerable uncertainty in connection with GPL-licensed software. In particular, the viral effect of the GPL on Linux kernel modules (LKMs) is a hotly debated problem. This discussion tends to overlook that the legal situation in Germany and Europe differs from that in the United States. Using concrete examples, this contribution examines which constellations must be strictly distinguished and how the legal situation presents itself for the development and distribution of LKMs in Germany. The insights gained can be transferred to other projects that incorporate GPL-licensed software.
[36] Short Paper Proceedings of the Second International Conference on Quality of Software Architectures (QoSA2006), Västerås, Sweden, June 27 - 29, 2006. Technical Report 2006-10, Universität Karlsruhe (TH), 2006. [ bib ]
[37] Ralf H. Reussner, Johannes Mayer, Judith A. Stafford, Sven Overhage, Steffen Becker, and Patrick J. Schroeder, editors. Quality of Software Architectures and Software Quality: First International Conference on the Quality of Software Architectures, QoSA 2005, and Second International Workshop on Software Quality, SOQUA 2005, Erfurt, Germany, volume 3712 of Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2006. [ bib ]


Publications 2005

[1] Petr Hnetynka, Frantisek Plasil, Tomas Bures, Vladimir Mencl, and Lucia Kapova. SOFA 2.0 metamodel. Technical report, Dep. of SW Engineering, Charles University, December 2005. [ bib | Abstract ]
In this report, we present a new version of the SOFA component model, SOFA 2.0. The SOFA component model now seamlessly integrates component-based technology with service-oriented technology. Such a technology merge takes advantage of both approaches and allows for better management of features like dynamic reconfiguration, support for multiple communication styles, heterogeneous applications, etc.
[2] Mircea Trifu and Volker Kuttruff. Capturing nontrivial concerns in object-oriented software. In Proceedings of the 12th Working Conference on Reverse Engineering, November 2005, pages 99-108. IEEE. November 2005. [ bib ]
[3] Heiko Koziolek and Steffen Becker. Transforming Operational Profiles of Software Components for Quality of Service Predictions. In Proceedings of the 10th Workshop on Component Oriented Programming (WCOP2005), Glasgow, UK, Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 2005. [ bib | .pdf | Abstract ]
Current Quality-of-Service (QoS) prediction methods for component-based software systems disregard the influence of the operational profile when anticipating an architecture's performance, reliability, security or safety. The operational profile captures the set of inputs and outputs of a software component. We argue that a detailed operational profile, especially for software components, is necessary for accurate QoS predictions and that a standardised form of it is needed. We demonstrate that components act as transformers of an operational profile and discuss that this transformation has to be described, so that QoS prediction methods are able to deliver appropriate results for component-based architectures.
[4] Steffen Becker. Using Generated Design Patterns to Support QoS Prediction of Software Component Adaptation. In Proceedings of the Second International Workshop on Coordination and Adaptation Techniques for Software Entities (WCAT 05), Carlos Canal, Juan Manuel Murillo, and Pascal Poizat, editors, 2005. [ bib | .pdf | Abstract ]
In order to put component-based software engineering into practice we have to consider the effect of software component adaptation. Adaptation is used in existing systems to bridge interoperability problems between bound interfaces, e.g., to integrate existing legacy systems into new software architectures. In CBSE, one of the aims is to predict the properties of the assembled system from its basic parts. Adaptation is a special case of composition and can consequently be treated in a special way. The precision of the prediction methods can be increased by exploiting additional knowledge about the adapter. This work motivates the use of adapter generators which simultaneously produce prediction models.
[5] Steffen Becker, Carlos Canal, Juan Manuel Murillo, Pascal Poizat, and Massimo Tivoli. Design Time, Run Time and Implementation of Adaptation. In Report on the Second International Workshop on Coordination and Adaptation Techniques for Software Entities (WCAT'05), 2005. [ bib ]
[6] Viktoria Firus, Steffen Becker, and Jens Happe. Parametric Performance Contracts for QML-specified Software Components. In Formal Foundations of Embedded Software and Component-based Software Architectures (FESCA), 2005, volume 141 of Electronic Notes in Theoretical Computer Science, pages 73-90. [ bib | .pdf | Abstract ]
The performance of a software component heavily depends on the environment of the component. As a software component only justifies its investment when deployed in several environments, one cannot specify the performance of a component as a constant (e.g., as a single value or distribution of values in its interface). Hence, classical component contracts allowing to state the component's performance as a post-condition, if the environment realises a specific performance stated in the precondition, do not help. This fixed pair of pre- and postconditions does not model that a component can have very different performance figures depending on its context. Instead, parametric contracts are needed for specifying the environmental dependency of the component's provided performance. In this paper we discuss the specification of dependencies of external calls for the performance metric response time. We present an approach using parametric contracts to compute the statistical distribution of response time as a discrete distribution in dependency of the distribution of response times of environmental services. We use the Quality of Service Modeling Language (QML) as a syntax for specifying distributions.
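For the sequential composition of calls, the computation the abstract describes amounts to convolving discrete response-time distributions; a minimal sketch follows, with distributions invented purely for illustration.

    def convolve(dist_a, dist_b):
        # Discrete distributions as dicts mapping response time -> probability.
        result = {}
        for t_a, p_a in dist_a.items():
            for t_b, p_b in dist_b.items():
                result[t_a + t_b] = result.get(t_a + t_b, 0.0) + p_a * p_b
        return result

    # An external call (10 ms with p=0.7, 20 ms with p=0.3) followed by
    # 5 ms of local work:
    external = {10: 0.7, 20: 0.3}
    local = {5: 1.0}
    print(convolve(external, local))  # {15: 0.7, 25: 0.3}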
[7] Viktoria Firus, Heiko Koziolek, Steffen Becker, Ralf H. Reussner, and Wilhelm Hasselbring. Empirische Bewertung von Performanz-Vorhersageverfahren für Software-Architekturen. In Software Engineering 2005 Proceedings - Fachtagung des GI-Fachbereichs Softwaretechnik, Peter Liggesmeyer, Klaus Pohl, and Michael Goedicke, editors, 2005, volume 64 of GI-Edition Lecture Notes in Informatics, pages 55-66. [ bib | .pdf | Abstract ]
The architecture of a software system decisively influences its quality attributes, such as performance or reliability. Architectural changes are therefore often the only way to fix deficiencies in these quality attributes. The later such changes are made during the software development process, the more expensive and risky they become. For this reason, an early analysis of different architectural design alternatives with respect to their impact on quality attributes is beneficial. This article describes the evaluation of three different performance prediction approaches for software architectures regarding their suitability to give correct recommendations for early design decisions. In addition, these prediction approaches are meant to check whether externally given performance requirements are feasible. The performance prediction approaches SPE, Capacity Planning and umlPSI were examined empirically by 31 participants, who had to assess a set of given design alternatives for the architecture of a web server. The results show that design alternatives were assessed correctly with all approaches, provided they had a clear impact on performance. Without the use of the performance prediction approaches, less performant design alternatives were proposed more frequently. Moreover, the Capacity Planning approach was able to predict the absolute values for most design alternatives relatively accurately.
[8] Juraj Galba, Frantisek Jakab, and Lucia Kapova. Remote laboratory in education - VirtualLAB integration in e-learning education methods. In The 4th International Conference on Emerging e-learning Technologies and Applications, 2005, Information and Communications Technologies in Education. ELFA, Kosice. 2005. [ bib | Abstract ]
The article presents the architecture of the semi-virtual campus technical infrastructure for the Edinet project. Its aim is to integrate data network laboratories of multiple partners into a single unified system accessible by students remotely via the Internet. The architecture is defined to integrate various existing remotely accessible networking laboratories and education approaches. The intent was to reach maximum flexibility to support efficient sharing of lab equipment, including the possibility to create temporary distributed lab topologies spanning multiple partners.
[9] Henning Groenda, Fabian Nowak, Patrick Rößler, and Uwe D. Hanebeck. Telepresence Techniques for Controlling Avatar Motion in First Person Games. In INTETAIN, 2005, pages 44-53. [ bib | .pdf | Abstract ]
First person games are computer games in which the user experiences the virtual game world from an avatar's view. This avatar is the user's alter ego in the game. In this paper, we present a telepresence interface for the first person game Quake III Arena, which gives the user the impression of presence in the game and thus leads to identification with his avatar. This is achieved by tracking the user's motion and using this motion data as control input for the avatar. As the user wears a head-mounted display and perceives his actions affecting the virtual environment, he fully immerses into the target environment. Without further processing of the user's motion data, the virtual environment would be limited to the size of the user's real environment, which is not desirable. The use of Motion Compression, however, allows exploring an arbitrarily large virtual environment while the user is actually moving in an environment of limited size.
[10] Jens Happe. Predicting Mean Service Execution Times of Software Components Based on Markov Models. In First International Conference on Quality of Software Architectures, volume 3712 of Lecture Notes in Computer Science, pages 53-70. Springer-Verlag Berlin Heidelberg, 2005. [ bib | .pdf | Abstract ]
One of the aims of component-based software engineering is the reuse of existing software components in different deployment contexts. With the redeployment of a component, its performance changes, since it depends on the performance of external services, the underlying hardware and software, and the operational profile. Therefore, performance prediction models are required that are able to handle these dependencies and use the properties of component-based software systems. Parametric contracts model the relationship of provided and required services of a component. In this paper, we analyse the influence of external services on the service execution time applying parametric contracts and a performance prediction algorithm based on Markov chains. We verbalise the assumptions of this approach and evaluate their validity with an experiment. We will see that most of the assumptions hold only under certain constraints.
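A minimal sketch of the underlying Markov computation follows; the two-state model, the transition probabilities, and the per-call times are invented for illustration and are not the paper's case study.

    import numpy as np

    # Transition probabilities among the transient states (external calls);
    # missing probability mass means the service finishes.
    Q = np.array([[0.0, 0.8],
                  [0.5, 0.0]])
    mean_times = np.array([2.0, 3.0])  # mean execution time per call (ms)

    # Fundamental matrix N = (I - Q)^-1: N[i, j] is the expected number
    # of visits to state j when the chain starts in state i.
    N = np.linalg.inv(np.eye(2) - Q)
    print(N[0] @ mean_times)  # expected total execution time from state 0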
[11] Jens Happe. Performance Prediction for Embedded Systems. In Trustworthy Software Systems, 2005, volume 2, pages 173-196. [ bib | .pdf | Abstract ]
In this paper, we discuss different approaches to performance prediction of embedded systems. We distinguish two categories of prediction models depending on the system type. First, we consider prediction models for hard real-time systems. These are systems whose correctness depends on the ability to meet all deadlines. Therefore, methods to compute the worst case execution time of each process are required. The worst case execution times are then used in combination with scheduling algorithms to prove the feasibility of the system on a given set of processors. Second, we consider prediction models for soft real-time systems whose deadlines can be missed occasionally. Stochastic approaches which determine the probability of meeting a deadline are used in this case. We discuss these approaches with an example based on Stochastic Automaton Networks. Finally, we discuss the applicability of performance prediction models for embedded systems to general software systems.
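A classic example of the kind of hard real-time feasibility check the survey refers to is the Liu and Layland utilization bound for rate-monotonic scheduling; this is a textbook test, not one proposed in the paper, and the task set below is made up.

    def rm_schedulable(tasks):
        # tasks: list of (worst-case execution time, period) pairs.
        # Sufficient condition: sum(C_i / T_i) <= n * (2^(1/n) - 1).
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        return utilization <= n * (2 ** (1.0 / n) - 1)

    tasks = [(1, 4), (2, 8), (1, 16)]
    print(rm_schedulable(tasks))  # True: utilization 0.5625 <= ~0.7798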
[12] Jens Happe and Viktoria Firus. Using Stochastic Petri Nets to Predict Quality of Service Attributes of Component-Based Software Architectures. In Proceedings of the Tenth Workshop on Component Oriented Programming (WCOP2005), 2005. [ bib | .pdf | Abstract ]
The Quality of Service attributes of a software component heavily depend on its environment. For example, if a component uses a highly unreliable service, its own reliability is likely to decrease as well. This relation can be described with parametric contracts, which model the dependence between provided and required services of a component. Until now, parametric contracts can only model single-threaded systems. We plan to extend parametric contracts with Stochastic Petri nets to model multi-threaded systems. This enables the detection of resource conflicts and the consideration of the influence of concurrency on Quality of Service attributes, like performance.
[13] Heiko Koziolek. The Role of Experimentation in Software Engineering. In Research Methods in Software Engineering, Wilhelm Hasselbring and Simon Giesecke, editors, 2005, volume 1 of Trustworthy Software Systems, pages 11-33. GITO-Verlag, Berlin, 2006. 2005. [ bib | .pdf ]
[14] Heiko Koziolek. Operational Profiles for Software Reliability. In Dependability Engineering, Wilhelm Hasselbring and Simon Giesecke, editors, 2005, volume 2 of Trustworthy Software Systems, pages 119-142. GITO-Verlag, Berlin, 2006. 2005. [ bib | .pdf | Abstract ]
Software needs to be tested extensively before it is considered dependable and trustworthy. To guide testing, software developers often use an operational profile, which is a quantitative representation of how a system will be used. By documenting user inputs and their occurrence probabilities in such a profile, it can be ensured that the most used functions of a system are tested the most. Test cases can be generated directly out of an operational profile. Operational profiles are also a necessary part of quality-of-service prediction methods for software architectures, because these models have to include user inputs into their calculations. This paper outlines how operational profiles can be modelled in principle. Different kinds of usage descriptions of software systems have been developed and are summarized in this article.
[15] Heiko Koziolek and Viktoria Firus. Empirical Evaluation of Model-based Performance Prediction Methods in Software Development. In Proceedings of the First International Conference on the Quality of Software Architectures (QoSA'05), Ralf H. Reussner, Johannes Mayer, Judith A. Stafford, Sven Overhage, Steffen Becker, and Patrick J. Schroeder, editors, 2005, volume 3712 of Lecture Notes in Computer Science, pages 188-202. Springer-Verlag Berlin Heidelberg. 2005. [ bib | .pdf | Abstract ]
Predicting the performance of software architectures during early design stages is an active field of research in software engineering. It is expected that accurate predictions minimize the risk of performance problems in software systems to a great extent. This would improve quality and save development time and the costs of subsequent code fixes. Although a lot of different methods have been proposed, none of them has gained widespread application in practice. In this paper we describe the evaluation and comparison of three approaches for early performance predictions (Software Performance Engineering (SPE), Capacity Planning (CP) and umlPSI). We conducted an experiment with 31 computer science students. Our results show that SPE and CP are suited for supporting performance design decisions in our scenario. CP is also able to validate performance goals as stated in requirement documents under certain conditions. We found that SPE and CP are mature, yet lack the proper tool support that would ease their application in practice.
[16] Marek Lesso, Lucia Kapova, and Rastislav Orsulak. Technologies for building information systems: ESZ Sybase - Administration system case study. In 5th PhD Student Conference, 2005. Kosice. [ bib ]
[17] Anne Martens. Empirical Validation and Comparison of the Model-Driven Performance Prediction Techniques of CB-SPE and Palladio. Carl-von-Ossietzky Universität Oldenburg, 2005. individual project thesis (similar to a Bachelor's thesis). [ bib | .pdf | Abstract ]
For the design of component-based systems, it is important to guarantee non-functional attributes before actually composing a system. Performance is usually a crucial property of a software system, for safety or usability reasons. Several approaches to predict performance characteristics of a component-based system in an early stage of development have been introduced in recent times. However, for an engineering discipline, not only the proposal of techniques is needed, but also empirical studies of their applicability. This work empirically compares and evaluates two approaches to early prediction of the performance of component-based software systems. The empirical study is conducted in the form of a case study, although attempts are made to achieve good generalizability. The results attest to the good applicability of the CB-SPE technique, although some problems occurred. The Palladio technique achieves weaker results; here, there were problems with the specification of the distribution functions.
[18] Jan Michlik, Lucia Kapova, and Frantisek Jakab. System for remote access to the laboratories - Virtual Lab. In 5th PhD Student Conference, 2005. Kosice. [ bib ]
[19] Pierre Parrend. Mde et cscw, groupware, travail cooperatif capillaire. Master's thesis, Ecole Centrale de Lyon, 2005. [ bib | http | Abstract ]
Collaborative applications let users work in a shared environment to support common work. Model Driven Engineering (MDE) aims at separating the description of the application from the description of the target platform and architecture. Numerous patterns are available in the literature which may be reused in an MDE process for CSCW. However, they provide only isolated solutions and may only be useful as a complement to an existing framework. Our proposition to solve this problem is to integrate ontologies in the MDE process to represent the functionalities of collaborative applications, so as to generate MDE models for collaborative services deduced from use scenarios. Several working groups have recently been created to study the role of ontologies in MDE on the one hand, and in CSCW on the other hand: this is a growing research domain.
[20] Pierre Parrend and Bertrand David. Use of ontologies as a way to automate MDE processes. In IEEE EuroCon 2005, Belgrade, Serbia-Montenegro, 2005. [ bib | http | Abstract ]
Model Driven Engineering (MDE) is attracting growing interest, both as a research domain and as an industrial process for building software quickly and reliably. However, on the way to reuse and automation of design processes, it has limitations, as it focuses much more on design than on user needs. Using an ontology that represents domain design knowledge can be a way to bridge the gap between use scenarios and models, and thus to empower MDE approaches.
[21] Ralf Reussner, Jens Happe, and Annegreth Habel. Modelling Parametric Contracts and the State Space of Composite Components by Graph Grammars. In Fundamental Approaches to Software Engineering (FASE), 2005, volume 3442 of Lecture Notes in Computer Science, pages 80-95. Springer-Verlag Berlin Heidelberg. 2005. [ bib | .pdf | Abstract ]
Modeling the dependencies between provided and required services within a software component is necessary for several reasons, such as automated component adaptation and architectural dependency analysis. Parametric contracts for software components specify such dependencies and were successfully used for automated protocol adaptation and quality of service prediction. In this paper, a novel model for parametric contracts based on graph grammars is presented and a first definition of the compositionality of parametric contracts is given. Compared to the previously used finite state machine based formalism, the graph grammar formalism allows a more elegant formulation of parametric contract applications and considerably simpler implementations.
[22] Mircea Trifu and Peter Szulman. Language independent abstract metamodel for quality analysis and improvement of oo systems. GI Softwaretechnik-Trends, 25(2), 2005. [ bib ]


Publications 2004

[1] Ralf H. Reussner, Steffen Becker, and Viktoria Firus. Component Composition with Parametric Contracts. In Tagungsband der Net.ObjectDays 2004, September 2004, pages 155-169. [ bib | .pdf | Abstract ]
We discuss compositionality in terms of (a) component interoperability and contractual use of components, (b) component adaptation and (c) prediction of properties of composite components. In particular, we present parametric component contracts as a framework treating the above mentioned facets of compositionality in a unified way. Parametric contracts compute component interfaces in dependency of context properties, such as available external services or the profile of how the component will be used by its clients. Under well-specified conditions, parametric contracts yield interfaces offering interoperability to the component context (as they are component-specifically generated). Therefore, parametric contracts can be considered an adaptation mechanism, adapting a component's provides- or requires-interface depending on connected components. If non-functional properties are specified in a component's provides-interface, parametric contracts compute these non-functional properties in dependency of the environment.
[2] Pierre Parrend and Isabelle Auge-Blum. Validation temporelle d'architectures embarquees pour l'automobile. Technical Report RR200, CITI Lab, INSA de Lyon, July 2004. [ bib | http | Abstract ]
This study is set in a context of strong growth of electronics-based automotive applications intended to replace certain mechanical parts, such as braking and steering systems. The most widely used protocol today is CAN, but it is not sufficient for applications requiring a high degree of safety. Other protocols have therefore been developed following the time-triggered paradigm (based on a pre-defined schedule), such as TTA and FlexRay; this type of protocol is easier to validate. TTCAN emerged from the combination of CAN and the time-triggered protocols, and it is this protocol that we study here. A formal validation is indispensable for a protocol intended for applications with high safety requirements. We study its temporal behaviour with the UPPAAL tool, which enables the analysis, using a method developed at CITI. We present the modelling carried out for the analysis as well as the results obtained. These data allow a systematic comparison with the TTA protocol, which puts both protocols into critical perspective.
[3] Markus Bauer and Mircea Trifu. Combining clustering with pattern matching for architecture recovery of oo systems. GI Softwaretechnik-Trends, 24(2), May 2004. [ bib ]
[4] Markus Bauer and Mircea Trifu. Architecture-aware adaptive clustering of OO systems. In Proceedings of the 8-th European Conference on Software Maintenance and Reengineering, 2004, pages 3-14. IEEE. 2004. [ bib ]
[5] Steffen Becker. The Palladio Component Model. Technical report, University of Oldenburg, 2004. [ bib ]
[6] Steffen Becker, Viktoria Firus, Simon Giesecke, Wilhelm Hasselbring, Sven Overhage, and Ralf H. Reussner. Towards a Generic Framework for Evaluating Component-Based Software Architectures. In Architekturen, Komponenten, Anwendungen - Proceedings zur 1. Verbundtagung Architekturen, Komponenten, Anwendungen (AKA 2004), Universität Augsburg, Klaus Turowski, editor, 2004, volume 57 of GI-Edition Lecture Notes in Informatics, pages 163-180. [ bib | .pdf | Abstract ]
The evaluation of software architectures is crucial to ensure that the design of software systems meets the requirements. We present a generic methodical framework that enables the evaluation of component-based software architectures. It allows determining system characteristics on the basis of the characteristics of its constituent components. Basic prerequisites are discussed and an overview of different architectural views is given, which can be utilised for the evaluation process. On this basis, we outline the general process of evaluating software architectures and provide a taxonomy of existing evaluation methods. To illustrate the evaluation of software architectures in practice, we present some of the methods in detail.
[7] Steffen Becker, Sven Overhage, and Ralf H. Reussner. Classifying Software Component Interoperability Errors to Support Component Adaption. In Proceedings of the 7th International Symposium on Component-Based Software Engineering (CBSE 2004), Edinburgh, UK, Ivica Crnkovic, Judith A. Stafford, Heinz W. Schmidt, and Kurt C. Wallnau, editors, 2004, volume 3054 of Lecture Notes in Computer Science, pages 68-83. Springer-Verlag Berlin Heidelberg. 2004. [ bib | http | Abstract ]
This paper discusses various classifications of component interoperability errors. These classifications aim at supporting the automation of component adaptation. The use of software components will only prove beneficial if the costs for component deployment (i.e., acquisition and composition) are considerably lower than those for custom component development. One of the main reasons for the moderate progress in component-based software engineering is the high cost of component deployment. These costs are mainly caused by adapting components to bridge interoperability errors between unfitting components. One way to lower the costs of component deployment is to support component adaptation by tools, i.e., for interoperability checks and (semi-)automated adaptor generation. This automation of component adaptation requires a deep understanding of component interoperability errors. In particular, one has to differentiate between different classes of interoperability errors, as different errors require different adaptors for their resolution. Therefore, the presented classification of component interoperability errors supports the automation of component adaptation by aiding automated interoperability problem detection and semi-automated adaptor generation. The experience gained from already implemented solutions for a specific class of interoperability errors provides hints for the solution of similar problems of the same class.
[8] Steffen Becker and Ralf H. Reussner. The Impact of Software Component Adaptors on Quality of Service Properties. In Proceedings of the First International Workshop on Coordination and Adaptation Techniques for Software Entities (WCAT 04), Carlos Canal, Juan Manuel Murillo, and Pascal Poizat, editors, 2004. [ bib | .pdf | Abstract ]
Component adaptors are often used to bridge gaps between the functional requirements of a component and the functional specification of another one supposed to provide the needed services. As bridging functional mismatches is necessary, the use of adaptors is often unavoidable. This emphasises the relevance of a drawback of adaptor usage: the alteration of the Quality of Service properties of the adapted component. That is especially nasty if the original QoS properties of the component have been a major criterion for the choice of the respective component. Therefore, we give an overview of examples of the problem and highlight some approaches to cope with it.
[9] Viktoria Firus and Steffen Becker. Towards Performance Evaluation of Component Based Software Architectures. In Proceedings of Formal Foundation of Embedded Software and Component-Based Software Architectures (FESCA), 2004, volume 108 of Electronic Notes in Theoretical Computer Science, pages 118-121. [ bib | .pdf ]
[10] Karen Godary, Pierre Parrend, and Isabelle Auge-Blum. Comparison and temporal validation of automotive real-time architectures. Technical report, CITI Lab, INSA de Lyon, 2004. [ bib | http | Abstract ]
In the automotive domain, X-by-wire systems are dedicated to critical and real-time applications. These systems have specific needs that must be fulfilled, in particular in the reliability domain. Fault-tolerant architectures have been designed to meet these requirements: TTA, FlexRay or TTCAN. This paper presents a methodology of temporal validation and illustrates it for the validation of TTA and TTCAN services. This validation provides temporal bounds that can be used for the comparison of these architectures.
[11] Jens Happe. Reliability Prediction of Component-Based Software Architectures. Master's thesis, University of Oldenburg, 2004. [ bib | .pdf | Abstract ]
From the user's point of view, the reliability of a software component depends on its environment as well as its usage profile. The environment of a component includes the external services invoked by the component and the hardware and software it is deployed on. The usage profile determines which services of the component are needed and describes all possible call sequences in form of a Markov model. The influence of the usage profile and the reliability of external services on the reliability of a component-based software architecture has been analysed in [38]. There, parametric contracts are used to determine the reliability of a component in its environment. Parametric contracts use so-called service effect specifications which describe the usage of external services by a service provided by the component to create a mapping between the provides- and requires-interfaces of the same component. We extend the approach described there and consider the reliability of resources like devices (hardware) and execution environments (software). Therefore, we develop a mathematical model to determine the usage period of the resources depending on the usage profile. We compute the reliabilities of the resources on the basis of their usage period. This extends user orientation of software reliability towards system reliability. The consideration of the usage period of a resource requires a mathematical model to determine the execution time of a service. Based on parametric contracts, we develop two approaches to compute the execution time of a service. The first approach makes use of the properties of Markov chains and yields the expected (or average) execution time of a service. The second approach is an extension of parametric performance contracts [37] which describe the execution time of a service in form of a probability density function. We overcome the limits of the approach described there and give a mathematical model to determine the execution time of a loop based on the discrete Fourier transform. Furthermore, we describe how parametric performance contracts can be applied using regular expressions. Both computational models are also modified to deal with the usage periods of the system resources. The computation of the resource reliability based on the usage period is discussed as well. We use a component-based webserver recently developed in the context of the Palladio project [34] to evaluate some of the predictions made by our model.
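The discrete Fourier transform idea mentioned for loops can be pictured as follows: the execution-time density of a loop body executed k times is the k-fold convolution of the body's density, which is cheap to compute in the Fourier domain. The densities and loop-count probabilities below are invented for illustration, not taken from the thesis.

    import numpy as np

    body = np.array([0.0, 0.5, 0.5])    # P(body takes 1 or 2 time units)
    iteration_probs = {1: 0.3, 2: 0.7}  # P(loop body runs k times)

    size = (len(body) - 1) * max(iteration_probs) + 1  # result support
    spectrum = np.fft.rfft(body, n=size)
    loop = np.zeros(size)
    for k, p in iteration_probs.items():
        # spectrum ** k corresponds to the k-fold convolution of the density.
        loop += p * np.fft.irfft(spectrum ** k, n=size)
    print(loop.round(3))  # density [0, 0.15, 0.325, 0.35, 0.175]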
[12] W. Hasselbring, Ralf H. Reussner, H. Jaekel, J. Schlegelmilch, T. Teschke, and S. Krieghoff. The Dublo Architecture Pattern for Smooth Migration of Business Information Systems: An Experience Report. In ICSE '04: Proceedings of the 26th International Conference on Software Engineering, 2004, pages 117-126. IEEE Computer Society, Washington, DC, USA. 2004. [ bib | http | Abstract ]
While the importance of multi-tier architectures for enterprise information systems is widely accepted and their benefits are well published, the systematic migration from monolithic legacy systems toward multi-tier architectures is known to a much lesser extent. In this paper we present a pattern on how to re-use elements of legacy systems within multi-tier architectures, which also allows for a smooth migration path. We report on experience we made with migrating existing municipal information systems towards a multi-tier architecture. The experience is generalized by describing the underlying pattern such that it can be re-used for similar architectural migration tasks. The emerged Dublo pattern is based on the partial duplication of business logic among legacy system and newly deployed application server. While this somehow contradicts the separation-of-concerns principle, it offers a high degree of flexibility in the migration process and allows for a smooth transition. Experience with the combination of outdated database technology with modern server-side component and web services technologies is discussed. In this context, we also report on technology and architecture selection processes.
[13] Frantisek Jakab, Jan Michlik, Lucia Kapova, and Juraj Galba. VirtualLab. In 3rd International Conference on Emerging Telecommunications Technologies and Applications, 2004. ELFA, Kosice. 2004. [ bib ]
[14] Heiko Koziolek. Empirische Bewertung von Performance-Analyseverfahren für Software-Architekturen. Master's thesis, Universität Oldenburg, 2004. [ bib ]
[15] Heiko Koziolek. Empirical Evaluation of Performance-Analysis Methods for Software Architectures. http://sdqweb.ipd.kit.edu/publications/pdfs/koziolek2004b.pdf, 2004. Partial English translation of the original Master's thesis “Empirische Bewertung von Performance-Analyseverfahren für Software-Architekturen”, Universität Oldenburg. [ bib ]
[16] Benedikt Kratz, Ralf Reussner, and Willem-Jan van den Heuvel. Empirical Research on Similarity Metrics for Software Component Interfaces. J. Integr. Des. Process Sci., 8(4):1-17, 2004, IOS Press, Amsterdam, The Netherlands. [ bib ]
[17] Klaus Krogmann. Generierung von Adaptoren. Individual project, University of Oldenburg, Germany, 2004. [ bib | .pdf | Abstract ]
This report on the individual project gives an insight into the generation of adaptors at the signature level. It draws a differentiated distinction from other areas of adaptor generation, which is reflected, among other things, in the introductory section of the work. The focus of the discussion, taken from the perspective of component-based software development, is above all the generation process for component adaptors. The generation of adaptors depends significantly on how much information about the components to be adapted is available. Different levels of information richness are considered, which can lead to different results from different scientific perspectives. In the context of the implemented adaptor generator, the concept of converters is examined in particular; this versatile mechanism drastically increases the expressiveness of the adaptors that can be generated. Besides the concrete implementation, a comparison with other adaptation approaches is presented.
[18] Jürgen Meister, Ralf H. Reussner, and Martin Rhode. Applying Patterns to Develop a Product Line Architecture for Statistical Analysis Software. In Proceedings of the Fourth Working IEEE/IFIP Conference on Software Architecture (WICSA 4), 2004. IEEE/IFIP. 2004. [ bib ]
[19] Jürgen Meister, Ralf H. Reussner, and Martin Rhode. Managing Product Line Variability by Patterns. In Proceedings of the 5th Annual International Conference on Object-Oriented and Internet-based Technologies, Concepts, and Applications for a Networked World (Net.Objectdays 2004), 2004, Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg. 2004. [ bib ]
[20] Jan Michlik, Frantisek Jakab, and Lucia Kapova. Project Virtual Lab. In Annual Cisco Systems Networking Conference 2004, 2004. Stara Lesna. [ bib ]
[21] Ralf H. Reussner. The Working Group Software Architecture of the German Computer Science Society (GI-AK SoftArch): Report on the Workshop on Formal Foundations of Embedded Software and Component-based Software Architectures (FESCA). Newsletter of the European Association of Software Science and Technology (EASST), 2004. [ bib ]
[22] Ralf H. Reussner. Home Page of the Palladio Research Group. http://se.informatik.uni-oldenburg.de/research/current/Palladio, 20.12.2004, 2004. [ bib ]
[23] Ralf H. Reussner. The Role of the Software Architect: The Software Architecture Memorandum of the Sylter Runde. Newsletter of the European Association of Software Science and Technology (EASST), 2004. [ bib ]
[24] Ralf H. Reussner, Viktoria Firus, and Steffen Becker. Parametric Performance Contracts for Software Components and their Compositionality. In Proceedings of the 9th International Workshop on Component-Oriented Programming (WCOP 04), Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, 2004. [ bib | .pdf | Abstract ]
The performance of a software component heavily depends on the environment of the component. As a software component only justifies its investment when deployed in several environments, one cannot specify the performance of a component as a constant (e.g., as a single value or distribution of values in its interface). Hence, classical component contracts allowing to state the component's performance as a post-condition, if the environment realises a specific performance stated in the precondition, do not help. This fixed pair of pre- and postconditions does not model that a component can have very different performance figures depending on its context. Instead, parametric contracts are needed for specifying the environmental dependency of the component's provided performance. In this paper we discuss the specification of such dependencies for the performance metric response time. We model the statistical distribution of response time in dependency of the distribution of response times of environmental services.
[25] Ralf H. Reussner, Juliana Küster-Filipe, Iman H. Poernomo, and Sandeep Shukla. Report on the Workshop on Formal Foundations of Embedded Software and Component-based Software Architectures (FESCA). Technical report, 2004. [ bib ]
[26] Dan A. Simovici, Namita Singla, and Michael Kuperberg. Metric Incremental Clustering of Nominal Data. In The Fourth IEEE International Conference on Data Mining, 2004, pages 523-526. Brighton, UK. [ bib | .pdf | Abstract ]
We present an algorithm for clustering nominal data that is based on a metric on the set of partitions of a finite set of objects; this metric is defined starting from a lower valuation of the lattice of partitions. The proposed algorithm seeks to determine a clustering partition such that the total distance between this partition and the partitions determined by the attributes of the objects has a local minimum. The resulting clustering is quite stable relative to the ordering of the objects.
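One concrete metric of this lower-valuation kind is the Mirkin distance, where v(pi) is the sum of squared block sizes; whether this is exactly the paper's valuation is an assumption made for illustration.

    from collections import Counter

    def mirkin_distance(labels_a, labels_b):
        # d(pi, sigma) = v(pi) + v(sigma) - 2 * v(pi meet sigma), where the
        # blocks of the meet are the non-empty pairwise block intersections.
        v_a = sum(n ** 2 for n in Counter(labels_a).values())
        v_b = sum(n ** 2 for n in Counter(labels_b).values())
        v_meet = sum(n ** 2 for n in Counter(zip(labels_a, labels_b)).values())
        return v_a + v_b - 2 * v_meet

    # Two clusterings of five objects, given as per-object cluster labels:
    print(mirkin_distance([0, 0, 1, 1, 1], [0, 0, 0, 1, 1]))  # 8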
[27] Michael Teuffel and Robert Vaupel. Das Betriebssystem z/OS und die zSeries. Darstellung eines modernen Großrechnersystems. Oldenbourg, München, 2004. [ bib ]
[28] Robert Vaupel. Managing Workloads in z/OS. IBM Systems Magazine, January 2004. [ bib ]


Publications 2003

[1] Mircea Trifu. Architecture-aware adaptive clustering of object-oriented systems. Master's thesis, "Politehnica" University of Timisoara, Romania, September 2003. [ bib ]
[2] Jasminka Matevska-Meyer, Wilhelm Hasselbring, and Ralf H. Reussner. Exploiting Protocol Information for Speeding up Runtime Reconfiguration of Component-Based Systems. In Proceedings of the Eighth International Workshop on Component-Oriented Programming (WCOP'03), Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, June 2003. [ bib | .pdf | Abstract ]
To reduce the down-time of software systems and maximise the set of available services during reconfiguration, we propose exploiting component protocol information. This is achieved by knowing the state of a running system and determining the component dependencies for the time interval from receiving a reconfiguration request until reconfiguration completion. For this forecast we use the architectural descriptions that specify static dependencies, as well as component protocol information. By considering only component interactions for the time interval of reconfiguration we can exclude past and future dependencies from our runtime dependency graphs. We show that such change-request-specific runtime dependency graphs may be considerably smaller than the corresponding static architecture based dependency graphs; this way, we are speeding up runtime reconfiguration of component-based systems while maximising the set of available services.
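The pruning step can be pictured with a small sketch: keep only the interactions whose predicted activity interval overlaps the reconfiguration window. The edge data and names below are invented for illustration, not the authors' implementation.

    def runtime_dependency_graph(static_edges, window):
        # static_edges: {(src, dst): (start, end)} predicted interaction spans;
        # window: (request time, predicted completion time) of the change.
        w_start, w_end = window
        return {edge for edge, (start, end) in static_edges.items()
                if start <= w_end and end >= w_start}

    edges = {("A", "B"): (0, 5), ("B", "C"): (12, 20), ("A", "C"): (6, 9)}
    print(runtime_dependency_graph(edges, (7, 10)))  # {('A', 'C')}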
[3] Ralf H. Reussner. Contracts and Quality Attributes of Software Components. In Proceedings of the Eighth International Workshop on Component-Oriented Programming (WCOP'03), Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, June 2003. [ bib ]
[4] Ralf H. Reussner, Iman H. Poernomo, and Heinz W. Schmidt. Contracts and Quality Attributes for Software Components. In Proceedings of the Eighth International Workshop on Component-Oriented Programming (WCOP'03), Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, June 2003. [ bib | .pdf ]
[5] Steffen Becker. Konfigurationsmanagement komponentenorientierter betrieblicher Anwendungen. Master's thesis, TU Darmstadt, 2003. [ bib | .pdf | Abstract ]
This diploma thesis deals with questions of configuration management in the development of business applications with software components. Its main focus is on the generation of configurations and their systematic evaluation with respect to their benefits and costs. The most important elements in carrying out configuration management are presented in the form of processes, including a process for searching for configurations and for evaluating their benefit with the help of the AHP.
[6] Steffen Becker, Erich Ortner, and Sven Overhage. Der Komponentenansatz - ein Riesenschritt für unsere geistige Entwicklung, 2003. [ bib ]
[7] Steffen Becker and Sven Overhage. Stücklistenbasiertes Komponenten-Konfigurationsmanagement. In Tagungsband 5. Workshop Komponentenorientierte betriebliche Anwendungssysteme, Klaus Turowski, editor, 2003, pages 17-32. Universität Augsburg. 2003. [ bib | .pdf | Abstract ]
This contribution presents a concept for the configuration management of component-oriented applications. First, the term "configuration management" is explained in more detail, and then the bill-of-materials organisation is described as a suitable method for configuration management. The contribution concentrates on the development of a procedure for the (automated) support of component selection, based on bills of materials, a uniform specification framework, and multi-attribute decision making. Finally, change management, which is also part of configuration management, is described.
[8] Steffen Becker, Ralf H. Reussner, and Viktoria Firus. Specifying Contractual Use, Protocols and Quality Attributes for Software Components. In Proceedings of the First International Workshop on Component Engineering Methodology, Klaus Turowski and Sven Overhage, editors, 2003. [ bib | .pdf | Abstract ]
We discuss the specification of signatures, protocols (behaviour) and quality of service within software component specification frameworks. In particular we focus on (a) contractually used components, (b) the specification of components with variable contracts and interfaces, and (c) the specification of quality of service. Interface descriptions including these aspects allow powerful static interoperability checks. Unfortunately, the specification of constant component interfaces hinders the specification of quality attributes and impedes automated component adaptation. This is because quality attributes, in particular, heavily depend on the component's context. To enable the specification of quality attributes, we demonstrate the inclusion of parameterised contracts within a component specification framework. These parameterised contracts compute adapted, context-dependent component interfaces (including protocols and quality attributes). This allows taking context dependencies into account while still permitting powerful static interoperability checks.
[9] J. Boehringer, P. Fischer, H. Koziolek, U. Maurer, and S. Schulze. Method and system for session sharing, 2003. [ bib | .pdf | Abstract ]
Today users often have problems working with complex Web Applications and Web Portals. Although most of the Web Applications provide online help, the users often need human support to solve their problems. Today this can be achieved by using telephone support or external remote administration software. Both options have serious limitations. The telephone support is provided by an operator who cannot see what happens on the user's screen. The external remote administration software is not always available on the client side, can introduce security vulnerabilities, and adds more complexity.
[10] Stefan Brüggemann, Jens Happe, Stefan Hildebrandt, Sascha Olliges, Heiko Koziolek, Florian Krohs, Philipp Sandhaus, Rico Starke, Christian Storm, Timo Warns, and Stefan Willer. Komponentenmarktplatz für Enterprise Java Beans. In BTW Studierenden-Programm, Dresden, Germany, September, 2003, 2003, pages 56-58. [ bib | .pdf ]
[11] I. Poernomo, Ralf H. Reussner, and H. W. Schmidt. Architectural Configuration with EDOC and .NET Component Services. In Euromicro 2003, IEEE, Antalya - Turkey, September 3rd-5th, 2003, Gerhard Chroust, editor, 2003. [ bib ]
[12] Ralf H. Reussner. Automatic Component Protocol Adaptation with the CoCoNut Tool Suite. Future Generation Computer Systems, 19(5):627-639, 2003. [ bib ]
[13] Ralf H. Reussner. Using SKaMPI for Developing High-Performance MPI Programs with Performance Portability. Future Generation Computer Systems, 19(5):749-759, 2003. [ bib ]
[14] Ralf H. Reussner, Iman H. Poernomo, and Heinz W. Schmidt. Reasoning on Software Architectures with Contractually Specified Components. In Component-Based Software Quality: Methods and Techniques, A. Cechich, M. Piattini, and A. Vallecillo, editors, volume 2693 of Lecture Notes in Computer Science, pages 287-325. Springer-Verlag Berlin Heidelberg, 2003. [ bib | Abstract ]
One of the motivations for specifying software architectures explicitly is the better prediction of system quality attributes. In this chapter we present an approach for determining the reliability of component-based software architectures. Our method is based on RADL (Rich Architecture Definition Language), an extension of DARWIN [16]. RADL places special emphasis on component interoperation and, in particular, on accounting for the effects of interoperation on system reliability. To achieve this, our methods use a notion of design-by-contract [19] for components, called parameterized contracts [26]. Our contracts involve finite state machines that allow software architects to define how a component's reliability will react to a deployment environment. We show how a system, built from contractually specified components, can be understood in terms of Markov models, facilitating system reliability analysis. We illustrate our approach with an e-commerce example and report on empirical measurements which confirm our analytical reliability prediction by means of monitoring in our reliability testbed.
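As a toy illustration of the kind of Markov-model reliability analysis this chapter describes (not the RADL tooling itself; all state names and probabilities below are invented), the success probability of an architecture modelled as an absorbing Markov chain follows from solving a single linear system:

    import numpy as np

    # Toy Markov model of a component-based architecture. Transition
    # probabilities are assumed to be already weighted by the reliability
    # of each service call (all states and numbers invented).
    transient = ["init", "query", "pay"]
    transitions = {
        "init":  {"query": 0.98, "FAIL": 0.02},
        "query": {"query": 0.30, "pay": 0.68, "FAIL": 0.02},
        "pay":   {"OK": 0.99, "FAIL": 0.01},
    }

    idx = {s: i for i, s in enumerate(transient)}
    n = len(transient)
    Q = np.zeros((n, n))   # transient-to-transient probabilities
    r = np.zeros(n)        # one-step probabilities of reaching OK
    for s, succ in transitions.items():
        for t, p in succ.items():
            if t in idx:
                Q[idx[s], idx[t]] = p
            elif t == "OK":
                r[idx[s]] = p

    # Absorption probabilities x satisfy x = Qx + r, i.e. x = (I - Q)^-1 r.
    x = np.linalg.solve(np.eye(n) - Q, r)
    print(f"predicted reliability from 'init': {x[idx['init']]:.4f}")

Solving x = Qx + r rather than simulating runs yields the exact probability of reaching the success state, which is the kind of closed-form prediction a Markov treatment of contractually specified components enables.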
[15] Ralf H. Reussner, Heinz W. Schmidt, and Iman Poernomo. Reliability Prediction for Component-Based Software Architectures. Journal of Systems and Software - Special Issue of Software Architecture - Engineering Quality Attributes, 66(3):241-252, 2003. [ bib | Abstract ]
Due to the increasing size and complexity of software systems, software architectures have become a crucial part of development projects. A lot of effort has been put into defining formal ways of describing architecture specifications using Architecture Description Languages (ADLs). Since no common ADL today offers tools for evaluating performance, an attempt was made to develop such a tool based on an event-based simulation engine. Common ADLs were investigated and the work was based on the fundamentals of the field of software architectures. The tool was evaluated both in terms of correctness of predictions and in terms of usability, to show that it actually is possible to evaluate performance using high-level architectures as models.
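For readers unfamiliar with event-based simulation engines of the kind the abstract mentions, the skeleton below shows the core mechanism, a priority queue of timestamped events processed in time order. It is a generic sketch with an invented toy workload, not the tool described:

    import heapq
    import random

    # Minimal discrete-event engine: a heap of (time, seq, handler, payload)
    # tuples processed in timestamp order; seq breaks ties deterministically.
    class Simulator:
        def __init__(self):
            self.now = 0.0
            self._seq = 0
            self._queue = []

        def schedule(self, delay, handler, payload=None):
            self._seq += 1
            heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))

        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, handler, payload = heapq.heappop(self._queue)
                handler(payload)

    # Toy open workload (all parameters invented): requests arrive roughly
    # once per second; each takes an exponentially distributed service time.
    # No contention is modelled, so response time here equals service time.
    sim = Simulator()
    response_times = []

    def finish(start):
        response_times.append(sim.now - start)

    def arrival(_):
        sim.schedule(random.expovariate(1 / 0.4), finish, sim.now)  # service
        sim.schedule(random.expovariate(1.0), arrival)              # next arrival

    sim.schedule(0.0, arrival)
    sim.run(until=10_000)
    print(f"mean response time: {sum(response_times) / len(response_times):.3f}s")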


Publications 2002

[1] Ralf H. Reussner. Counter-Constraint Finite State Machines: A new Model for Resource-bounded Component Protocols. In Proceedings of the 29th Annual Conference in Current Trends in Theory and Practice of Informatics (SOFSEM 2002), Milovy, Czech Republic, Bill Grosky, Frantisek Plasil, and Ales Krenek, editors, November 2002, volume 2540 of Lecture Notes in Computer Science, pages 20-40. Springer-Verlag Berlin Heidelberg. November 2002. [ bib ]
[2] D. Dillenberger, S. Heisig, I. Salm, and Robert Vaupel. Managing Isochronous Processes in a Heterogenous Work Environment. Patent No. 6470406, United States, October 2002. [ bib ]
[3] Iman H. Poernomo, Ralf H. Reussner, and Heinz W. Schmidt. Architectures of Enterprise Systems: Modelling Transactional Contexts. In Proceedings of the First IFIP/ACM Working Conference on Component Deployment (CD 2002), June 2002, volume 2370 of Lecture Notes in Computer Science, pages 233-243. Springer-Verlag Berlin Heidelberg. June 2002. [ bib ]
[4] Matthias Clauss, Elke Pulvermüller, Ralf H. Reussner, Andreas Speck, and Ragnhild van der Straeten. Model-based Software Reuse. In ECOOP '02 Reader, Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2002. [ bib | Abstract ]
In a model based software system, a set of business rules is scanned, and patterns are identified. The patterns are then compared, and similarities identified which indicate that software can be reused in the system. In one embodiment, identifiers of the rules are scanned. In another embodiment, usage patterns are used for designing a middle layer and generating code. In another embodiment of the invention, a data model is generated by capturing data from a user interface for a business document.
[5] Jane Christy Jayaputera, Iman H. Poernomo, Ralf H. Reussner, and Heinz W. Schmidt. Timed Probabilistic Reasoning on Component Based Architectures. In Proceedings of the Third Australian Workshop on Computational Logic (AWCL 2002), Canberra, Australia, December 2002, Australian National University, Harald Søndergaard, editor, 2002. [ bib ]
[6] Erik Kamsties, Antje von Knethen, and Ralf H. Reussner. A Controlled Experiment on the Understandability of Different Requirements Specifications Styles. In Proceedings of the Eighth International Workshop on Requirements Engineering: Foundation for Software Quality, 2002. [ bib | .pdf | Abstract ]
In this paper, we report on a controlled experiment in which we compared two different requirements specification styles. Following the traditional black-box style, a system is described by its externally visible behavior; any design detail is omitted from the requirements. Following the white-box style, which was popularized by object-oriented analysis, a system is described by the behavior of its constituent entities, e.g., objects. In the experiment, we compared the understandability of two requirements specifications of the same system, each written in a different style. The appropriate choice of a specification style depends on several factors, including the project characteristics, the nature of the requirements at hand, and the intended readers. In this paper, we focus on the last factor and investigate understandability from the viewpoint of a customer. The results of the experiment indicate that it is easier to understand black-box requirements specifications from a customer point of view. Questions about particular functions and particular behavior of the specified system were answered by the participants faster and more correctly. This result suggests using the black-box specification style when communication with customers is important.
[7] Bernd J. Krämer, Ralf H. Reussner, and Heinz W. Schmidt. Predicting Properties of Component Based Software Architectures through Parameterised Contracts. In Monterey Workshop 2002 - Radical Innovations of Software and Systems Engineering, Venice, Italy, October 7-11, Martin Wirsing, editor, 2002, Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg. 2002. [ bib ]
[8] Bernd J. Krämer, Heinz W. Schmidt, Iman H. Poernomo, and Ralf H. Reussner. Predictable Component Architectures Using Dependent Finite State Machines. In Radical Innovations of Software and Systems Engineering in the Future, 9th International Workshop, RISSEF 2002, Venice, Italy, October 7-11, 2002, Revised Papers, Martin Wirsing, Alexander Knapp, and Simonetta Balsamo, editors, 2002, pages 310-324. Springer-Verlag Berlin Heidelberg. 2002. [ bib ]
[9] Ralf H. Reussner. Counter-Constraint Finite State Machines: Modelling Component Protocols with Resource-Dependencies. Technical Report 2002/121, School for Computer Science and Software Engineering, Monash University, 2002. [ bib | Abstract ]
This paper deals with the specification of software component protocols (i.e., the set of service call sequences). The contribution of this paper is twofold: (a) We discuss specific requirements of real-world protocols, especially in the presence of components which make use of limited resources. (b) We define counter-constrained finite state machines, a novel extension of finite state machines, specifically created to model protocols containing dependencies between services due to their access to shared resources. As opposed to other approaches like classical finite state machines, this new model combines two valuable properties: (a) it is powerful enough to model realistic component protocols with resource allocation, usage, and deallocation dependencies between methods (as occurring in common abstract data types such as stacks or queues), and (b) it allows efficient checking of interoperability and substitutability.
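To make the idea concrete, here is a small Python sketch in the spirit of counter-constrained finite state machines; the report's exact formalism differs, and the guard/update encoding and the stack example are our own. Each transition carries a guard over counters and a counter update, so a pop on a stack component is only enabled after a matching push:

    # Illustrative counter-constrained FSM (our own encoding, not the
    # report's exact formalism): each transition carries a guard over
    # counters and an update, modelling resource dependencies.
    class CounterFSM:
        def __init__(self, start, transitions):
            self.state = start
            self.counters = {}
            # transitions: (state, service) -> (guard, update, next_state)
            self.transitions = transitions

        def call(self, service):
            guard, update, nxt = self.transitions[(self.state, service)]
            if not guard(self.counters):
                raise RuntimeError(f"{service!r} violates a resource constraint")
            update(self.counters)
            self.state = nxt

    # Protocol of a stack component: 'pop' needs at least one prior 'push'.
    stack_protocol = CounterFSM("ready", {
        ("ready", "push"): (lambda c: True,
                            lambda c: c.update(size=c.get("size", 0) + 1),
                            "ready"),
        ("ready", "pop"):  (lambda c: c.get("size", 0) > 0,
                            lambda c: c.update(size=c["size"] - 1),
                            "ready"),
    })

    stack_protocol.call("push")
    stack_protocol.call("pop")      # legal: one element was pushed
    try:
        stack_protocol.call("pop")  # illegal: the stack is empty again
    except RuntimeError as err:
        print(err)

Because guards and updates only touch counters, interoperability checks stay decidable even though plain finite state machines could not express the push/pop dependency at all.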
[10] Ralf H. Reussner, Iman H. Poernomo, and John C. Grundy. Proceedings of the Fourth Australasian Workshop on Software and Systems Architectures. Technical report, 2002. [ bib ]
[11] Ralf H. Reussner, Iman H. Poernomo, and Heinz W. Schmidt. Using the TrustME Tool Suite for Automatic Component Protocol Adaptation. In Computational Science-ICCS 2002, Proc. of ICCS 2002, International Conference on Computational Science, Amsterdam, The Netherlands, 2002, P. Sloot, J. J. Dongarra, and C. J. K. Tan, editors, 2002, volume 2330 of Lecture Notes in Computer Science, pages 854-862. [ bib | Abstract ]
The deployment of component oriented software approaches gains increasing importance in the computational sciences. Not only the promised increase of reuse makes components attractive, but also the possibilities of integrating different stand-alone programs into a distributed application. Middleware platforms facilitate the development of distributed applications by providing services and infrastructure. Component developers can thus benefit from a common standard to shape components towards, and application designers from using pre-fabricated software components and shared platform services. Although such platforms claim to achieve fast and flexible development of distributed systems, they fall short of key requirements for reliability and interoperability in loosely coupled distributed systems. For example, many interoperability errors remain undetected during development, and the adaptation and integration of third-party components still requires major effort and cost. This problem can partly be alleviated by the use of formal approaches to automatic interoperability checks and component adaptation. Our Reliable Architecture Description Language (RADL) is aimed at precisely this problem. In this paper we present key aspects of RADL used to specify component-based, compositional views of distributed applications. RADL involves a rich component model, enabling protocol information to be contained in interfaces. We focus on protocol-based notions of interoperability and adaptation, important for the construction of distributed systems with loosely coupled components.
[12] Ralf H. Reussner and Heinz W. Schmidt. Using Parameterised Contracts to Predict Properties of Component Based Software Architectures. In Workshop On Component-Based Software Engineering (in association with 9th IEEE Conference and Workshops on Engineering of Computer-Based Systems), Lund, Sweden, 2002, Ivica Crnkovic, Stig Larsson, and Judith Stafford, editors, 2002. [ bib | .pdf ]
[13] Heinz W. Schmidt and Ralf H. Reussner. Generating Adapters for Concurrent Component Protocol Synchronisation. In Proceedings of the Fifth IFIP International Conference on Formal Methods for Open Object-Based Distributed Systems, 2002. [ bib | .pdf | Abstract ]
In general, few components are reused as they are. Often, available components are incompatible with what is required. This necessitates component adaptations or the use of adapters between components. In this paper we develop algorithms for the synthesis of adapters, coercing incompatible components into meeting requirements. We concentrate on adapters for concurrent systems, where adapters are able to resolve synchronisation problems of concurrent components. A new interface model for components, which includes protocol information, allows us to generate these adapters semi-automatically.
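A heavily simplified, sequential caricature of such adapter synthesis (the paper's algorithms handle concurrency and synchronisation, which this sketch does not): given a component's provided protocol as a deterministic FSM, an adapter can search for the shortest sequence of fill-in calls that makes a requested service legal. The protocol, states, and service names below are invented.

    from collections import deque

    # Invented provided protocol of a file component: open before read/write.
    provided = {
        "closed": {"open": "opened"},
        "opened": {"read": "opened", "write": "opened", "close": "closed"},
    }

    def enabling_sequence(fsm, state, wanted):
        """BFS for the shortest call sequence after which `wanted` is enabled."""
        queue, seen = deque([(state, [])]), {state}
        while queue:
            s, path = queue.popleft()
            if wanted in fsm[s]:
                return path
            for call, nxt in fsm[s].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [call]))
        raise RuntimeError(f"no adapter can enable {wanted!r}")

    class Adapter:
        """Coerces a client's calls into sequences legal for the protocol."""
        def __init__(self, fsm, start):
            self.fsm, self.state = fsm, start

        def call(self, service):
            for fill in enabling_sequence(self.fsm, self.state, service):
                print(f"adapter inserts {fill!r}")
                self.state = self.fsm[self.state][fill]
            self.state = self.fsm[self.state][service]
            print(f"forwarded {service!r}")

    adapter = Adapter(provided, "closed")
    adapter.call("read")   # adapter inserts 'open', then forwards 'read'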
[14] Heinz W. Schmidt and Ralf H. Reussner. Parameterised Contracts and Adaptor Synthesis. In Proceedings of the ICSE Workshop on Component-Oriented Software Engineering (CBSE5), 2002. IEEE. 2002. [ bib ]
[15] Thomas Worsch, Ralf Reussner, and Werner Augustin. On Benchmarking Collective MPI Operations. In Proceedings of the 9th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 271-279. Springer-Verlag Berlin Heidelberg, London, UK, 2002. [ bib ]


Publications 2001

[1] Heinz W. Schmidt, Iman Poernomo, and Ralf H. Reussner. Trust-By-Contract: Modelling, Analysing and Predicting Behaviour in Software Architectures. Journal of Integrated Design and Process Science, 5(3):25-51, September 2001. [ bib | Abstract ]
The increasing pressure for enterprises to join into agile business networks is changing the requirements on the enterprise computing systems. The supporting infrastructure is increasingly required to provide common facilities and societal infrastructure services to support the lifecycle of loosely-coupled, eContract-governed business networks. The required facilities include selection of those autonomously administered business services that the enterprises are prepared to provide and use, contract negotiations, and furthermore, monitoring of the contracted behaviour with potential for breach management. The essential change is in the requirement of a clear mapping between business-level concepts and the automation support for them. Our work has focused on developing B2B middleware to address the above challenges; however, the architecture is not feasible without management facilities for trust-aware decisions for entering business networks and interacting within them. This paper discusses how trust-based decisions are supported and positioned in the B2B middleware.
[2] Ralf H. Reussner. The Use of Parameterised Contracts for Architecting Systems with Software Components. In Proceedings of the Sixth International Workshop on Component-Oriented Programming (WCOP'01), Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, June 2001. [ bib | .pdf ]
[3] Ralf H. Reussner. Adapting Components and Predicting Architectural Properties with Parameterised Contracts. In Tagungsband des Arbeitstreffens der GI Fachgruppen 2.1.4 und 2.1.9, Bad Honnef, Wolfgang Goerigk, editor, 2001, pages 33-43. [ bib | .pdf ]
[4] Ralf H. Reussner. Recent Advances in SKaMPI. In High Performance Computing in Science and Engineering 2000, E. Krause and W. Jäger, editors, Transactions of the High Performance Computing Center Stuttgart (HLRS), pages 520-530. Springer-Verlag Berlin Heidelberg, 2001. [ bib | .pdf ]
[5] Ralf H. Reussner. Parametrisierte Verträge zur Protokolladaption bei Software-Komponenten. Logos Verlag, Berlin, 2001. [ bib ]
[6] Ralf H. Reussner. Parametrisierte Verträge zur Protokolladaption bei Software-Komponenten. PhD thesis, Department of Informatics, University of Karlsruhe, 2001. [ bib ]
[7] Ralf H. Reussner. Enhanced Component Interfaces to Support Dynamic Adaption and Extension. In 34th Hawaii International Conference on System Sciences, 2001. IEEE. 2001. [ bib | .pdf ]
[8] Ralf H. Reussner and Gunnar T. Hunzelmann. Achieving Performance Portability with SKaMPI for High-Performance MPI Programs. In Computational Science-ICCS 2001, Proc. of ICCS 2001, International Conference on Computational Science, Part II, Special Session on Tools and Environments for Parallel and Distributed Programming, San Francisco, CA, 2001, V. N. Alexandrov, J. J. Dongarra, B. A. Juliano, R. S. Renner, and C. J. K. Tan, editors, 2001, volume 2074 of Lecture Notes in Computer Science, pages 841-850. [ bib | .pdf | Abstract ]
Current development processes for parallel software often fail to deliver portable software. This is because these processes usually require a tedious tuning phase to deliver software of good performance. This tuning phase is often costly and results in machine-specifically tuned (i.e., less portable) software. Designing software for performance and portability in early stages of software design requires performance data for all targeted parallel hardware platforms. In this paper we present a publicly available database which contains the data necessary for software developers to design and implement portable and high-performing MPI software.
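As a flavour of where such performance data comes from (this is not SKaMPI, whose measurement machinery is far more careful), a minimal MPI micro-benchmark can be written with mpi4py; the operation, message sizes, and repetition count below are arbitrary choices for illustration.

    # Minimal MPI micro-benchmark sketch using mpi4py (not SKaMPI itself):
    # times MPI_Bcast over growing message sizes.
    # Run with e.g.: mpirun -n 4 python bench_bcast.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    REPS = 100

    for size in (1 << k for k in range(0, 21, 4)):      # 1 B .. 1 MiB
        buf = np.zeros(size, dtype="b")
        comm.Barrier()                                   # synchronise ranks
        t0 = MPI.Wtime()
        for _ in range(REPS):
            comm.Bcast(buf, root=0)
        elapsed = (MPI.Wtime() - t0) / REPS
        worst = comm.reduce(elapsed, op=MPI.MAX, root=0) # slowest rank decides
        if comm.rank == 0:
            print(f"Bcast {size:>8} B: {worst * 1e6:9.2f} us")

Collecting such numbers per platform, before tuning begins, is what lets a developer choose communication patterns that perform acceptably everywhere rather than on one machine.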
[9] Ralf H. Reussner, Peter Sanders, and Jesper Larsson Träff. SKaMPI: a comprehensive benchmark for public benchmarking of MPI. Scientific Computing, 2001. [ bib | .pdf ]
[10] Ralf H. Reussner, Peter Sanders, and Jesper Larsson Träff. Multi-Platform Benchmarking of the MPI Communications Interface. In International Workshop on Performance-oriented Application Development for Distributed Architectures, 2001. [ bib | .pdf ]


Publications 2000

[1] Ralf H. Reussner. An Enhanced Model for Component Interfaces to Support Automatic and Dynamic Adaption. In New Issues in Object Interoperability - Proceedings of the ECOOP 2000 Workshop on Object Interoperability, J. Hernández, A. Vallecillo, and J. M. Troya, editors, June 12-16, 2000, pages 33-42. Published by Universidad de Extremadura Dpto. Informática. [ bib | .pdf | Abstract ]
This paper presents a new model of software component interfaces, using an extension of finite state machines to describe (a) the protocol for using a component's offered services, and (b) the sequences of calls to the external services the component requires to fulfill its offered services. With this model we integrate information into the interface of a software component to: (a) check whether a component will be used correctly in its environment during system integration (i.e., before the component is actually used); (b) adapt the interface of a component which describes the component's offered services, in case the environment does not offer all resources the component requires to offer all its services. In this case the adapted component still offers a subset of its services, in contrast to today's component systems, which do not allow any integration at all in this case.
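The pre-integration check described in (a) boils down to a language-inclusion test between two finite automata: every call sequence the environment may issue must be accepted by the component's offered protocol. The sketch below checks this for deterministic FSMs via the product construction; both example protocols are invented.

    # Inclusion check L(use) <= L(offered) for deterministic FSMs via the
    # product construction: explore pairs of states reachable on common
    # calls; a use-side call with no offered counterpart is an error.
    def included(use, offered):
        use_fsm, use_start = use
        off_fsm, off_start = offered
        stack = [(use_start, off_start)]
        seen = {(use_start, off_start)}
        while stack:
            u, o = stack.pop()
            for call, u_next in use_fsm.get(u, {}).items():
                if call not in off_fsm.get(o, {}):
                    return False, (u, call)   # environment may call; component refuses
                pair = (u_next, off_fsm[o][call])
                if pair not in seen:
                    seen.add(pair)
                    stack.append(pair)
        return True, None

    # Invented protocols: the environment may re-query after paying,
    # but the component's offered protocol forbids it.
    use_protocol     = {"s0": {"query": "s1"}, "s1": {"pay": "s0"}}
    offered_protocol = {"p0": {"query": "p1"}, "p1": {"pay": "p2"}}

    ok, witness = included((use_protocol, "s0"), (offered_protocol, "p0"))
    print("interoperable" if ok else f"mismatch at {witness}")

A failed check returns a witness (state, call), i.e., a call the environment may legally produce that the component would refuse, which is the kind of diagnostic available at integration time, before the component is ever used.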
[2] Ralf H. Reussner. Formal Foundations of Dynamic Types for Software Components. Technical Report 08/2000, Department of Informatics, Universität Karlsruhe, 2000. [ bib | .pdf ]
[3] Ralf H. Reussner. Parameterised Contracts for Software-Component Protocols. Presentation given at Oberon Microsystems, Zürich, 2000. [ bib | .ps.gz ]
[4] Ralf H. Reussner, Jesper L. Träff, and Gunnar Hunzelmann. A Benchmark for MPI Derived Datatypes. In Recent advances in parallel virtual machine and message passing interface: 7th European PVM/MPI Users' Group Meeting, Balatonfüred, Hungary, September 10-13, 2000, J. J. Dongarra, P. Kacsuk, and N. Podhorszki, editors, 2000, volume 1908 of Lecture Notes in Computer Science, pages 10-18. [ bib | http ]
[5] Heinz W. Schmidt and Ralf H. Reussner. Automatic Component Adaptation By Concurrent State Machine Retrofitting. Technical Report 25/2000, School of Computer Science and Software Engineering, Monash University, Melbourne, Australia, 2000. This report appeared simultaneously as Technical Report No. 2000/81 of the School of Computer Science and Software Engineering, Monash University, Melbourne, Australia. [ bib ]


Publications 1999

[1] Dirk Heuzeroth and Ralf H. Reussner. Dynamic Coupling of Binary Components and its Technical Support. In First Workshop on Generative and Component-Based Software Engineering (GCSE) - Young Researchers Workshop, 1999. [ bib | .pdf | Abstract ]
The aim of today's software development is to build applications by the reuse of binary components. This requires the composition of components and, as special cases, component enhancement as well as adaption. We demonstrate how to deal with these cases by furnishing components with a type consisting of two protocols: a call protocol and a use protocol. We model these protocols by finite automata and show how those reflect component enhancement and adaption. This mechanism allows for automatic adaption of components in changing environments. In order to obtain binary components we have to compile the corresponding sources. In view of the required features of the binary components, and with the problems of compiling generic classes in mind, we describe an approach to generate such pre-compiled components by appropriate compiler extensions.
[2] Antje von Knethen, Erik Kamsties, Ralf H. Reussner, Christian Bunse, and Bin Shen. Une étude comparative de méthodes industrielles d'ingénierie des exigences. Génie Logiciel, 50:8-15, 1999. [ bib ]
[3] Dieter Kranzlmüller, Ralf H. Reussner, and Christian Schaubschläger. Monitor overhead measurement with SKaMPI. In Recent advances in parallel virtual machine and message passing interface: 6th European PVM/MPI Users' Group Meeting, Barcelona, Spain, September 26-29, 1999: proceedings, J. J. Dongarra, E. Luque, and Tomas Margalef, editors, 1999, volume 1697 of Lecture Notes in Computer Science, pages 43-50. [ bib | .pdf ]
[4] Ralf H. Reussner. Advances in SKaMPI. http://liinwww.ira.uka.de/~reussner/skampi-advances.ps, 1999. [ bib ]
[5] Ralf H. Reussner. SKaLib: SKaMPI as a library - Technical Reference Manual. Technical Report 07/99, University of Karlsruhe, 1999. [ bib | .pdf ]
[6] Ralf H. Reussner. SKaMPI: The Special Karlsruher MPI-Benchmark-User Manual. Technical Report 02/99, University of Karlsruhe, 1999. [ bib | .pdf ]
[7] Ralf H. Reussner. Dynamic Types for Software Components. In Companion of the Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA '99), 1999. Extended abstract. [ bib | .pdf ]
[8] Ralf H. Reussner and Dirk Heuzeroth. A Meta-Protocol and Type system for the Dynamic Coupling of Binary Components. In Proceedings of the OOPSLA'99 Workshop on Object Oriented Reflection and Software Engineering, 1999. [ bib | .pdf | Abstract ]
We introduce a new type system, where the type of a component consists of two protocols: a call protocol and a use protocol. We model these protocols by finite automata and show how those reflect component enhancement and adaption. Coupling is controlled by a meta-protocol, which calls the adaption and enhancement algorithms. These algorithms require type information of the components involved. This type information is provided by the meta-protocol using reflection. This mechanism allows automatic adaption of components in changing environments.


Publications 1998

[1] Antje von Knethen, Erik Kamsties, Ralf H. Reussner, Christian Bunse, and Bin Shen. A Comparative Case Study with Industrial Requirements Engineering Methods. In Proceedings of the 11th International Conference on Software Engineering and its Applications, 1998. [ bib | .pdf | Abstract ]
Numerous requirements engineering methods have been proposed to improve the quality of requirements documents as well as the developed software and to increase customer satisfaction with the final product. In this paper, we report on an explorative case study in the area of reactive systems with eight requirements engineering methods from a wide spectrum, namely, BSM, OCTOPUS, ROOM, SA/RT, SCR, SDL, UML (with OMT process), and Z. Our major finding is that the structuring mechanisms provided by a requirements engineering method, like hierarchies (e.g., ROOM) or views (e.g., UML), are directly related (1) to the number of problems found in the informal requirements during the creation of a requirements specification as well as (2) to the understandability of the final requirements specification.
[2] Ralf H. Reussner, Peter Sanders, Lutz Prechelt, and Matthias Müller. SKaMPI: A Detailed, Accurate MPI Benchmark. In Recent advances in parallel virtual machine and message passing interface: 5th European PVM/MPI Users' Group Meeting, Liverpool, UK, September 7-9, 1998, V. Alexandrov and J. J. Dongarra, editors, 1998, volume 1497 of Lecture Notes in Computer Science, pages 52-59. Springer-Verlag Berlin Heidelberg. 1998. [ bib | .pdf ]


Publications 1997

[1] Ralf H. Reussner. Portable Leistungsmessung des Message Passing Interfaces. Master's thesis, Department of Informatics, University of Karlsruhe, 1997. [ bib | .pdf ]


Publications 1996

[1] Ralf H. Reussner. Über die Simulation der Lotka-Volterra-Differentialgleichungen durch einen Zellularautomaten, 1996. [ bib | .pdf ]


Publications 1995

[1] Dieter Bär, Michael Schmidt, Ingo Redeke, Thomas Brückner, Ulrich Veigel, Ralf H. Reussner, and Lutz Prechelt. Experimentelle Methoden in der Informatik. Technical Report 38/1995, Department of Informatics, Universität Karlsruhe, 1995. [ bib | Abstract ]
This report contains the written versions of talks given at a seminar of the same name, held on 3-4 July 1995 at the Institut für Programmstrukturen und Datenorganisation under the direction of Walter Tichy, Ernst Heinz, Paul Lukowicz, and Lutz Prechelt. The articles give an overview of the possible function and significance of experimental methods in various areas of computer science, covering both their foundations in the philosophy of science and their practical application to date.
