
Publications of Klaus Krogmann

Books/Book Chapters and Edited Proceedings

[1] Ralf H. Reussner, Steffen Becker, Jens Happe, Robert Heinrich, Anne Koziolek, Heiko Koziolek, Max Kramer, and Klaus Krogmann. Modeling and Simulating Software Architectures - The Palladio Approach. MIT Press, Cambridge, MA, October 2016. [ bib | http | Abstract ]
Too often, software designers lack an understanding of the effect of design decisions on such quality attributes as performance and reliability. This necessitates costly trial-and-error testing cycles, delaying or complicating rollout. This book presents a new, quantitative architecture simulation approach to software design, which allows software engineers to model quality of service in early design stages. It presents the first simulator for software architectures, Palladio, and shows students and professionals how to model reusable, parametrized components and configured, deployed systems in order to analyze service attributes. The text details the key concepts of Palladio's domain-specific modeling language for software architecture quality and presents the corresponding development stage. It describes how quality information can be used to calibrate architecture models from which detailed simulation models are automatically derived for quality predictions. Readers will learn how to approach systematically questions about scalability, hardware resources, and efficiency. The text features a running example to illustrate tasks and methods as well as three case studies from industry. Each chapter ends with exercises, suggestions for further reading, and "takeaways" that summarize the key points of the chapter. The simulator can be downloaded from a companion website, which offers additional material. The book can be used in graduate courses on software architecture, quality engineering, or performance engineering. It will also be an essential resource for software architects and software engineers and for practitioners who want to apply Palladio in industrial settings.
[2] Lorenzo Blasi, Gunnar Brataas, Michael Boniface, Joe Butler, Francesco D'andria, Michel Drescher, Ricardo Jimenez, Klaus Krogmann, George Kousiouris, Bastian Koller, Giada Landi, Francesco Matera, Andreas Menychtas, Karsten Oberle, Stephen Phillips, Luca Rea, Paolo Romano, Michael Symonds, and Wolfgang Ziegler. Cloud Computing Service Level Agreements - Exploitation of Research Results. European Commission Directorate General Communications Networks, Content and Technology, June 2013. [ bib | http | Abstract ]
The rapid evolution of the cloud market is leading to the emergence of new services, new ways for service provisioning and new interaction and collaboration models both amongst cloud providers and service ecosystems exploiting cloud resources. Service Level Agreements (SLAs) govern the aforementioned relationships by defining the terms of engagement for the participating entities.
[3] Klaus Krogmann, Markus Kremer, Ralf Knobloch, Siegfried Florek, and Rainer Schmidt. Serviceorientierte Architekturen in der Cloud - Leitfaden und Nachschlagewerk, chapter Technische Konzepte, pages 64-77. BITKOM, version 1.0 edition, 2013. [ bib | http | .pdf | Abstract ]
This chapter highlights important technical concepts that are essential to cloud computing. Understanding the problems solved by these concepts is necessary in order to be able to assess the different forms and variants of cloud offerings.
[4] Klaus Krogmann. Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis, volume 4 of The Karlsruhe Series on Software Design and Quality. KIT Scientific Publishing, 2012. [ bib | DOI | http | Abstract ]
Model-based performance prediction systematically deals with the evaluation of software performance to avoid, for example, bottlenecks, estimate execution environment sizing, or identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. This book describes a new integrated reverse engineering approach for the reconstruction of parameterised software performance models (software component architecture and behaviour).
[5] Achim Baier, Steffen Becker, Martin Jung, Klaus Krogmann, Carsten Röttgers, Niels Streekmann, Karsten Thoms, and Steffen Zschaler. Handbuch der Software-Architektur, chapter Modellgetriebene Software-Entwicklung, pages 93-122. dpunkt.verlag, Heidelberg, 2nd edition, December 2008. [ bib ]
[6] Sebastian Herold, Holger Klus, Yannick Welsch, Constanze Deiters, Andreas Rausch, Ralf Reussner, Klaus Krogmann, Heiko Koziolek, Raffaela Mirandola, Benjamin Hummel, Michael Meisinger, and Christian Pfaller. The Common Component Modeling Example, volume 5153 of Lecture Notes in Computer Science, chapter CoCoME - The Common Component Modeling Example, pages 16-53. Springer-Verlag Berlin Heidelberg, 2008. [ bib | http | Abstract ]
The example of use which was chosen as the Common Component Modeling Example (CoCoME), and on which the several methods presented in this book should be applied, was designed according to the example described by Larman in [1]. The description of this example and its use cases in the current chapter should be read under the assumption that this information was delivered by a business company, as it could be in reality. Therefore, the specified requirements are potentially incomplete or imprecise.
[7] Klaus Krogmann and Ralf H. Reussner. The Common Component Modeling Example, volume 5153 of Lecture Notes in Computer Science, chapter Palladio: Prediction of Performance Properties, pages 297-326. Springer-Verlag Berlin Heidelberg, 2008. [ bib | http | Abstract ]
Palladio is a component modelling approach with a focus on performance (i.e. response time, throughput, resource utilisation) analysis to enable early design-time evaluation of software architectures. It targets modelling business information systems. The Palladio approach includes a meta-model called Palladio Component Model for structural views, component behaviour specifications, resource environment, component allocation and the modelling of system usage and multiple analysis techniques ranging from process algebra analysis to discrete event simulation. Additionally, the Palladio approach is aligned with a development process model tailored for component-based software systems. Early design-time predictions avoid costly redesigns and reimplementation. Palladio enables software architects to analyse different architectural design alternatives supporting their design decisions with quantitative performance predictions, provided with the Palladio approach.

Refereed Journal Articles

[1] Klaus Krogmann, Matthias Naab, and Oliver Hummel. Agile Anti-Patterns - Warum viele Organisationen weniger agil sind, als sie denken. Entwickler Magazin Spezial: Agilität, 3, March 2015. re-published article. [ bib | .html ]
[2] Klaus Krogmann, Matthias Naab, and Oliver Hummel. Agile Anti-Patterns - Warum viele Organisationen weniger agil sind, als sie denken. JAXenter Business Technology, 2.14:29-34, June 2014. [ bib | http | Abstract ]
With the growing adoption of agile software development, the number of problematic projects is rising as well. Goals such as reacting quickly to change requests are not met, even though agile principles are (ostensibly) being followed. In this article we summarize recurring experiences from practice in the form of anti-patterns and describe how agile development has repeatedly been practiced too dogmatically or misused as an excuse for poor project organization. These anti-patterns allow readers to check their own projects for similar shortcomings and, where necessary, to act against them.
[3] Martin Küster and Klaus Krogmann. Checkable Code Decisions to Support Software Evolution. Softwaretechnik-Trends, 34(2):58-59, May 2014. [ bib | .pdf | Abstract ]
For the evolution of software, understanding of the context, i.e. history and rationale of the existing artifacts, is crucial to avoid "ignorant surgery", i.e. modifications to the software without understanding its design intent. Existing works on recording architecture decisions have mostly focused on architectural models. We extend this to code models and introduce a catalog of code decisions that can be found in object-oriented systems. With the presented approach, we make it possible to record design decisions that are concerned with the decomposition of the system into interfaces, classes, and references between them, or how exceptions are handled. Furthermore, we indicate how decisions on the usage of Java frameworks (e.g. for dependency injection) can be recorded. All decision types presented are supplied with OCL constraints to check the validity of the decision based on the linked code model.
[4] Benjamin Klatt, Klaus Krogmann, and Volker Kuttruff. Developing Stop Word Lists for Natural Language Program Analysis. Softwaretechnik-Trends, 34(2):85-86, May 2014. [ bib | .pdf | Abstract ]
When implementing software, developers express conceptual knowledge (e.g. about a specific feature) not only in programming language syntax and semantics but also in linguistic information stored in identifiers (e.g. method or class names). Based on this habit, Natural Language Program Analysis (NLPA) is used to improve many different areas in software engineering, such as code recommendations or program analysis. Simplified, NLPA algorithms collect identifier names and apply term processing such as camel case splitting (i.e. "MyIdentifier" to "My" and "Identifier") or stemming (i.e. "records" to "record") to subsequently perform further analyses. In our research context, we search for code locations sharing similar terms to link them with each other. In such types of analysis, filtering stop words is essential to reduce the number of useless links.
[5] Benjamin Klatt, Klaus Krogmann, and Christian Wende. Consolidating Customized Product Copies to Software Product Lines. Softwaretechnik-Trends, 34(2):64-65, May 2014. [ bib | .pdf | Abstract ]
Reusing existing software solutions as a starting point for new projects is a frequent approach in the software business. Copying existing code and adapting it to customer-specific needs allows for flexible and efficient software customization in the short term. But in the long term, a Software Product Line (SPL) approach with a single code base and explicitly managed variability reduces maintenance effort and eases instantiation of new products.
[6] Benjamin Klatt, Martin Küster, Klaus Krogmann, and Oliver Burkhardt. A Change Impact Analysis Case Study: Replacing the Input Data Model of SoMoX. Softwaretechnik-Trends, 33(2):53-54, May 2013, Köllen Druck & Verlag GmbH. [ bib | .pdf | Abstract ]
Change impact analysis aims to provide insights about the efforts and effects to be expected from a change, and to prevent missed adaptations. However, the benefit of applying an analysis in a given scenario is not clear. Only a few studies about change impact analysis approaches compare the actual effort spent implementing the change with the prediction of the analysis. To gain more insight about change impact analysis benefits, we have performed a case study on changing a software's input data model. We have applied two analyses, using the Java compiler and a dependency-graph-based approach, before implementing the actual change. In this paper, we present the results, showing that i) syntactically required changes have been predicted adequately, ii) changes required for semantic correctness required the major effort but were not predicted at all, and iii) tool support for change impact analysis still needs to be improved.
[7] Benjamin Klatt and Klaus Krogmann. Model-Driven Product Consolidation into Software Product Lines. Softwaretechnik-Trends, 32(2):13-14, March 2012, Köllen Druck & Verlag GmbH, Bamberg, Germany. [ bib | http | .pdf ]
[8] Benjamin Klatt and Klaus Krogmann. Towards Tool-Support for Evolutionary Software Product Line Development. Softwaretechnik-Trends, 31(2):38-39, May 2011, Köllen Druck & Verlag GmbH, Bad-Honnef, Germany. [ bib | .pdf | Abstract ]
Software vendors often need to vary their products to satisfy customer-specific requirements. In many cases, existing code is reused and adapted to the new project needs. This copy-and-paste course of action leads to a multi-product code base that is hard to maintain. Software Product Lines (SPL) emerged as an appropriate concept to manage product families with common functionality and code bases. Evolutionary SPLs, with a product-first approach and an exposed product line, provide advantages such as a reduced time-to-market and SPLs based on evaluated and proven products.
[9] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Using Genetic Search for Reverse Engineering of Parametric Behaviour Models for Performance Prediction. IEEE Transactions on Software Engineering, 36(6):865-877, 2010, IEEE. [ bib | DOI | .pdf | Abstract ]
In component-based software engineering, existing components are often re-used in new applications. Correspondingly, the response time of an entire component-based application can be predicted from the execution durations of individual component services. These execution durations depend on the runtime behaviour of a component, which itself is influenced by three factors: the execution platform, the usage profile, and the component wiring. To cover all relevant combinations of these influencing factors, conventional prediction of response times requires repeated deployment and measurements of component services for all such combinations, incurring a substantial effort. This paper presents a novel comprehensive approach for reverse engineering and performance prediction of components. In it, genetic programming is utilised for reconstructing a behaviour model from monitoring data, runtime bytecode counts and static bytecode analysis. The resulting behaviour model is parametrised over all three performance-influencing factors, which are specified separately. This results in significantly fewer measurements: the behaviour model is reconstructed only once per component service, and one application-independent bytecode benchmark run is sufficient to characterise an execution platform. To predict the execution durations for a concrete platform, our approach combines the behaviour model with platform-specific benchmarking results. We validate our approach by predicting the performance of a file sharing application.
[10] Klaus Krogmann and Ralf Reussner. Reverse Engineering von Software-Komponentenverhalten mittels Genetischer Programmierung. Softwaretechnik-Trends, 29(2):22-24, May 2009. [ bib | .pdf | Abstract ]
The use of components is a well-established principle in software development. Software components are usually regarded as black boxes whose internals are hidden from the component user. Architecture analysis techniques for predicting non-functional properties make it possible, for example, to answer sizing questions for hardware/software environments at the architecture level and to perform scalability analyses and what-if scenarios for the extension of legacy systems. To do so, however, they require information about component internals (e.g. the number of loop iterations executed or calls to external services). To obtain such information, existing software components have to be analysed. The required information about the inside of the components must be reconstructed in such a way that it can be used by subsequent analysis techniques for non-functional properties. A manual reconstruction of such models often fails because of the size of the systems and is very error-prone, since consistent abstractions over potentially thousands of lines of code have to be found. Existing approaches do not provide the data-flow and control-flow abstractions needed for analyses and simulations. The contribution of this paper is a reverse engineering approach for component behaviour. The resulting models (Palladio Component Model) are suitable for predicting performance properties (response time, throughput) and thus for the questions raised above. The models reconstructed from source code cover parameterised control and data flow for software components and represent an abstraction of the real relationships in the source code. The reverse engineering approach combines static and dynamic analysis techniques by means of genetic programming (a form of machine learning).
[11] Klaus Krogmann. Reengineering von Software-Komponenten zur Vorhersage von Dienstgüte-Eigenschaften. Softwaretechnik-Trends, 27(2):44-45, May 2007. [ bib | .pdf | Abstract ]
The use of components is a well-established principle in software development. Software components are usually regarded as black boxes whose internals are hidden from the component user. Numerous architecture analysis techniques, in particular those for predicting non-functional properties, nevertheless require information about component internals (e.g. the number of loop iterations executed or calls to external services) that many component models do not provide. Researchers currently working on the analysis of non-functional properties of component-based software architectures therefore face the question of how to obtain this knowledge about component internals. Existing software components have to be analysed in order to reconstruct the required information about the inside of the components in such a way that it can be used by subsequent analysis techniques for non-functional properties. Existing approaches concentrate on the detection of components or, for example, on reengineering sequence diagrams of given components, but do not focus on the information needed by prediction techniques for non-functional properties. The contribution of this paper is a close examination of the information that reengineering of component internals must deliver in order to be useful for predicting the non-functional property performance (in the sense of response time). To this end, the Palladio Component Model [?] is presented, which is prepared for exactly this information. Finally, a reengineering approach is presented that is suited to obtaining the required information.

Refereed Conference/Workshop Papers

[1] Michael Langhammer and Klaus Krogmann. A co-evolution approach for source code and component-based architecture models. In 17. Workshop Software-Reengineering und Evolution, 2015, volume 4. [ bib | http ]
[2] Benjamin Klatt, Klaus Krogmann, and Christoph Seidl. Program Dependency Analysis for Consolidating Customized Product Copies. In IEEE 30th International Conference on Software Maintenance and Evolution (ICSME'14), September 2014, pages 496-500. Victoria, Canada. [ bib | DOI | Abstract ]
To cope with project constraints, copying and customizing existing software products is a typical practice to flexibly serve customer-specific needs. In the long term, this practice becomes a limitation for growth due to redundant maintenance efforts or wasted synergy and cross-selling potentials. To mitigate this limitation, customized copies need to be consolidated into a single, variable code base of a software product line (SPL). However, consolidation is tedious as one must identify and correlate differences between the copies to design future variability. For one, existing consolidation approaches lack support of the implementation level. In addition, approaches in the fields of difference analysis and feature detection are not sufficiently integrated for finding relationships between code modifications. In this paper, we present a remedy to this problem by integrating a difference analysis with a program dependency analysis based on Program Dependency Graphs (PDG) to reduce the effort of consolidating developers when identifying dependent differences and deriving clusters to consider in their variability design. We successfully evaluated our approach on variants of the open source ArgoUML modeling tool, reducing the manual review effort by about 72% with a precision of 99% and a recall of 80%. We further proved its industrial applicability in a case study on a commercial relationship management application.
[3] Benjamin Klatt, Klaus Krogmann, and Christian Wende. Consolidating Customized Product Copies to Software Product Lines. In 16th Workshop Software-Reengineering (WSRE'14), April 2014. Bad Honnef, Germany. [ bib | .pdf ]
[4] Benjamin Klatt, Klaus Krogmann, and Volker Kuttruff. Developing Stop Word Lists for Natural Language Program Analysis. In 16th Workshop Software-Reengineering (WSRE'14), April 2014. Bad Honnef, Germany. [ bib | .pdf ]
[5] Benjamin Klatt, Klaus Krogmann, and Michael Langhammer. Individual Code-Analyzes in Practice. In Proceedings of Software Engineering 2014 (SE2014), Wilhelm Hasselbring and Nils Christian Ehmke, editors, January 2014, volume P-227 of Lecture Notes in Informatics (LNI). Kiel, Germany. [ bib | .pdf ]
[6] P-O Östberg, Henning Groenda, Stefan Wesner, James Byrne, Dimitrios S. Nikolopoulos, Craig Sheridan, Jakub Krzywda, Ahmed Ali-Eldin, Johan Tordsson, Erik Elmroth, Christian Stier, Klaus Krogmann, Jörg Domaschka, Christopher Hauser, PJ Byrne, Sergej Svorobej, Barry McCollum, Zafeirios Papazachos, Loke Johannessen, Stephan Rüth, and Dragana Paurevic. The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation. In Proceedings of the Sixth IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2014, pages 26-31. IEEE Computer Society, Singapore. 2014. [ bib | DOI ]
[7] Benjamin Klatt, Martin Küster, Klaus Krogmann, and Oliver Burkhardt. A Change Impact Analysis Case Study: Replacing the Input Data Model of SoMoX. In 15th Workshop Software-Reengineering (WSR'13), May 2013. Bad Honnef, Germany. [ bib | .pdf ]
[8] Benjamin Klatt, Martin Küster, and Klaus Krogmann. A Graph-Based Analysis Concept to Derive a Variation Point Design from Product Copies. In Proceedings of the 1st International Workshop on Reverse Variability Engineering (REVE'13), March 2013, pages 1-8. Genoa, Italy. [ bib | .pdf ]
[9] Christoph Rathfelder, Stefan Becker, Klaus Krogmann, and Ralf Reussner. Workload-aware system monitoring using performance predictions applied to a large-scale e-mail system. In Proceedings of the Joint 10th Working IEEE/IFIP Conference on Software Architecture (WICSA) & 6th European Conference on Software Architecture (ECSA), Helsinki, Finland, August 2012, pages 31-40. IEEE Computer Society, Washington, DC, USA. August 2012, Acceptance Rate (Full Paper): 19.8%. [ bib | DOI | http | .pdf ]
[10] Benjamin Klatt, Zoya Durdik, Klaus Krogmann, Heiko Koziolek, Johannes Stammel, and Roland Weiss. Identify Impacts of Evolving Third Party Components on Long-Living Software Systems. In Proceedings of the 16th Conference on Software Maintenance and Reengineering (CSMR'12), March 2012, pages 461-464. Szeged, Hungary. [ bib | DOI | .pdf | Abstract ]
Integrating 3rd party components in software systems provides promising advantages but also risks due to disconnected evolution cycles. Deciding whether to migrate to a newer version of a 3rd party component integrated into self-implemented code or to switch to a different one is challenging. Dedicated evolution support for 3rd party component scenarios is hence required. Existing approaches do not account for open source components which allow accessing and analyzing their source code and project information. The approach presented in this paper combines analyses for code dependency, code quality, and bug tracker information for a holistic view on the evolution with 3rd party components. We applied the approach in a case study on a communication middleware component for industrial devices used at ABB. We identified 7 methods potentially impacted by changes of 3rd party components despite the absence of interface changes. We further identified self-implemented code that does not need any manual investigation after the 3rd party component evolution as well as a positive trend of code and bug tracker issues.
[11] Zoya Durdik, Benjamin Klatt, Heiko Koziolek, Klaus Krogmann, Johannes Stammel, and Roland Weiss. Sustainability guidelines for long-living software systems. In Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM), 2012. Trento, Italy. [ bib | http ]
[12] Benjamin Klatt and Klaus Krogmann. Towards Tool-Support for Evolutionary Software Product Line Development. In 13th Workshop Software-Reengineering (WSR 2011), May 02-04 2011. Bad-Honnef, Germany. [ bib | .pdf ]
[13] Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, and Anne Koziolek. An industrial case study on quality impact prediction for evolving service-oriented software. In Proceeding of the 33rd international conference on Software engineering (ICSE 2011), Software Engineering in Practice Track, Richard N. Taylor, Harald Gall, and Nenad Medvidovic, editors, Waikiki, Honolulu, HI, USA, 2011, pages 776-785. ACM, New York, NY, USA. 2011, Acceptance Rate: 18% (18/100). [ bib | DOI | http | Abstract ]
Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.
[14] Heiko Koziolek, Roland Weiss, Zoya Durdik, Johannes Stammel, and Klaus Krogmann. Towards Software Sustainability Guidelines for Long-living Industrial Systems. In Proceedings of Software Engineering (Workshops), 3rd Workshop of GI Working Group Long-living Software Systems (L2S2), Design for Future, 2011, volume 184 of LNI, pages 47-58. GI. 2011. [ bib | .pdf | Abstract ]
Long-living software systems are sustainable if they can be cost-effectively maintained and evolved over their complete life-cycle. Software-intensive systems in the industrial automation domain are typically long-living and cause high evolution costs, because of new customer requirements, technology changes, and failure reports. Many methods for sustainable software development have been proposed in the scientific literature, but most of them are not applied in industrial practice. We identified typical evolution scenarios in the industrial automation domain and conducted an extensive literature search to extract a number of guidelines for sustainable software development based on the methods found in literature. For validation purposes, we map one evolution scenario to these guidelines in this paper.
[15] Steffen Becker, Michael Hauck, Mircea Trifu, Klaus Krogmann, and Jan Kofron. Reverse Engineering Component Models for Quality Predictions. In Proceedings of the 14th European Conference on Software Maintenance and Reengineering, European Projects Track, 2010, pages 199-202. IEEE. 2010. [ bib | .pdf | Abstract ]
Legacy applications are still widely spread. If a need to change deployment or update its functionality arises, it becomes difficult to estimate the performance impact of such modifications due to absence of corresponding models. In this paper, we present an extendable integrated environment based on Eclipse developed in the scope of the Q-ImPrESS project for reverse engineering of legacy applications (in C/C++/Java). The Q-ImPrESS project aims at modeling quality attributes at an architectural level and allows for choosing the most suitable variant for implementation of a desired modification. The main contributions of the project include i) a high integration of all steps of the entire process into a single tool, a beta version of which has been already successfully tested on a case study, ii) integration of multiple research approaches to performance modeling, and iii) an extendable underlying meta-model for different quality dimensions.
[16] Frank Eichinger, Klaus Krogmann, Roland Klug, and Klemens Böhm. Software-Defect Localisation by Mining Dataflow-Enabled Call Graphs. In Proceedings of the 10th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2010. Barcelona, Spain. [ bib | http | Abstract ]
Defect localisation is essential in software engineering and is an important task in domain-specific data mining. Existing techniques building on call-graph mining can localise different kinds of defects. However, these techniques focus on defects that affect the control flow and are agnostic regarding the data flow. In this paper, we introduce data flow enabled call graphs that incorporate abstractions of the data flow. Building on these graphs, we present an approach for defect localisation. The creation of the graphs and the defect localisation are essentially data mining problems, making use of discretisation, frequent subgraph mining and feature selection. We demonstrate the defect-localisation qualities of our approach with a study on defects introduced into Weka. As a result, defect localisation now works much better, and a developer has to investigate on average only 1.5 out of 30 methods to fix a defect.
[17] Fabian Brosig, Samuel Kounev, and Klaus Krogmann. Automated Extraction of Palladio Component Models from Running Enterprise Java Applications. In Proceedings of the 1st International Workshop on Run-time mOdels for Self-managing Systems and Applications (ROSSA 2009). In conjunction with the Fourth International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2009), Pisa, Italy, 2009, pages 10:1-10:10. ACM, New York, NY, USA. 2009. [ bib | .pdf | Abstract ]
Nowadays, software systems have to fulfill increasingly stringent requirements for performance and scalability. To ensure that a system meets its performance requirements during operation, the ability to predict its performance under different configurations and workloads is essential. Most performance analysis tools currently used in industry focus on monitoring the current system state. They provide low-level monitoring data without any performance prediction capabilities. For performance prediction, performance models are normally required. However, building predictive performance models manually requires a lot of time and effort. In this paper, we present a method for automated extraction of performance models of Java EE applications, based on monitoring data collected during operation. We extract instances of the Palladio Component Model (PCM) - a performance meta-model targeted at component-based systems. We evaluate the model extraction method in the context of a case study with a real-world enterprise application. Even though the extraction requires some manual intervention, the case study demonstrates that the existing gap between low-level monitoring data and high-level performance models can be closed.
[18] Michael Hauck, Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Modelling Layered Component Execution Environments for Performance Prediction. In Proceedings of the 12th International Symposium on Component Based Software Engineering (CBSE 2009), 2009, number 5582 in LNCS, pages 191-208. Springer. 2009. [ bib | DOI | .html | .pdf | Abstract ]
Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly popular in reality and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools which allow to model and predict state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.
[19] Klaus Krogmann, Christian M. Schweda, Sabine Buckl, Michael Kuperberg, Anne Martens, and Florian Matthes. Improved Feedback for Architectural Performance Prediction using Software Cartography Visualizations. In Architectures for Adaptive Systems (Proceedings of QoSA 2009), Raffaela Mirandola, Ian Gorton, and Christine Hofmeister, editors, 2009, volume 5581 of Lecture Notes in Computer Science, pages 52-69. Springer. 2009, Best Paper Award. [ bib | DOI | http | Abstract ]
Software performance engineering provides techniques to analyze and predict the performance (e.g., response time or resource utilization) of software systems to avoid implementations with insufficient performance. These techniques operate on models of software, often at an architectural level, to enable early, design-time predictions for evaluating design alternatives. Current software performance engineering approaches allow the prediction of performance at design time, but often provide cryptic results (e.g., lengths of queues). These prediction results can be hardly mapped back to the software architecture by humans, making it hard to derive the right design decisions. In this paper, we integrate software cartography (a map technique) with software performance engineering to overcome the limited interpretability of raw performance prediction results. Our approach is based on model transformations and a general software visualization approach. It provides an intuitive mapping of prediction results to the software architecture which simplifies design decisions. We successfully evaluated our approach in a quasi experiment involving 41 participants by comparing the correctness of performance-improving design decisions and participants' time effort using our novel approach to an existing software performance visualization.
[20] Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Performance Prediction for Black-Box Components using Reengineered Parametric Behaviour Models. In Proceedings of the 11th International Symposium on Component Based Software Engineering (CBSE 2008), Karlsruhe, Germany, 14th-17th October 2008, October 2008, volume 5282 of Lecture Notes in Computer Science, pages 48-63. Springer-Verlag Berlin Heidelberg. October 2008. [ bib | .pdf | Abstract ]
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
[21] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Reverse Engineering of Parametric Behavioural Service Performance Models from Black-Box Components. In MDD, SOA und IT-Management (MSI 2008), Ulrike Steffens, Jan Stefan Addicks, and Niels Streekmann, editors, September 2008, pages 57-71. GITO Verlag, Oldenburg. September 2008. [ bib | .pdf | Abstract ]
Integrating heterogeneous software systems becomes increasingly important. It requires combining existing components to form new applications. Such new applications are required to satisfy non-functional properties, such as performance. Design-time performance prediction of new applications built from existing components helps to compare design decisions before actually implementing them in full, avoiding costly prototype and glue code creation. But design-time performance prediction requires understanding and modeling of data flow and control flow across component boundaries, which is not given for most black-box components. If, for example, one component processes and forwards files to other components, this effect should be an explicit model parameter to correctly capture its performance impact. This impact should also be parameterised over data, but no reverse engineering approach exists to recover such dependencies. In this paper, we present an approach that allows reverse engineering of such behavioural models, which is applicable for black-box components. By runtime monitoring and application of genetic programming, we recover functional dependencies in code, which then are expressed as parameterisation in the output model. We successfully validated our approach in a case study on a file sharing application, showing that all dependencies could correctly be reverse engineered from black-box components.
[22] Landry Chouambe, Benjamin Klatt, and Klaus Krogmann. Reverse Engineering Software-Models of Component-Based Systems. In 12th European Conference on Software Maintenance and Reengineering, Kostas Kontogiannis, Christos Tjortjis, and Andreas Winter, editors, April 1-4 2008, pages 93-102. IEEE Computer Society, Athens, Greece. April 1-4 2008. [ bib | .pdf | Abstract ]
An increasing number of software systems is developed using component technologies such as COM, CORBA, or EJB. Still, there is a lack of support to reverse engineer such systems. Existing approaches claim reverse engineering of components, but do not support composite components. Also, external dependencies such as required interfaces are not made explicit. Furthermore, relaxed component definitions are used, and obtained components are thus indistinguishable from modules or classes. We present an iterative reverse engineering approach that follows the widely used definition of components by Szyperski. It enables third-party reuse of components by explicitly stating their interfaces and supports composition of components. Additionally, components that are reverse engineered with the approach allow reasoning on properties of software architectures at the model level. For the approach, source code metrics are combined to recognize components. We discuss the selection of source code metrics and their interdependencies, which were explicitly taken into account. An implementation of the approach was successfully validated within four case studies. Additionally, a fifth case study shows the scalability of the approach for an industrial-size system.
[23] Thomas Kappler, Heiko Koziolek, Klaus Krogmann, and Ralf H. Reussner. Towards Automatic Construction of Reusable Prediction Models for Component-Based Performance Engineering. In Software Engineering 2008, February 18-22 2008, volume 121 of Lecture Notes in Informatics, pages 140-154. Bonner Köllen Verlag, Munich, Germany. February 18-22 2008. [ bib | .pdf | Abstract ]
Performance predictions for software architectures can reveal performance bottlenecks and quantitatively support design decisions for different architectural alternatives. As software architects aim at reusing existing software components, their performance properties should be included into performance predictions without the need for manual modelling. However, most prediction approaches do not include automated support for modelling implemented components. Therefore, we propose a new reverse engineering approach, which generates Palladio performance models from Java code. In this paper, we focus on the static analysis of Java code, which we have implemented as an Eclipse plugin called Java2PCM. We evaluated our approach on a larger component-based software architecture, and show that a similar prediction accuracy can be achieved with generated models compared to completely manually specified ones.
[24] Benjamin Klatt and Klaus Krogmann. Software Extension Mechanisms. In Proceedings of the Thirteenth International Workshop on Component-Oriented Programming (WCOP'08), Karlsruhe, Germany, Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, 2008, number 2008-12 in Interner Bereich Universität Karlsruhe (TH), pages 11-18. [ bib | .pdf | Abstract ]
Industrial software projects not only have to deal with the number of features in the software system; issues like quality, flexibility, reusability, extensibility, and developer and user acceptance are also key factors these days. An architecture paradigm targeting those issues is extension mechanisms, which are, for example, used by component frameworks. The main contribution of this paper is to identify software extension mechanism characteristics derived from state-of-the-art software frameworks. These identified characteristics will help developers select and create extension mechanisms.
[25] Klaus Krogmann. Reengineering of Software Component Models to Enable Architectural Quality of Service Predictions. In Proceedings of the 12th International Workshop on Component Oriented Programming (WCOP 2007), Ralf H. Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 31 2007, volume 2007-13 of Interne Berichte, pages 23-29. Universität Karlsruhe (TH), Karlsruhe, Germany. July 31 2007. [ bib | http | Abstract ]
In this paper, we propose to relate model-based adaptation approaches with the Windows Workflow Foundation (WF) implementation platform, through a simple case study. We successively introduce a client/server system with mismatching components implemented in WF, our formal approach to work mismatch cases out, and the resulting WF adaptor. We end with some conclusions and a list of open issues.
[26] Klaus Krogmann. Reengineering von Software-Komponenten zur Vorhersage von Dienstgüte-Eigenschaften. In WSR2007, Rainer Gimnich and Andreas Winter, editors, May 2007, number 1/2007 in Mainzer Informatik-Berichte. Bad Honnef. [ bib | .pdf | Abstract ]
The use of components is a well-established principle in software development. Software components are usually regarded as black boxes whose internals are hidden from the component user. Numerous architecture analysis techniques, in particular those for predicting non-functional properties, nevertheless require information about component internals (e.g. the number of loop iterations executed or calls to external services) that many component models do not provide. Researchers currently working on the analysis of non-functional properties of component-based software architectures therefore face the question of how to obtain this knowledge about component internals. Existing software components have to be analysed in order to reconstruct the required information about the inside of the components in such a way that it can be used by subsequent analysis techniques for non-functional properties. Existing approaches concentrate on the detection of components or, for example, on reengineering sequence diagrams of given components, but do not focus on the information needed by prediction techniques for non-functional properties. The contribution of this paper is a close examination of the information that reengineering of component internals must deliver in order to be useful for predicting the non-functional property performance (in the sense of response time). To this end, the Palladio Component Model [?] is presented, which is prepared for exactly this information. Finally, a reengineering approach is presented that is suited to obtaining the required information.
[27] Klaus Krogmann and Steffen Becker. A Case Study on Model-Driven and Conventional Software Development: The Palladio Editor. In Software Engineering 2007 - Beiträge zu den Workshops, Wolf-Gideon Bleek, Henning Schwentner, and Heinz Züllighoven, editors, March 27 2007, volume 106 of Lecture Notes in Informatics, pages 169-176. Series of the Gesellschaft für Informatik (GI). March 27 2007. [ bib | .pdf | Abstract ]
The actual benefits of model-driven approaches compared to code-centric development have not been systematically investigated. This paper presents a case study in which functionally identical software was developed once in a code-centric, conventional style and once using Eclipse-based model-driven development tools. In our specific case, the model-driven approach could be carried out in 11% of the time of the conventional approach, while simultaneously improving code quality.

Technical Reports

[1] Sascha Alpers, Henning Groenda, and Klaus Krogmann. Orientierungswissen für Dienstanbieter zur Standardauswahl für Cloud-Dienstleistungen. Technical Report 5, Bundesministerium für Wirtschaft und Energie (BMWi), Kompetenzzentrum Trusted Cloud, April 2015. [ bib | .pdf ]
[2] Zoya Durdik, Klaus Krogmann, and Felix Schad. Towards a generic approach for meta-model- and domain-independent model variability. Karlsruhe Reports in Informatics 2012,5, ISSN: 2190-4782, Karlsruhe, Germany, 2012. [ bib | http | Abstract ]
Variability originates from product line engineering and is an important part of today's software development. However, existing approaches mostly concentrate only on the variability in software product lines, and are usually not universal enough to consider variability in other development activities (e.g., modelling and hardware). Additionally, the complexity of variability in software is generally hard to capture and to handle. We propose a generic model-based solution which can generally handle variability on Ecore-based meta-models. The approach includes a formal description for variability, a way to express the configuration of variants, a compact DSL to describe the semantics of model variability and model-to-model transformations, and an engine which transforms input models into models with injected variability. This work provides a complete and domain-independent solution for variability handling. The applicability of the proposed approach will be validated in two case studies, considering the two independent domains of mobile platforms and architecture knowledge reuse.
[3] Ralf Reussner, Steffen Becker, Erik Burger, Jens Happe, Michael Hauck, Anne Koziolek, Heiko Koziolek, Klaus Krogmann, and Michael Kuperberg. The Palladio Component Model. Technical report, KIT, Fakultät für Informatik, Karlsruhe, 2011. [ bib | http | Abstract ]
This report introduces the Palladio Component Model (PCM), a novel software component model for business information systems, which is specifically tuned to enable model-driven quality-of-service (QoS, i.e., performance and reliability) predictions. The PCM's goal is to assess the expected response times, throughput, and resource utilization of component-based software architectures during early development stages. This shall avoid costly redesigns, which might occur after a poorly designed architecture has been implemented. Software architects should be enabled to analyse different architectural design alternatives and to support their design decisions with quantitative results from performance or reliability analysis tools.
[4] Johannes Stammel, Zoya Durdik, Klaus Krogmann, Roland Weiss, and Heiko Koziolek. Software Evolution for Industrial Automation Systems: Literature Overview. Karlsruhe Reports in Informatics 2011,2, Karlsruhe, Germany, 2011. [ bib | http | .pdf | Abstract ]
In this document we collect and classify literature with respect to software evolution. The main objective is to get an overview of approaches for the evolution of sustainable software systems with focus on the domain of industrial process control systems.
[5] Franz Brosch, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Pierre Parrend, Ralf Reussner, Johannes Stammel, and Emre Taspolatoglu. Software-Industrialisierung. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2009. Interner Bericht. [ bib | http | Abstract ]
The industrialisation of software development is currently a heavily discussed topic. It is mainly about increasing efficiency by raising the degree of standardisation, the degree of automation, and the division of labour. This affects both the architectures underlying software systems and the development processes. Service-oriented architectures, for instance, are an example of increased standardisation within software systems. It has to be kept in mind that the software industry differs from the classical manufacturing industries in that software is an immaterial product and can therefore be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is poorly structured and does not meet elevated requirements with respect to reproducibility or quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, as a consequence, the creation of products with guaranteeable properties. The range of seminar topics included, among others: component-based software architectures; model-driven software development: concepts and technologies; and industrial software development processes and their assessment. The seminar was organised like a scientific conference: submissions were reviewed in a two-stage peer-review process. In the first stage, the student papers were reviewed by fellow students; in the second stage, by the supervisors. The articles were presented in several sessions, as at a conference. The best contributions were honoured with two best paper awards, which went to Tom Beyer for his paper "Realoptionen für Entscheidungen in der Software-Entwicklung" and to Philipp Meier for his paper "Assessment Methods for Software Product Lines". The participants' talks were complemented by two invited talks: Collin Rogowski of 1&1 Internet AG presented the agile software development process used for the GMX.COM mail product, and Heiko Koziolek, Wolfgang Mahnke, and Michaela Saeftel of ABB spoke about software product line engineering based on the robotics applications developed at ABB.
[6] Franz Brosch, Thomas Goldschmidt, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Christoph Rathfelder, Ralf Reussner, and Johannes Stammel. Software-Industrialisierung. Interner Bericht, Universität Karlsruhe, Fakultät für Informatik, Institut für Programmstrukturen und Datenorganisation, Karlsruhe, 2008. [ bib | http | Abstract ]
The industrialisation of software development is currently a heavily discussed topic. It is mainly about increasing efficiency by raising the degree of standardisation, the degree of automation, and the division of labour. This affects both the architectures underlying software systems and the development processes. Service-oriented architectures, for instance, are an example of increased standardisation within software systems. It has to be kept in mind that the software industry differs from the classical manufacturing industries in that software is an immaterial product and can therefore be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is poorly structured and does not meet elevated requirements with respect to reproducibility or quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, as a consequence, the creation of products with guaranteeable properties. The range of seminar topics included, among others: software architectures; component-based software development; model-driven development; and the consideration of quality attributes in development processes. The seminar was organised like a scientific conference: submissions were reviewed in a two-stage peer-review process. In the first stage, the student papers were reviewed by fellow students; in the second stage, by the supervisors. The articles were presented in several sessions on two conference days. The best contribution was honoured with a best paper award, which went to Benjamin Klatt for his paper "Software Extension Mechanisms"; he is once again warmly congratulated on this outstanding achievement. The participants' talks were complemented by an invited talk: Florian Kaltner and Tobias Pohl from the IBM development laboratory kindly gave insights into the development of plugins for Eclipse and into the build environment of the firmware for the zSeries mainframe servers.
[7] Steffen Becker, Tobias Dencker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Martin Krogmann, Michael Kuperberg, Ralf Reussner, Martin Sygo, and Nikola Veber. Software-Entwicklung mit Eclipse. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany, 2007. Interner Bericht. [ bib | http | Abstract ]
Developing software with Eclipse is nowadays one of a software developer's standard tasks. The articles in this technical report deal with the extensive capabilities of the Eclipse framework, which are possible not least due to its numerous extension options via plugins. This technical report resulted from a proseminar held in the winter term 2006/2007.
[8] Steffen Becker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Michael Kuperberg, Ralf Reussner, Sebastian Reichelt, Erik Burger, Igor Goussev, and Dimitar Hodzhev. Software-Komponentenmodelle. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2007. Interner Bericht. [ bib | http | Abstract ]
In the world of component-based software development, component models are used, among other things, to build software systems with predictable properties. They range from research models to industrial models. Depending on the goals of a model, different aspects of software are mapped into a component model. This technical report gives an overview of the software component models available today.
[9] Ralf H. Reussner, Steffen Becker, Heiko Koziolek, Jens Happe, Michael Kuperberg, and Klaus Krogmann. The Palladio Component Model. Interner Bericht 2007-21, Universität Karlsruhe (TH), 2007. October 2007. [ bib | .pdf ]

Theses

[1] Klaus Krogmann. Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis. PhD thesis, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, 2010. [ bib | http | .pdf | Abstract ]
Model-based performance prediction systematically deals with the evaluation of software performance to avoid, for example, bottlenecks, estimate execution environment sizing, or identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. Still, no automated reverse engineering approach for software performance models at an architectural level exists. This book describes a new integrated reverse engineering approach for the reconstruction of software component architectures and software component behaviour models which are parameterised over hardware, component assembly, and control and data flow, and as such can serve as software performance models due to the execution semantics of the target meta-model.
[2] Klaus Krogmann. Entwicklung und Transformation eines EMF-Modells des Palladio Komponenten-Meta-Modells. Master's thesis, University of Oldenburg, Germany, May 2006. [ bib | .pdf | Abstract ]
Model-driven development [20] promises to automatically generate compilable source code from abstract software models, given for example in a notation such as the Unified Modeling Language. Changes to the software model could thus result in new program versions within a very short time, and vice versa. By increasing the degree of automation in generating source code, errors could be minimised. Separating the software model from the generated source code is additionally intended to make the software model platform-independent. Overall, this is meant to increase the efficiency of software development processes. The possibilities of model-driven development, also known under the acronym MDA (Model Driven Architecture), stand and fall with the power of the tools used: the more programming effort is taken over by tools, the faster new versions can be developed. The strengths of MDA are seen above all in the permanent synchronisation between software model and source code. The abstraction of the source code in the form of a software model always remains consistent with its realisation as source code and vice versa, so access to the less complex abstraction of the source code is permanently available. The starting point for model-driven development (in the forward direction) is a domain model that describes the model elements of the application domain. Since it does not describe instances of models of the application domain but models of valid model instances, a domain model is a meta-model.
[3] Klaus Krogmann. Generierung von Adaptoren. Individual project, University of Oldenburg, Germany, 2004. [ bib | .pdf | Abstract ]
This report on the individual project gives an insight into the generation of adaptors at the signature level. It draws a clear distinction from other areas of adaptor generation, which is reflected, among other places, in the introductory section. The focus of the discussion, taken from the perspective of component-based software development, is above all the process of generating adaptors for components. The generation of adaptors depends, among other things, largely on how much information is available about the components to be adapted. Different levels of information richness are considered, which can lead to different results from different scientific perspectives. For the context of the implemented adaptor generator, the concept of converters is examined in more detail; this versatile mechanism drastically increases the expressiveness of the adaptors that can be generated. In addition to the concrete implementation, a comparison with other adaptation approaches is also presented.