
Publications of Michael Kuperberg

Books/Book Chapters and edited Proceedings

[1] Michael Kuperberg. Dependability Metrics, volume 4909 of Lecture Notes in Computer Science, chapter Markov Models, pages 48-55. Springer-Verlag Berlin Heidelberg, 2008. [ bib | .pdf | Abstract ]
This chapter gives a brief overview of Markov models, a useful formalism for analysing stochastic systems.

Refereed journal articles

[1] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Deriving performance-relevant infrastructure properties through model-based experiments with ginpex. Software & Systems Modeling, pages 1-21, 2013, Springer-Verlag. [ bib | DOI | http | Abstract ]
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
[2] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Using Genetic Search for Reverse Engineering of Parametric Behaviour Models for Performance Prediction. IEEE Transactions on Software Engineering, 36(6):865-877, 2010, IEEE. [ bib | DOI | .pdf | Abstract ]
In component-based software engineering, existing components are often re-used in new applications. Correspondingly, the response time of an entire component-based application can be predicted from the execution durations of individual component services. These execution durations depend on the runtime behaviour of a component, which itself is influenced by three factors: the execution platform, the usage profile, and the component wiring. To cover all relevant combinations of these influencing factors, conventional prediction of response times requires repeated deployment and measurements of component services for all such combinations, incurring a substantial effort. This paper presents a novel comprehensive approach for reverse engineering and performance prediction of components. In it, genetic programming is utilised for reconstructing a behaviour model from monitoring data, runtime bytecode counts and static bytecode analysis. The resulting behaviour model is parametrised over all three performance-influencing factors, which are specified separately. This results in significantly fewer measurements: the behaviour model is reconstructed only once per component service, and one application-independent bytecode benchmark run is sufficient to characterise an execution platform. To predict the execution durations for a concrete platform, our approach combines the behaviour model with platform-specific benchmarking results. We validate our approach by predicting the performance of a file sharing application.

Refereed conference/Workshop papers

[1] Michael Hauck, Michael Kuperberg, Nikolaus Huber, and Ralf Reussner. Ginpex: Deriving Performance-relevant Infrastructure Properties Through Goal-oriented Experiments. In Proceedings of the 7th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2011), June 20-24, 2011, pages 53-62. ACM, New York, NY, USA. June 2011. [ bib | DOI | www | .pdf ]
[2] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. Metric-based Selection of Timer Methods for Accurate Measurements. In Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering, Karlsruhe, Germany, 2011, ICPE '11, pages 151-156. ACM, New York, NY, USA. 2011. [ bib | DOI | http | .pdf | Abstract ]
Performance measurements are often concerned with accurate recording of timing values, which requires timer methods of high quality. Evaluating the quality of a given timer method or performance counter involves analysing several properties, such as accuracy, invocation cost and timer stability. These properties are metrics with platform-dependent values, and ranking and selecting timer methods requires comparisons using multidimensional metric sets, which make the comparisons ambiguous and unnecessarily complex. To solve this problem, this paper proposes a new unified metric that allows for a simpler comparison. The one-dimensional metric is designed to capture fine-granular differences between timer methods, and normalises accuracy and other quality attributes by using CPU cycles instead of time units. The proposed metric is evaluated on all timer methods provided by Java and .NET platform APIs.
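The quality attributes the metric builds on, timer granularity (accuracy) and invocation cost, can be probed directly. The following sketch measures both for Java's System.nanoTime; the class and method names are illustrative and not taken from the paper:

```java
public class TimerProbe {
    /** Smallest observable increment of System.nanoTime, i.e. the timer's granularity. */
    static long granularityNs() {
        long t1 = System.nanoTime();
        long t2;
        do {
            t2 = System.nanoTime(); // spin until the timer value changes
        } while (t2 == t1);
        return t2 - t1;
    }

    /** Average cost of one nanoTime invocation, amortised over reps calls. */
    static long invocationCostNs(int reps) {
        long sink = 0; // accumulate results so the JIT cannot drop the calls
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            sink += System.nanoTime();
        }
        long avg = (System.nanoTime() - start) / reps;
        return sink == 0 ? -1 : avg; // sink is never 0 in practice
    }

    public static void main(String[] args) {
        System.out.println("granularity (ns): " + granularityNs());
        System.out.println("invocation cost (ns): " + invocationCostNs(100_000));
    }
}
```

Both values are platform-dependent, which is exactly why the paper normalises them into CPU cycles before comparing timer methods.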
[3] Michael Kuperberg and Ralf Reussner. Analysing the Fidelity of Measurements Performed With Hardware Performance Counters. In Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering (ICPE'11), March 14-16, 2011, Karlsruhe, Germany, 2011. [ bib | .pdf | Abstract ]
Performance evaluation requires accurate and dependable measurements of timing values. Such measurements are usually made using timer methods, but these methods are often too coarse-grained and too inaccurate. Thus, hardware performance counters are frequently accessed directly for fine-granular measurements due to their higher accuracy. However, direct access to these counters may be unreliable on multicore computers because the operating system can pause cores or change core affinity, which distorts the counter values. The contribution of this paper is the demonstration of an additional, significant flaw arising from the direct use of hardware performance counters. We demonstrate that using JNI and assembler instructions to access the Timestamp Counter from Java applications can result in grossly wrong values, even in single-threaded scenarios.
[4] Michael Kuperberg and Fouad Omri. Automated Benchmarking of Java APIs. In Proceedings of Software Engineering 2010 (SE2010), February 2010. [ bib | .pdf | Abstract ]
Performance is an extra-functional property of software systems which is often critical for achieving sufficient scalability or efficient resource utilisation. As many applications are built using application programmer interfaces (APIs) of execution platforms and external components, the performance of the used API implementations has a strong impact on the performance of the application itself. Yet the sheer size and complexity of today's APIs make it hard to benchmark them manually, while many semantic constraints and requirements (on method parameters, etc.) make it complicated to automate the creation of API benchmarks. Benchmarking the whole API is necessary since in most cases it is hard to specify exactly which parts of the API would be used by a given application. Additionally, modern execution platforms such as the Java Virtual Machine perform extensive nondeterministic runtime optimisations, which need to be considered and quantified for realistic benchmarking. In this paper, we present an automated solution for benchmarking any large APIs that are written in the Java programming language, not just the Java Platform API. Our implementation induces the optimisations of the Just-In-Time compiler to obtain realistic benchmarking results. We evaluate the approach on a large subset of the Java Platform API exposed by the base libraries of the Java Virtual Machine.
[5] Michael Hauck, Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Modelling Layered Component Execution Environments for Performance Prediction. In Proceedings of the 12th International Symposium on Component Based Software Engineering (CBSE 2009), 2009, number 5582 in LNCS, pages 191-208. Springer. 2009. [ bib | DOI | .html | .pdf | Abstract ]
Software architects often use model-based techniques to analyse performance (e.g. response times), reliability and other extra-functional properties of software systems. These techniques operate on models of software architecture and execution environment, and are applied at design time for early evaluation of design alternatives, especially to avoid implementing systems with insufficient quality. Virtualisation (such as operating system hypervisors or virtual machines) and multiple layers in execution environments (e.g. RAID disk array controllers on top of hard disks) are becoming increasingly popular and need to be reflected in the models of execution environments. However, current component meta-models do not support virtualisation and cannot model individual layers of execution environments. This means that the entire monolithic model must be recreated when different implementations of a layer must be compared to make a design decision, e.g. when comparing different Java Virtual Machines. In this paper, we present an extension of an established model-based performance prediction approach and associated tools which allows modelling and predicting state-of-the-art layered execution environments, such as disk arrays, virtual machines, and application servers. The evaluation of the presented approach shows its applicability and the resulting accuracy of the performance prediction while respecting the structure of the modelled resource environment.
[6] Klaus Krogmann, Christian M. Schweda, Sabine Buckl, Michael Kuperberg, Anne Martens, and Florian Matthes. Improved Feedback for Architectural Performance Prediction using Software Cartography Visualizations. In Architectures for Adaptive Systems (Proceedings of QoSA 2009), Raffaela Mirandola, Ian Gorton, and Christine Hofmeister, editors, 2009, volume 5581 of Lecture Notes in Computer Science, pages 52-69. Springer. 2009, Best Paper Award. [ bib | DOI | http | Abstract ]
Software performance engineering provides techniques to analyze and predict the performance (e.g., response time or resource utilization) of software systems to avoid implementations with insufficient performance. These techniques operate on models of software, often at an architectural level, to enable early, design-time predictions for evaluating design alternatives. Current software performance engineering approaches allow the prediction of performance at design time, but often provide cryptic results (e.g., lengths of queues). These prediction results can hardly be mapped back to the software architecture by humans, making it difficult to derive the right design decisions. In this paper, we integrate software cartography (a map technique) with software performance engineering to overcome the limited interpretability of raw performance prediction results. Our approach is based on model transformations and a general software visualization approach. It provides an intuitive mapping of prediction results to the software architecture which simplifies design decisions. We successfully evaluated our approach in a quasi experiment involving 41 participants by comparing the correctness of performance-improving design decisions and participants' time effort using our novel approach to an existing software performance visualization.
[7] Michael Kuperberg. FOBIC: A Platform-Independent Performance Metric based on Dynamic Java Bytecode Counts. In Proceedings of the 2008 Dependability Metrics Research Workshop, Technical Report TR-2009-002, Felix C. Freiling, Irene Eusgeld, and Ralf Reussner, editors, November 10, 2008, Mannheim, Germany, 2009, pages 7-11. Department of Computer Science, University of Mannheim. [ bib | .pdf | Abstract ]
ICS include supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as skid-mounted programmable logic controllers (PLC), as are often found in the industrial control sector. In contrast to traditional information processing systems, logic executing in ICS has a direct effect on the physical world. These control systems are critical for the operation of complex infrastructures that are often highly interconnected and thus mutually dependent systems. Numerous methodical approaches aim at modeling, analysis and simulation of single systems' behavior. However, modeling the interdependencies between different systems and describing their complex behavior by simulation is still an open issue. Although different modeling approaches, from classic network theory to bio-inspired methods, can be found in the scientific literature, a comprehensive method for modeling and simulation of interdependencies among complex systems has still not been established. An overall model is needed to provide security and reliability assessment, taking into account various kinds of threats and failures. These metrics are essential for a vulnerability analysis. Vulnerability of a critical infrastructure is defined as the presence of flaws or weaknesses in its design, implementation, operation and/or management that render it susceptible to destruction or incapacitation by a threat, in spite of its capacity to absorb and recover ("resilience"). A significant challenge associated with this model may be to create "what-if" scenarios for the analysis of interdependencies. Interdependencies affect the consequences of single or multiple failures or disruptions in interconnected systems. The different types of interdependencies can induce feedback loops which have accelerating or retarding effects on a system's response, as observed in system dynamics.
Threats to control systems can come from numerous sources, including hostile governments, terrorist groups, disgruntled employees, malicious intruders, complexities, accidents, natural disasters and malicious or accidental actions by insiders. The threats and failures can impact ICS themselves as well as underlying (controlled) systems. In previous work, seven evaluation criteria have been defined and eight good-practice methods have been selected and briefly described. These techniques are analysed, and their suitability for modeling and simulation of interdependent critical infrastructures in general is hypothesized.
[8] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. TimerMeter: Quantifying Accuracy of Software Timers for System Analysis. In Proceedings of the 6th International Conference on Quantitative Evaluation of SysTems (QEST) 2009, 2009. [ bib | .pdf ]
[9] Michael Kuperberg, Fouad Omri, and Ralf Reussner. Using Heuristics to Automate Parameter Generation for Benchmarking of Java Methods. In Proceedings of the 6th International Workshop on Formal Engineering approaches to Software Components and Architectures, York, UK, 28th March 2009 (ETAPS 2009, 12th European Joint Conferences on Theory and Practice of Software), 2009. [ bib | .pdf | Abstract ]
Automated generation of method parameters is needed in benchmarking scenarios where manual or random generation of parameters is not suitable, does not scale or is too costly. However, for a method to execute correctly, the generated input parameters must not violate implicit semantic constraints, such as ranges of numeric parameters or the maximum length of a collection. For most methods, such constraints have no formal documentation, and human-readable documentation of them is usually incomplete and ambiguous. Random search of appropriate parameter values is possible but extremely ineffective and does not respect such implicit constraints. Also, the role of polymorphism and of the method invocation targets is often not taken into account. Most existing approaches that claim automation focus on a single method and ignore the structure of the surrounding APIs where those exist. In this paper, we present HEURIGENJ, a novel heuristics-based approach for automatically finding legal and appropriate method input parameters and invocation targets, by approximating the implicit constraints imposed on them. Our approach is designed to support systematic benchmarking of API methods written in the Java language. We evaluate the presented approach by applying it to two frequently-used packages of the Java platform API, and demonstrating its coverage and effectiveness.
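The core idea of heuristic parameter generation can be sketched with reflection: choose conservative default values per declared parameter type, so that implicit constraints (ranges, non-emptiness, length limits) are unlikely to be violated. This is a minimal illustration under assumed heuristics, not the HEURIGENJ implementation; all names here are invented:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class ParamHeuristics {
    // Heuristic defaults: small positive numbers, short non-empty strings,
    // empty (but non-null) collections -- values that rarely violate
    // implicit preconditions such as numeric ranges or length limits.
    static Object guess(Class<?> type) {
        if (type == int.class || type == Integer.class) return 1;
        if (type == long.class || type == Long.class) return 1L;
        if (type == double.class || type == Double.class) return 1.0;
        if (type == boolean.class || type == Boolean.class) return true;
        if (type == char.class || type == Character.class) return 'a';
        if (type == String.class) return "a";
        if (List.class.isAssignableFrom(type)) return new ArrayList<>();
        return null; // unknown reference type: a real tool must refine this
    }

    /** Produce one candidate argument list for the given method. */
    static Object[] argumentsFor(Method m) {
        Class<?>[] params = m.getParameterTypes();
        Object[] args = new Object[params.length];
        for (int i = 0; i < params.length; i++) args[i] = guess(params[i]);
        return args;
    }

    public static void main(String[] args) throws Exception {
        // Example: candidate arguments for String.substring(int, int),
        // invoked on a guessed target instance.
        Method substring = String.class.getMethod("substring", int.class, int.class);
        System.out.println(substring.invoke("abc", argumentsFor(substring)));
    }
}
```

A real generator additionally has to handle invocation targets, polymorphism, and feedback from failed invocations, which is where the paper's heuristics go beyond this sketch.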
[10] Michael Kuperberg, Klaus Krogmann, and Ralf Reussner. Performance Prediction for Black-Box Components using Reengineered Parametric Behaviour Models. In Proceedings of the 11th International Symposium on Component Based Software Engineering (CBSE 2008), Karlsruhe, Germany, 14th-17th October 2008, October 2008, volume 5282 of Lecture Notes in Computer Science, pages 48-63. Springer-Verlag Berlin Heidelberg. October 2008. [ bib | .pdf | Abstract ]
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
[11] Klaus Krogmann, Michael Kuperberg, and Ralf Reussner. Reverse Engineering of Parametric Behavioural Service Performance Models from Black-Box Components. In MDD, SOA und IT-Management (MSI 2008), Ulrike Steffens, Jan Stefan Addicks, and Niels Streekmann, editors, September 2008, pages 57-71. GITO Verlag, Oldenburg. September 2008. [ bib | .pdf | Abstract ]
Integrating heterogeneous software systems becomes increasingly important. It requires combining existing components to form new applications. Such new applications are required to satisfy non-functional properties, such as performance. Design-time performance prediction of new applications built from existing components helps to compare design decisions before actually implementing them to the full, avoiding costly prototype and glue code creation. But design-time performance prediction requires understanding and modeling of data flow and control flow across component boundaries, which is not given for most black-box components. If, for example, one component processes and forwards files to other components, this effect should be an explicit model parameter to correctly capture its performance impact. This impact should also be parameterised over data, but no reverse engineering approach exists to recover such dependencies. In this paper, we present an approach that allows reverse engineering of such behavioural models, which is applicable for black-box components. By runtime monitoring and application of genetic programming, we recover functional dependencies in code, which then are expressed as parameterisation in the output model. We successfully validated our approach in a case study on a file sharing application, showing that all dependencies could correctly be reverse engineered from black-box components.
[12] Michael Kuperberg, Martin Krogmann, and Ralf Reussner. ByCounter: Portable Runtime Counting of Bytecode Instructions and Method Invocations. In Proceedings of the 3rd International Workshop on Bytecode Semantics, Verification, Analysis and Transformation, Budapest, Hungary, 5th April 2008 (ETAPS 2008, 11th European Joint Conferences on Theory and Practice of Software), 2008. [ bib | .pdf | Abstract ]
For bytecode-based applications, runtime instruction counts can be used as a platform- independent application execution metric, and also can serve as the basis for bytecode-based performance prediction. However, different instruction types have different execution durations, so they must be counted separately, and method invocations should be identified and counted because of their substantial contribution to the total application performance. For Java bytecode, most JVMs and profilers do not provide such functionality at all, and existing bytecode analysis frameworks require expensive JVM instrumentation for instruction-level counting. In this paper, we present ByCounter, a lightweight approach for exact runtime counting of executed bytecode instructions and method invocations. ByCounter significantly reduces total counting costs by instrumenting only the application bytecode and not the JVM, and it can be used without modifications on any JVM. We evaluate the presented approach by successfully applying it to multiple Java applications on different JVMs, and discuss the runtime costs of applying ByCounter to these cases.
[13] Michael Kuperberg and Steffen Becker. Predicting Software Component Performance: On the Relevance of Parameters for Benchmarking Bytecode and APIs. In Proceedings of the 12th International Workshop on Component Oriented Programming (WCOP 2007), Ralf Reussner, Clemens Szyperski, and Wolfgang Weck, editors, July 2007. [ bib | .pdf | Abstract ]
Performance prediction of component-based software systems is needed for systematic evaluation of design decisions, but also when an application's execution system is changed. Often, the entire application cannot be benchmarked in advance on its new execution system due to high costs or because some required services cannot be provided there. In this case, performance of bytecode instructions or other atomic building blocks of components can be used for performance prediction. However, the performance of bytecode instructions depends not only on the execution system they use, but also on their parameters, which are not considered by most existing research. In this paper, we demonstrate that parameters cannot be ignored when considering Java bytecode. Consequently, we outline a suitable benchmarking approach and the accompanying challenges.
[14] Michael Kuperberg. Influence of Execution Environments on the Performance of Software Components. In Proceedings of the 2nd International Research Training Groups Workshop, Dagstuhl, Germany, November 6 - 8, 2006, Jens Happe, Heiko Koziolek, and Matthias Rohr, editors, 2006, volume 3 of Reihe Trustworthy Software Systems. [ bib | http | Abstract ]
[15] Dan A. Simovici, Namita Singla, and Michael Kuperberg. Metric Incremental Clustering of Nominal Data. In The Fourth IEEE International Conference on Data Mining, 2004, pages 523-526. Brighton, UK. [ bib | .pdf | Abstract ]
We present an algorithm for clustering nominal data that is based on a metric on the set of partitions of a finite set of objects; this metric is defined starting from a lower valuation of the lattice of partitions. The proposed algorithm seeks to determine a clustering partition such that the total distance between this partition and the partitions determined by the attributes of the objects has a local minimum. The resulting clustering is quite stable relative to the ordering of the objects.
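The construction of a partition metric from a lower valuation can be illustrated concretely. The sketch below assumes v(P) = Σ_B |B|² (a standard lower valuation on the partition lattice) and d(P,Q) = v(P) + v(Q) − 2v(P ∧ Q); the paper's exact valuation may differ, so treat this as one assumed instance of the idea:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PartitionDistance {
    // A partition is a list of blocks; each block is a set of object ids.
    // v(P) = sum over blocks B of |B|^2 is a lower valuation on the
    // partition lattice, so d(P,Q) = v(P) + v(Q) - 2*v(P meet Q) is a metric.
    static long v(List<Set<Integer>> p) {
        long s = 0;
        for (Set<Integer> b : p) s += (long) b.size() * b.size();
        return s;
    }

    // The meet P ∧ Q: all non-empty pairwise intersections of blocks.
    static List<Set<Integer>> meet(List<Set<Integer>> p, List<Set<Integer>> q) {
        List<Set<Integer>> result = new ArrayList<>();
        for (Set<Integer> a : p)
            for (Set<Integer> b : q) {
                Set<Integer> inter = new HashSet<>(a);
                inter.retainAll(b);
                if (!inter.isEmpty()) result.add(inter);
            }
        return result;
    }

    static long distance(List<Set<Integer>> p, List<Set<Integer>> q) {
        return v(p) + v(q) - 2 * v(meet(p, q));
    }

    public static void main(String[] args) {
        List<Set<Integer>> p = List.of(Set.of(1, 2), Set.of(3, 4));
        List<Set<Integer>> q = List.of(Set.of(1, 2, 3), Set.of(4));
        System.out.println(distance(p, q)); // prints 6; 0 only when P and Q coincide
    }
}
```

In the clustering setting, each nominal attribute induces a partition of the objects, and the algorithm searches for a clustering partition minimising the total distance to those attribute partitions.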

Technical Reports

[1] Michael Kuperberg, Nikolas Roman Herbst, Joakim Gunnarson von Kistowski, and Ralf Reussner. Defining and Quantifying Elasticity of Resources in Cloud Computing and Scalable Platforms. Technical report, Karlsruhe Institute of Technology (KIT), Am Fasanengarten 5, 76131 Karlsruhe, Germany, 2011. [ bib | http | .pdf | Abstract ]
Elasticity is the ability of a software system to dynamically scale the amount of the resources it provides to clients as their workloads increase or decrease. Elasticity is praised as a key advantage of cloud computing, where computing resources are dynamically added and released. However, there exists no concise or formal definition of elasticity, and thus no approaches to quantify it have been developed so far. Existing work on cloud computing is limited to the technical view of implementing elastic systems, and definitions of scalability have not been extended to cover elasticity. In this report, we present a detailed discussion of elasticity, propose techniques for quantifying and measuring it, and outline next steps to be taken for enabling comparisons between cloud computing offerings on the basis of elasticity. We also present preliminary work on measuring elasticity of resource pools provided by the Java Virtual Machine.
[2] Ralf Reussner, Steffen Becker, Erik Burger, Jens Happe, Michael Hauck, Anne Koziolek, Heiko Koziolek, Klaus Krogmann, and Michael Kuperberg. The Palladio Component Model. Technical report, KIT, Fakultät für Informatik, Karlsruhe, 2011. [ bib | http | Abstract ]
This report introduces the Palladio Component Model (PCM), a novel software component model for business information systems, which is specifically tuned to enable model-driven quality-of-service (QoS, i.e., performance and reliability) predictions. The PCM's goal is to assess the expected response times, throughput, and resource utilization of component-based software architectures during early development stages. This shall avoid costly redesigns, which might occur after a poorly designed architecture has been implemented. Software architects should be enabled to analyse different architectural design alternatives and to support their design decisions with quantitative results from performance or reliability analysis tools.
[3] Franz Brosch, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Pierre Parrend, Ralf Reussner, Johannes Stammel, and Emre Taspolatoglu. Software-Industrialisierung. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2009. Interner Bericht. [ bib | http | Abstract ]
The industrialisation of software development is currently a heavily discussed topic. It is chiefly concerned with increasing efficiency by raising the degree of standardisation, the degree of automation, and the division of labour. This affects the architectures underlying software systems as well as the development processes. Service-oriented architectures, for instance, are an example of increased standardisation within software systems. It should be borne in mind that the software industry differs from the classical manufacturing industries in that software is an immaterial product and can therefore be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is poorly structured and satisfies heightened requirements neither with respect to reproducibility nor to quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes the division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, consequently, the creation of products with guaranteeable properties. The topics covered by the seminar included, among others: * component-based software architectures * model-driven software development: concepts and technologies * industrial software development processes and their assessment. The seminar was organised like a scientific conference: the submissions were reviewed in a two-stage peer-review process.
In the first stage, the students' papers were reviewed by fellow students; in the second stage, they were reviewed by the supervisors. As at a conference, the articles were presented in several sessions. The best contributions were honoured with two Best Paper Awards, which went to Tom Beyer for his paper "Realoptionen für Entscheidungen in der Software-Entwicklung" and to Philipp Meier for his paper "Assessment Methods for Software Product Lines". The participants' talks were complemented by two invited talks: Collin Rogowski of 1&1 Internet AG presented the agile software development process used for the mail product GMX.COM, and Heiko Koziolek, Wolfgang Mahnke, and Michaela Saeftel of ABB spoke on software product line engineering, illustrated by the robotics applications developed at ABB.
[4] Franz Brosch, Thomas Goldschmidt, Henning Groenda, Lucia Kapova, Klaus Krogmann, Michael Kuperberg, Anne Martens, Christoph Rathfelder, Ralf Reussner, and Johannes Stammel. Software-Industrialisierung. Interner Bericht, Universität Karlsruhe, Fakultät für Informatik, Institut für Programmstrukturen und Datenorganisation, Karlsruhe, 2008. [ bib | http | Abstract ]
The industrialisation of software development is currently a heavily discussed topic. It is chiefly concerned with increasing efficiency by raising the degree of standardisation, the degree of automation, and the division of labour. This affects the architectures underlying software systems as well as the development processes. Service-oriented architectures, for instance, are an example of increased standardisation within software systems. It should be borne in mind that the software industry differs from the classical manufacturing industries in that software is an immaterial product and can therefore be reproduced arbitrarily often without high production costs. Nevertheless, many insights from the classical industries can be transferred to software engineering. The contents of this report stem mainly from the seminar "Software-Industrialisierung", which dealt with the professionalisation of software development and software design. While classical software development is poorly structured and satisfies heightened requirements neither with respect to reproducibility nor to quality assurance, software development is undergoing a transformation in the course of industrialisation. This includes the division of labour, the introduction of development processes with predictable properties (cost, time required, ...), and, consequently, the creation of products with guaranteeable properties. The topics covered by the seminar included, among others: * software architectures * component-based software development * model-driven development * consideration of quality attributes in development processes. The seminar was organised like a scientific conference: the submissions were reviewed in a two-stage peer-review process.
In the first stage, the students' papers were reviewed by fellow students; in the second stage, they were reviewed by the supervisors. The articles were presented in several sessions on two conference days. The best contribution was honoured with a Best Paper Award, which went to Benjamin Klatt for his paper "Software Extension Mechanisms"; he is once again warmly congratulated on this outstanding achievement. The participants' talks were complemented by an invited talk, in which Florian Kaltner and Tobias Pohl of the IBM development laboratory kindly gave insights into the development of plugins for Eclipse and into the build environment of the firmware for the zSeries mainframe servers.
[5] Steffen Becker, Tobias Dencker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Martin Krogmann, Michael Kuperberg, Ralf Reussner, Martin Sygo, and Nikola Veber. Software-Entwicklung mit Eclipse. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany, 2007. Interner Bericht. [ bib | http | Abstract ]
Developing software with Eclipse is today one of the standard tasks of a software developer. The articles in this technical report deal with the extensive capabilities of the Eclipse framework, which are made possible not least by its numerous extension points for plugins. This technical report emerged from a proseminar held in the winter semester 2006/2007.
[6] Steffen Becker, Jens Happe, Heiko Koziolek, Klaus Krogmann, Michael Kuperberg, Ralf Reussner, Sebastian Reichelt, Erik Burger, Igor Goussev, and Dimitar Hodzhev. Software-Komponentenmodelle. Technical report, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, 2007. Interner Bericht. [ bib | http | Abstract ]
In the world of component-based software development, component models are used, among other things, to build software systems with predictable properties. They range from research models to industrial models. Depending on a model's goals, different aspects of software are captured in it. This technical report provides an overview of the software component models available today.
[7] Ralf H. Reussner, Steffen Becker, Heiko Koziolek, Jens Happe, Michael Kuperberg, and Klaus Krogmann. The Palladio Component Model. Interner Bericht 2007-21, Universität Karlsruhe (TH), October 2007. [ bib | .pdf ]
[8] Steffen Becker, Aleksander Dikanski, Nils Drechsel, Aboubakr Achraf El Ghazi, Jens Happe, Ihssane El-Oudghiri, Heiko Koziolek, Michael Kuperberg, Andreas Rentschler, Ralf H. Reussner, Roman Sinawski, Matthias Thoma, and Marko Willsch. Modellgetriebene Software-Entwicklung - Architekturen, Muster und Eclipse-basierte MDA. Technical report, Universität Karlsruhe (TH), 2006. [ bib | http | Abstract ]
In recent years, model-driven software development has become a topic of general interest for the software industry, particularly under buzzwords such as MDA and MDD. A trend can be observed away from code-centric software development and toward placing the (architecture) model at the center of software development. Model-driven software development promises continuous, automated synchronization of software models across the most diverse levels, and with it potentially shorter development cycles and higher productivity. Pure source code is no longer the primary development artifact; instead, models and transformations, as a higher level of abstraction, take over the role of the development language for software products. Meanwhile, tools for model-driven development are evolving, promising additional gains in productivity and efficiency. While the tools at the turn of the millennium were still severely limited in power, because transformation languages had only limited expressiveness and the available tools offered little integration of model-driven development processes, today a clear advance is noticeable with the Eclipse-based tools built around EMF. In the Eclipse platform, the most diverse aspects of model-driven development are united as plugins: * modeling tools for creating software architectures * frameworks for software models * creation and editing of transformations * execution of transformations * development of source code. The seminar title contains a series of buzzwords: "MDA, Architekturen, Muster, Eclipse". Under the umbrella of MDA, these buzzwords are interrelated, as briefly sketched in the following. Software architectures represent a general form of model for software.
They are restricted neither to a particular description language nor to a specific domain. In the course of the efforts toward model-driven development, trends both toward standard description languages such as UML and toward the introduction of domain-specific languages (DSLs) can be observed here. Transformations can then be applied to these further formalized descriptions of software, targeting either another model ("model-to-model") or a textual representation ("model-to-text"). In both cases, patterns play an important role: transformations encapsulate, in a certain sense, repeatedly applicable design knowledge ("patterns") in parameterizable templates. Finally, Eclipse is a free platform that has recently been offering increasing support for model-driven development. These efforts also include the "Eclipse Modeling Project", announced in May 2006, which, as a top-level project, aims at the evolution and dissemination of model-driven development technologies in Eclipse. The seminar was organized like a scientific conference: submissions were evaluated in a peer-review process (before review by the advisor), and the articles were presented in several sessions over two conference days. There were best paper awards and an invited guest speaker, Achim Baier of itemis AG & Co KG, who kindly provided an instructive insight into real-world projects using model-driven development. The best paper awards went to Mr. El-Ghazi and Mr. Rentschler, who are once again warmly congratulated on this outstanding achievement.

Theses

Other