OPUS documents
https://opus4.kobv.de/opus4-bamberg/index/index/
Fri, 09 Feb 2018 11:32:39 +0100

SOCNET 2018 - Proceedings of the "Second International Workshop on Modeling, Analysis, and Management of Social Networks and Their Applications"
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/51026
Modeling, analysis, control, and management of complex social networks represent an important area of interdisciplinary research in an advanced digitalized world. In the last decade, social networks have produced significant online applications which run on top of a modern Internet infrastructure and have been identified as a major driver of the fast-growing Internet traffic. The "Second International Workshop on Modeling, Analysis and Management of Social Networks and Their Applications" (SOCNET 2018), held at Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany, on February 28, 2018, covered related research issues of social networks in the modern information society. The Proceedings of SOCNET 2018 highlight the topics of a tutorial on "Network Analysis in Python" complementing the workshop program, present an invited talk "From the Age of Emperors to the Age of Empathy", and summarize the contributions of eight reviewed papers. The covered topics range from theoretically oriented studies focusing on the structural inference of topic networks, the modeling of group dynamics, and the analysis of emergency response networks to application areas of social networks such as social media used in organizations or social network applications and their impact on modern information society. The Proceedings of SOCNET 2018 may stimulate the readers' future research on monitoring, modeling, and analysis of social networks and encourage their development efforts regarding social network applications of the next generation.

Authors: Oliver Posegga; Peter A. Gloor; Patricia Gouws; Elmarie Kritzinger; Jan Mentz; Fabian Reck; Johannes Putzke; Hideaki Takeda; Dieter Fiems; Tolga Uslu; Alexander Mehler; Andreas Niekler; Kathrin Eismann; Diana Fischer; Kai Fischbach; Lisa Hepp
Type: conferenceobject
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/51026
Fri, 09 Feb 2018 11:32:39 +0100

Effective and Efficient Process Engine Evaluation
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/49633
Business processes have become ubiquitous in industry today. They form the main ingredient of business process management. The two most prominent standardized languages to model business processes are the Web Services Business Process Execution Language 2.0 (BPEL) and the Business Process Model and Notation 2.0 (BPMN). Business process engines allow for the automatic execution of business processes. There is a plethora of business process engines available, and thus one has the agony of choice: which process engine best fits the demands? The lack of objective, reproducible, and ascertained information about the quality of such process engines makes rational choices very difficult.
This can lead to baseless and premature decisions that may result in higher long-term costs. This work provides an effective and efficient benchmarking solution that reveals the information necessary for making rational decisions. The foundation comprises an abstraction layer for process engines, which provides a uniform API to interact with any engine in the same way, and a benchmark language for process engines to represent benchmarks in a concise, self-contained, and interpretable domain-specific language. A benchmark framework for process engines performs benchmarks represented in this language on engines implementing the abstraction layer. The produced benchmark results are visualized and made available to decision makers via a public interactive dashboard. On top of that, to be efficient, the benchmark framework uses virtual machines to improve test isolation and to reduce the "time to result" by snapshot restoration, accepting a management overhead. Based on the gained experience, eight challenges faced in process engine benchmarking are identified, resulting in 21 corresponding findings.
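The interplay of a uniform engine API and declaratively described benchmarks can be pictured with a miniature sketch. This is purely illustrative, not the actual framework; all class and method names here are assumptions:

```python
from abc import ABC, abstractmethod

class Engine(ABC):
    """Uniform API that every process engine adapter would implement."""
    @abstractmethod
    def deploy(self, process: str) -> None: ...
    @abstractmethod
    def execute(self, process: str, inputs: dict) -> dict: ...

class StubEngine(Engine):
    """Toy engine that echoes its inputs; stands in for a real BPEL/BPMN engine."""
    def deploy(self, process: str) -> None:
        self.deployed = process
    def execute(self, process: str, inputs: dict) -> dict:
        return {"process": process, "outputs": inputs}

def run_benchmark(engine: Engine, benchmark: dict) -> dict:
    """Run every test of a declaratively described benchmark and collect verdicts."""
    results = {}
    for test in benchmark["tests"]:
        engine.deploy(test["process"])
        outcome = engine.execute(test["process"], test["inputs"])
        results[test["name"]] = (outcome["outputs"] == test["expected"])
    return results

verdicts = run_benchmark(
    StubEngine(),
    {"tests": [{"name": "echo", "process": "p1",
                "inputs": {"x": 1}, "expected": {"x": 1}}]},
)
print(verdicts)  # {'echo': True}
```

Because the benchmark definition is data rather than code, the same test suite can be replayed against any engine that implements the uniform API.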
Results show that this approach is both effective and efficient. Effective, because it comprises four BPEL-based and four BPMN-based benchmarks which together cover half of the quality characteristics defined by the ISO/IEC 25010 product quality model. Efficient, because it fully automates the benchmarking of process engines and can leverage virtualization for even higher execution efficiency. With this approach, the barrier to creating good benchmarks is significantly lowered. This allows decision makers to consistently evaluate process engines and thus makes rational decisions for the corresponding selection possible.

Author: Simon Harrer
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/49633
Fri, 03 Nov 2017 15:43:29 +0100

MMBnet 2017 - Proceedings of the 9th GI/ITG Workshop „Leistungs-, Verlässlichkeits- und Zuverlässigkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen“
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/49762
Nowadays, mathematical methods of systems and network monitoring, modeling, simulation, and performance, dependability, and reliability analysis constitute the foundation of quantitative evaluation methods for software-defined next-generation networks and advanced cloud computing systems. These sophisticated techniques provide the methodological basis for engineering practice in many different areas.
The GI/ITG Technical Committee "Measurement, Modelling and Evaluation of Computing Systems" (MMB) and its members have investigated corresponding research topics and initiated a series of MMB conferences and workshops over the last decades. The 9th GI/ITG Workshop MMBnet 2017, „Leistungs-, Verlässlichkeits- und Zuverlässigkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen“, was held at Hamburg University of Technology (TUHH), Germany, on September 14, 2017. The proceedings of MMBnet 2017 summarize the contributions of one invited talk and four contributed papers of young researchers. They deal with current research issues in next-generation networks, IP-based real-time communication systems, and new application architectures, and intend to stimulate the reader's future research in these vital areas of the modern information society.

Authors: Gerhard Haßlinger; Sebastian Surminski; Christian Moldovan; Tobias Hoßfeld; Alexander Beifuß; Jörg Deutschmann; Kai-Steffen Hielscher; Reinhard German; Marcel Großmann; Andreas Keiper
Type: book
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/49762
Thu, 24 Aug 2017 15:36:36 +0200

Portability of Process-Aware and Service-Oriented Software: Evidence and Metrics
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/46252
Modern software systems are becoming increasingly integrated and are required to operate over organizational boundaries through networks. The development of such distributed software systems has been shaped by the orthogonal trends of service-orientation and process-awareness. These trends put an emphasis on technological neutrality, loose coupling, independence from the execution platform, and location transparency. Execution platforms supporting these trends provide context and cross-cutting functionality to applications and are referred to as engines.
Applications and engines interface via language standards: the engine implements a standard, and if an application is implemented in conformance to this standard, it can be executed on the engine. A primary motivation for the usage of standards is the portability of applications. Portability, the ability to move software among different execution platforms without the necessity for full or partial reengineering, protects from vendor lock-in and enables application migration to newer engines.
The arrival of cloud computing has made it easy to provision new and scalable execution platforms. To enable easy platform changes, existing international standards for implementing service-oriented and process-aware software name the portability of standardized artifacts as an important goal. Moreover, they provide platform-independent serialization formats that enable the portable implementation of applications. Nevertheless, practice shows that service-oriented and process-aware applications today are limited with respect to their portability. The reason for this is that engines rarely implement a complete standard, but leave out parts or differ in the interpretation of the standard. As a consequence, even applications that claim to be portable by conforming to a standard might not be so.
This thesis contributes to the development of portable service-oriented and process-aware software in two ways: Firstly, it provides evidence for the existence of portability issues and the insufficiency of standards for guaranteeing software portability. Secondly, it derives and validates a novel measurement framework for quantifying portability. We present a methodology for benchmarking the conformance of engines to a language standard and implement it in a fully automated benchmarking tool. Several test suites of conformance tests for two different languages, the Web Services Business Process Execution Language 2.0 and the Business Process Model and Notation 2.0, allow a variety of standard conformance issues in existing engines to be uncovered. This provides evidence that the standard-based portability of applications is a real issue. Based on these results, this thesis derives a measurement framework for portability. The framework is aligned to the ISO/IEC Systems and software Quality Requirements and Evaluation (SQuaRE) method, the recent revision of the renowned ISO/IEC software quality model and measurement methodology. This quality model separates the software quality characteristic of portability into the subcharacteristics of installability, adaptability, and replaceability. Each of these characteristics forms one part of the measurement framework. This thesis targets each characteristic with a separate analysis, metrics derivation, evaluation, and validation. We discuss existing metrics from the body of literature and derive new extensions specifically tailored to the evaluation of service-oriented and process-aware software. Proposed metrics are defined formally and validated theoretically using an informal and a formal validation framework. Furthermore, the computation of the metrics has been prototypically implemented.
This implementation is used to evaluate the performance of the metrics in experiments based on large-scale software libraries obtained from public open source software repositories.
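One natural shape for a portability metric in this spirit is the share of an application's language elements that a target engine actually supports. The following sketch is an illustrative assumption, not one of the thesis's actual metrics; the element names mimic BPEL activities:

```python
def direct_portability(used_elements: set[str],
                       supported: dict[str, set[str]]) -> dict[str, float]:
    """For each engine, the share of the application's language elements it supports.

    A value of 1.0 means every element the application uses is supported,
    i.e. the application should port to that engine without reengineering.
    """
    return {
        engine: len(used_elements & elems) / len(used_elements)
        for engine, elems in supported.items()
    }

# Hypothetical application and engine capability sets.
app = {"invoke", "receive", "reply", "compensate"}
engines = {
    "engineA": {"invoke", "receive", "reply", "compensate", "wait"},
    "engineB": {"invoke", "receive", "reply"},
}
print(direct_portability(app, engines))  # {'engineA': 1.0, 'engineB': 0.75}
```

The capability sets on the engine side are exactly what an automated conformance benchmark produces, which is how evidence gathering and metric computation fit together.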
In summary, this thesis provides evidence that contemporary standards and their implementations are not sufficient for enabling the portability of process-aware and service-oriented applications. Furthermore, it proposes, validates, and practically evaluates a framework for measuring portability.

Author: Jörg Lenhard
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/46252
Mon, 25 Apr 2016 11:16:19 +0200

Entwicklung von Modellen generischer Managementprozesse für die Gestaltung und Lenkung prozessorientierter Unternehmen
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26637
Owing to its rigid structure and organizational inertia, a function-oriented large enterprise often cannot react quickly and flexibly. Such an enterprise should therefore be transformed towards process orientation in order to adapt to today's dynamic and complex economic environment. Management processes, which handle the design and control of process-oriented enterprises and form the core of corporate management, are essential for this transformation. This thesis investigates these management processes. Its goal is to support the transformation of a large enterprise towards process orientation by means of generic management processes. To this end, two reference models of generic management processes are developed according to the SOM methodology: first, the reference model for the design and control of an operational business process (GPM-RM); second, the reference model for the design and control of the overall value creation of process-oriented enterprises (reference model of macro business process management). The generic management processes offer a starting point for the construction of concrete management processes of process-oriented enterprises and can thus contribute to the transformation towards process orientation. Beyond its practical contribution, the scientific relevance of this work lies in closing two research gaps: on the one hand, the generic management processes serve the systematization of tasks in business process management; on the other hand, they form the starting point for a systematic transition from business process management to a process-oriented target organizational structure.

Author: Li Xiang
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26637
Wed, 20 Jan 2016 12:00:43 +0100

INTERACT 2015 Adjunct Proceedings. 15th IFIP TC.13 International Conference on Human-Computer Interaction, 14-18 September 2015, Bamberg, Germany
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/25644
INTERACT is among the world’s top conferences in Human-Computer Interaction. Starting with the first INTERACT conference in 1990, this conference series has been organised under the aegis of the Technical Committee 13 on Human-Computer Interaction of the UNESCO International Federation for Information Processing (IFIP). This committee aims at developing the science and technology of the interaction between humans and computing devices.
The 15th IFIP TC.13 International Conference on Human-Computer Interaction - INTERACT 2015 - took place from 14 to 18 September 2015 in Bamberg, Germany. The theme of INTERACT 2015 was "Connection.Tradition.Innovation". This volume presents the Adjunct Proceedings: it contains the position papers of the students of the Doctoral Consortium as well as the position papers of the participants of the various workshops.

Authors: Alessio Bellino; Craig Sutherland; Andrew Luxton-Reilly; Beryl Plimmer; Miriam Greis; Daniela Wurhofer; Christina Vasiliou; Stefan Johansson; Sanjay Ghosh; Fiona Dermody; Alistar Sutherland; Margaret Farren; David Swallow; Ticianne Darin; Guy Toko; Ernest Mnkandla; Ibrahim R. Mbaya; Dorrit Billmann; John Archdeachon; Rohit Deshmukh; Michael Feary; Jon Holbrook; Michael Stewart
Type: conferenceobject
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/25644
Fri, 11 Sep 2015 11:11:08 +0200

Model and Proof Theory of Constructive ALC, Constructive Description Logics
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26460
Description logics (DLs) represent a widely studied logical formalism with a significant impact in the field of knowledge representation and the Semantic Web. However, they are equipped with a classical descriptive semantics that is characterised by a platonic notion of truth and is insufficiently expressive to deal with evolving and incomplete information, as arising from data streams or ongoing processes. Such partially determined and incomplete knowledge can be expressed by relying on a constructive semantics. This thesis investigates the model and proof theory of a constructive variant of the basic description logic ALC, called cALC. The semantic dimension of constructive DLs is investigated by replacing the classical binary truth interpretation of ALC with a constructive notion of truth. This semantic characterisation is crucial to represent applications with partial information adequately and to achieve both consistency under abstraction and robustness under refinement; moreover, it is compatible with the Curry-Howard isomorphism and thus forms the cornerstone for a DL-based type theory. The proof theory of cALC is investigated by giving a sound and complete Hilbert-style axiomatisation, a Gentzen-style sequent calculus, and a labelled tableau calculus showing the finite model property and decidability. Moreover, cALC can be strengthened towards normal intuitionistic modal logics and classical ALC in terms of sound and complete extensions, and thereby forms a starting point for the systematic investigation of a constructive correspondence theory.

Author: Stephan Scheele
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26460
Thu, 09 Jul 2015 16:08:10 +0200

Konstruktion integrierter Geschäfts-Geschäftsprozessmodelle. Konzeption einer Modellierungsmethodik unter Nutzung hybrider zeitdiskret-zeitkontinuierlicher Simulationssysteme
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/25161
Aristotle is credited with the saying "The whole is more than the sum of its parts." It stands for the mindset of ancient Greek philosophy, which was shaped by holistic thinking. Over the centuries, however, this way of thinking lost its importance and was superseded, by the Renaissance at the latest, by thinking in elements. Yet the latter reaches its limits wherever a problem cannot be decomposed into short, isolable causal chains or into relations between a few variables.

Integrated business and business process models consist of coupled business models and business process models, which are modelled in the form of graphical systems or in the form of hybrid discrete-continuous simulation systems. They can serve as an instrument to describe, analyse, and design enterprises both holistically and in aggregate as well as in their elements. They offer the potential to arrive at detailed, yet at the same time holistic, problem solutions. Such solutions are necessary because enterprises, too, are more than the sum of functional areas and resources: they, too, are a whole.

Author: Michael Jacob
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/25161
Fri, 12 Jun 2015 14:38:09 +0200

Resource Description and Selection for Similarity Search in Metric Spaces: Problems and Problem-Solving Approaches
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26046
In times of an ever-increasing amount of data and a growing diversity of data types in different application contexts, there is a strong need for large-scale and flexible indexing and search techniques. Metric access methods (MAMs) provide this flexibility because they only assume that the dissimilarity between two data objects is modeled by a distance metric. Furthermore, scalable solutions can be built with the help of distributed MAMs.
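The single assumption that distances obey the metric axioms is what makes such methods generic: the triangle inequality alone lets an index discard candidates without computing their distances to the query. A minimal sketch of this pruning idea (not IF4MI or RS4MI themselves; the setup is illustrative):

```python
def range_search(query, radius, pivot, objects, dist):
    """Range search with triangle-inequality pruning around a single pivot.

    Precomputing d(pivot, o) for every object allows skipping any o with
    |d(q, pivot) - d(pivot, o)| > radius, since the triangle inequality
    guarantees d(q, o) >= |d(q, pivot) - d(pivot, o)|.
    """
    d_qp = dist(query, pivot)
    pivot_dists = {o: dist(pivot, o) for o in objects}  # built once, at index time
    hits = []
    for o in objects:
        if abs(d_qp - pivot_dists[o]) > radius:
            continue  # pruned: o cannot lie within the search radius
        if dist(query, o) <= radius:
            hits.append(o)
    return hits

# One-dimensional toy metric space: numbers under absolute difference.
objs = [1, 4, 8, 15, 16]
print(range_search(5, 2, 0, objs, lambda a, b: abs(a - b)))  # [4]
```

Real MAMs use many pivots and tree or inverted-file structures, but the pruning argument above is the common core that works for any data type with a metric.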
Both IF4MI and RS4MI, which are presented in this thesis, represent metric access methods. IF4MI belongs to the group of centralized MAMs. It is based on an inverted file and thus offers a hybrid access method providing text retrieval capabilities in addition to content-based search in arbitrary metric spaces. In contrast to IF4MI, RS4MI is a distributed MAM based on resource description and selection techniques. Here, data objects are physically distributed. However, RS4MI is by no means restricted to a certain type of distributed information retrieval system. Various application fields for the resource description and selection techniques are possible, for example in the context of visual analytics. Due to the metric space assumption, possible application fields go far beyond the content-based image retrieval applications which provide the example scenario here.

Author: Daniel Blank
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/26046
Wed, 10 Jun 2015 14:18:12 +0200

Modellgetriebene Validierung von System-Architekturen gegen architekturrelevante Anforderungen. Ein Ansatz zur Validierung mit Hilfe von Simulationen
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/10555
The development of systems consisting of hardware and software is a challenging task for the system architect. On the one hand, he must take the steadily growing number of system requirements, including their interrelations, into account when creating the system architecture; on the other hand, he has to cope with ever shorter time-to-market as well as requirement changes by the customer reaching into the implementation phase. This thesis presents a process that enables the architect to validate the system architecture against the architecture-relevant requirements. This process is part of the system design phase and can be integrated into the iterative development of the system architecture. So that the architect does not lose track of all requirements and their interrelations, he groups the requirements by architecture-specific aspects, the so-called validation targets. For each validation target, validation target procedures and check criteria for determining the validation status are defined. If all considered validation targets are fulfilled, i.e. the results of the validation target procedures satisfy the associated check criteria, the system architecture also fulfils the associated architecture-relevant requirements. Instead of formal verification techniques such as model checking, the approach presented in this thesis favours simulations as the checking technique for the validation target procedures.

For documentation, the approach relies on the Unified Modeling Language (UML). All data required for the simulations are part of a UML model. For configuring and executing the simulations, these data are read from the model. In this way, model changes directly affect the validation result of the system architecture. The process presented in this thesis supports the architect in creating a system architecture that satisfies the architecture-relevant requirements as well as in analysing the impact of requirement or architecture changes. The essential process steps are partially automated by means of a tool, which eases the architect's work and improves the efficiency of the system development process.

Author: André Pflüger
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/10555
Thu, 18 Dec 2014 15:21:33 +0100

Geografische Empfehlungssysteme
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6085
Mobile devices are increasingly equipped with sensors for determining their own position, for example GPS. Using the location information of these sensors, modern image management applications can, for instance, automatically group digital photos by geographic region or generate suitable keywords. Among other things, this makes these digital data easier to search.
In principle, location information in digital photos not only gives hints about the photo itself but also reveals which geographic decisions the photographer made when taking it. This work uses these decisions to compute further recommendations for the user of, for example, an image management application. A concrete use case reads as follows: for a freely selectable geographic region (e.g. a city), a user shall be recommended several images that are, on the one hand, typical of this region and, on the other hand, potentially interesting to him personally. To compute these geographic multi-object recommendations, a novel algorithm was developed that first aggregates the location information of all users into a geographic model. On the basis of this prototypical conceptualisation of individual regions, typical images can then be recommended. In a second step, these geographic models are additionally used to weight the individual geographic decisions of the users in order to arrive at a personal recommendation via a collaborative filtering approach. To this end, several methods were developed and compared with each other.
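The two-step idea, aggregating all users' geotags into a region model and then personalising via like-minded users, can be pictured with a toy sketch. This is not the thesis's actual algorithm; the cell names and the scoring rule are illustrative assumptions:

```python
from collections import Counter

def region_model(geotags):
    """Aggregate all users' photographed cells into a popularity model of the region."""
    model = Counter()
    for cells in geotags.values():
        model.update(cells)
    return model

def recommend(user, geotags, k=2):
    """Rank cells by overall typicality, boosted when users similar to the
    active user also photographed them (a crude collaborative-filtering touch)."""
    model = region_model(geotags)
    own = set(geotags[user])
    scores = {}
    for cell, popularity in model.items():
        # users who photographed this cell and share at least one cell with `user`
        likeminded = sum(
            1 for u, cells in geotags.items()
            if u != user and cell in cells and own & set(cells)
        )
        scores[cell] = popularity * (1 + likeminded)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [c for c in ranked if c not in own][:k]

tags = {"ann": ["dom", "altstadt"], "ben": ["dom", "rathaus"], "eva": ["rathaus"]}
print(recommend("ann", tags))  # ['rathaus']
```

The popularity term captures "typical of this region", the like-minded boost captures "interesting to this user"; the thesis's contribution lies in how these two signals are modelled and weighted from real geotag data.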
This work originated within the European project Tripod, for which the developed geographic recommendation algorithm was prototypically implemented as a software module. The recommendations were evaluated using georeferenced images published on the online galleries Panoramio.com and Flickr.de. By exploiting the geographic information and the location models computed from it, markedly more precise recommendations could be made than with other known recommendation methods.

Author: Christian Matyas
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6085
Thu, 15 May 2014 15:36:29 +0200

Information Management for Digital Learners: Introduction, Challenges, and Concepts of Personal Information Management for Individual Learners
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6470
The current cultural transition of our society into a digital society influences all aspects of human life. New technologies like the Internet and mobile devices enable unobstructed access to knowledge in worldwide networks. These advancements bring with them great freedom in the decisions and actions of individuals, but also a growing demand for an appropriate mastering of this freedom of choice and of the amount of knowledge that has become available today. Naturally, this observable rise and progress of new technologies, gently but emphatically becoming part of people's everyday lives, changes not only the way people work, communicate, and shape their leisure but also the way people learn.
This thesis is dedicated to an examination of how learners can meet these requirements with the support that modern technology is able to provide to learners. More precisely, this thesis places a particular emphasis that is absent from previous work in the field and thus makes it distinctive: the explicit focus on individual learners. As a result, the main concern of this thesis can be described as the examination, development, and implementation of personal information management in learning. Altogether two different steps towards a solution have been chosen: the development of a theoretical framework and its practical implementation into a comprehensive concept.
To establish a theoretical framework for personal information management in learning, the spheres of learning, e-learning, and personalised learning have been combined with theories of organisational and personal knowledge management to form a so far unique holistic view of personal information management in learning. The development of this framework involves the identification of characteristics, needs, and challenges that distinguish individual learners from within the larger crowd of uniform learners.
The theoretical framework defined within the first part is transferred to a comprehensive technical concept for personal information management in learning. The realisation and design of this concept as well as its practical implementation are strongly characterised by the utilisation of information retrieval techniques to support individual learners. The characteristic feature of the resulting system is a flexible architecture that enables the unified acquisition, representation, and organisation of information related to an individual's learning and supports an improved findability of personal information across all relevant sources of information.
The most important results of this thesis have been validated by a comparison with current projects in related areas and within a user study.

Author: Stefanie Gooren-Sieber
Type: doctoralthesis
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6470
Thu, 15 May 2014 14:25:18 +0200

MMB & DFT 2014: Proceedings of the International Workshops; Modeling, Analysis and Management of Social Networks and their Applications (SOCNET 2014) & Demand Modeling and Quantitative Analysis of Future Generation Energy Networks and Energy-Efficient Systems (FGENET 2014)
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6486
At present, a comprehensive set of measurement, modeling, analysis, simulation, and performance evaluation techniques are employed to investigate complex networks. A direct transfer of the developed engineering methodologies to related analysis and design tasks in next-generation energy networks, energy-efficient systems and social networks is enabled by a common mathematical foundation.
The International Workshop on "Demand Modeling and Quantitative Analysis of Future Generation Energy Networks and Energy-Efficient Systems" (FGENET 2014) and the International Workshop on "Modeling, Analysis and Management of Social Networks and their Applications" (SOCNET 2014) were held on March 19, 2014, at the University of Bamberg, Germany, as satellite symposia of the 17th International GI/ITG Conference on "Measurement, Modelling and Evaluation of Computing Systems" and "Dependability and Fault-Tolerance" (MMB & DFT 2014). They dealt with current research issues in next-generation energy networks, smart grid communication architectures, energy-efficient systems, social networks, and social media. The Proceedings of the MMB & DFT 2014 International Workshops summarize the contributions of 3 invited talks and 13 reviewed papers and intend to stimulate the readers' future research in these vital areas of modern information societies.

Authors: Konstantin Avrachenkov; Peter Bazan; Cristian Bisconti; Ulrik Brandes; Didier Colle; Angelo Corallo; Koen De Turck; Piet Demeester; Raphael Duboz; Kolja Eger; Ullrich Feuchtinger; Dieter Fiems; Laura Fortunato; Reinhard Frank; Antonio A. Gentile; Reinhard German; Peter A. Gloor; Jörn Grahl; Boudewijn R. Haverkort; Florian Heimgärtner; Sebastian Herrmann; Debra Hevenstone; Michael Höfling; Sofie Lambert; Bart Lannoo; Benjamin Litfinski; Alexander Martin; Michael Menth; Mehwish Nasim; Mario Pickavet; Björn Postema; Balakrishna J. Prabhu; Marco Pruckner; Johannes Riedl; Franz Rothlauf; David Schoch; Thorsten Staake; Hideaki Takeda; Christoph Thurner; Mohan Timilsina; Ward Van Heddeghem; Willem Vereecken; Jürgen Wenig; Katharina A. Zweig
Type: conferenceobject
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/6486
Tue, 25 Mar 2014 12:04:44 +0100

Proceedings of KogWis 2012. 11th Biannual Conference of the German Cognitive Science Society
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/690
The German cognitive science conference is an interdisciplinary event where researchers from different disciplines (mainly artificial intelligence, cognitive psychology, linguistics, neuroscience, philosophy of mind, and anthropology) and application areas (such as education, clinical psychology, and human-machine interaction) bring together different theoretical and methodological perspectives to study the mind. The 11th Biannual Conference of the German Cognitive Science Society took place from September 30 to October 3, 2012, at Otto-Friedrich-Universität in Bamberg. The proceedings cover all contributions to this conference, that is, five invited talks, seven invited symposia and two symposia, a satellite symposium, a doctoral symposium, three tutorials, 46 abstracts of talks, and 23 poster abstracts.

Authors: Dietrich Dörner; Rainer Goebel; Mike Oaksford; Michael Pauen; Elsbeth Stern
Type: conferenceobject
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/690
Fri, 21 Sep 2012 09:09:17 +0200

The CHORCH Approach: How to Model B2Bi Choreographies for Orchestration Execution
https://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/392
The establishment and implementation of cross-organizational business processes is an implication of today's market pressure for efficiency gains.
In this context, Business-To-Business integration (B2Bi) focuses on the information integration aspects of business processes.
A core task of B2Bi is providing adequate models that capture the message exchanges between integration partners.
Following the terminology used in the SOA domain, such models will be called choreographies in the context of this work.
Despite the enormous economic importance of B2Bi, existing choreography languages fall short of fulfilling all relevant
requirements of B2Bi scenarios.
Dedicated B2Bi choreography standards allow for inconsistent outcomes of basic interactions and
do not provide unambiguous semantics for advanced interaction models.
In contrast, more formal or technical choreography languages may provide unambiguous modeling semantics,
but do not offer B2Bi domain concepts or an adequate level of abstraction.
Defining valid and complete B2Bi choreography models becomes a challenging task in the face of these shortcomings.
At the same time, invalid or underspecified choreography definitions are particularly costly considering the organizational
setting of B2Bi scenarios.
Models are not only needed to bridge the typical gap between business and IT,
but also as negotiation means among the business users of the integration partners on the one hand
and among the IT experts of the integration partners on the other.
Misunderstandings between any two negotiation partners potentially affect the agreements between all other negotiation partners.
The CHORCH approach offers tailored support for B2Bi by combining the strengths of both dedicated B2Bi standards and formal rigor.
As choreography specification format, the ebXML Business Process Specification Schema (ebBP) standard is used.
ebBP provides dedicated B2Bi domain concepts such as so-called BusinessTransactions (BTs) that abstractly specify the exchange of a request business document
and an optional response business document.
In addition, ebBP provides a format for specifying the sequence of BT executions for capturing complex interaction scenarios.
CHORCH improves the offering of ebBP in several ways.
Firstly, the execution model of BTs, which allows for inconsistent outcomes among the integration partners,
is redefined such that only consistent outcomes are possible.
Secondly, two binary choreography styles are defined as B2Bi implementation contract formats in order to streamline implementation projects.
Both choreography styles are formalized and provided with a formal execution semantics for ensuring unambiguity.
In addition, validity criteria are defined that ensure implementability using BPEL-based orchestrations.
Thirdly, the analysis of the synchronization dependencies of complex B2Bi scenarios is supported
by means of a multi-party choreography style combined with an analysis framework.
This choreography style is also formalized, and standard state machine semantics are reused in order to ensure unambiguity.
Moreover, validity criteria are defined that allow for analyzing corresponding models for typical multi-party choreography issues.
Altogether, CHORCH provides choreography styles that are B2Bi adequate, simple, unambiguous, and implementable.
The choreography styles are B2Bi adequate in providing B2Bi domain concepts, in abstracting from low-level implementation details
and in covering the majority of real-world B2Bi scenarios.
Simplicity is fostered by using state machines as the underlying specification paradigm.
This allows for thinking in the states of a B2Bi scenario and for simple control flow structures.
Unambiguity is provided by formal execution semantics whereas implementability (for the binary choreography styles) is ensured by providing
mapping rules to BPEL-based implementations.
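The state-machine reading of a binary BusinessTransaction can be illustrated with a minimal sketch. This is not CHORCH's or ebBP's actual notation; the state and event names are purely illustrative assumptions. The point is the consistent-outcome property: a run counts as valid only if it terminates in a state that both integration partners interpret identically.

```python
# Hypothetical sketch of a BusinessTransaction (BT) as a state machine.
# A run has a consistent outcome only if it ends in a state on which
# both partners agree. All state/event names are illustrative.

VALID_END_STATES = {"success", "failure"}  # outcomes both partners share

# A simple request/response BT: the responder acknowledges the request
# before a business response or exception decides the outcome.
TRANSITIONS = {
    ("start", "send_request"): "request_sent",
    ("request_sent", "ack_request"): "request_acked",
    ("request_acked", "send_response"): "success",
    ("request_acked", "send_exception"): "failure",
}

def run_bt(events):
    """Replay a sequence of message events; return the final state,
    or raise ValueError if the sequence leaves the protocol."""
    state = "start"
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        state = TRANSITIONS[key]
    return state

def is_consistent(events):
    """A run yields a consistent outcome iff it terminates in a state
    that both partners interpret identically."""
    try:
        return run_bt(events) in VALID_END_STATES
    except ValueError:
        return False
```

Under this reading, a completed request/acknowledge/response run is consistent, while a run that stops after the request alone is not: it leaves the partners with diverging views of the transaction's outcome.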
The validation of CHORCH's choreography styles is performed in a twofold way.
Firstly, the implementation of the binary choreography styles based on Web Services and BPEL technology is demonstrated,
which proves implementability using relatively low-cost technologies.
Moreover, the analysis algorithms for the multi-party choreography styles are validated using a Java-based prototype.
Secondly, a high-level visualization of the choreography styles based on BPMN is provided that abstracts from
the technicalities of the ebBP standard.
This proves the amenability of CHORCH to development methods that start out with visual models.
CHORCH defines how to use BPMN choreographies for the purpose of B2Bi choreography modeling
and translates the formal rules for choreography validity into simple composition rules that
demonstrate valid ways of connecting the respective modeling constructs.
In summary, CHORCH allows integration partners to start out with a high-level visual model of their interactions in BPMN
that identifies the types and sequences of the BusinessTransactions to be used.
For multi-party choreographies, a framework for analyzing synchronization dependencies is then available.
For binary choreographies, an ebBP refinement can be derived that fills in the technical parameters that are needed for deriving the implementation.
Finally, Web Services and BPEL based implementations can be generated.
Thus, CHORCH allows for stepwise closing of the semantic gap between the information perspective of business process models
and the corresponding implementations.
It is noteworthy that CHORCH uses international standards throughout all relevant layers, i.e., BPMN, ebBP, Web Services and BPEL,
which helps in bridging the heterogeneous IT landscapes of B2Bi partners.
In addition, the adoption of core CHORCH deliverables as international standards of the RosettaNet community
gives testament to the practical relevance and promises dissemination throughout the B2Bi community.Andreas Schönbergerdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/392Fri, 13 Jul 2012 14:05:53 +0200Facettenbasierte Indexierung multipler Artefakte - Ein Framework für vage Anfragen in der Produktentwicklunghttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/361
The increasing use of information and communication technologies and rapid technological progress confront the development of technical products with ever new challenges. Product development itself must be regarded as a problem-solving process in which solutions are found through intensive information processing. Every day, the most diverse kinds of information are thus created, needed, and processed, primarily in digital form. They are managed in heterogeneous application systems, which makes reusing existing information difficult. Searching for information therefore still claims a considerable share of development time.
To improve the supply of information in technical product development, an interactive information retrieval system, the LFRP framework, is presented. It combines the four basic concepts of multiple layers, faceted search, ranking, and parallel coordinates in order to satisfy highly complex information needs. Besides a suitable search interface, its realization requires the development of an indexing component that transforms the available information into a form the LFRP framework can process. This process, called indexing, is the fundamental prerequisite for any functioning search system and is therefore the focus of this work. A solution approach is presented that enables indexing in the form of facet-based search criteria and takes into account not only information from heterogeneous application systems but, in particular, information from development-specific documents such as CAD models, technical drawings, or bills of materials.Nadine Weberdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/361Fri, 15 Jun 2012 11:04:44 +0200Interactive Search Processes in Complex Work Situations - A Retrieval Frameworkhttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/292
In recent years, a steady increase in the amount of information produced in organizations can be observed. In order to stay competitive, companies have a growing interest in reusing existing knowledge from past projects. Furthermore, a complete picture of the available information is necessary to be able to make informed decisions. The variety and complexity of information in modern organizations often exceed the capabilities of the currently deployed enterprise search solutions. The reasons for this are manifold and range from unlinked information spread across multiple software systems to missing functionality for supporting users during search tasks. Existing search engines often do not support the search paradigms necessary in these environments. On many occasions, users are not aware of the results they will find while formulating their search queries. Additionally, building knowledge and identifying new insights into the available data is a priority for the users. Search paradigms are therefore needed that provide users with tools supporting exploratory navigation in a data set and helping them to recognize relationships between search results. The goal of this publication is to introduce a framework that supports exploratory searches in an organizational setting. The described LFRP framework is built on four pillars. 1. The multi-layer functionality allows users to formulate complex search queries referring to more than one result type. It thereby enables search queries that, starting from a set of relevant projects, allow the selection of documents linked to these projects. 2. The search paradigm of faceted searching supports users in formulating search queries incrementally by offering dynamic and valid filter criteria that avoid empty result sets. 3. 
By combining the concept of faceted search with the capability to influence the order of search results based on filter criteria, users can define in a fine-grained way which criteria values shall be weighted more strongly or more weakly in the search results. The interaction with the ranking is conducted transparently by so-called user preference functions. 4. The last pillar consists of the visualization type of parallel coordinates, which covers two tasks in the search user interface of the LFRP framework: on the one hand, users formulate their search queries solely graphically in the parallel coordinates; on the other hand, they obtain a visual representation of the search results and are able to discover relationships between search results and their facets. The framework is introduced formally from a query-model point of view as well as through a prototypical implementation. It enables users to access large linked data sets by navigation and constitutes a contribution to a comprehensive information strategy for organizations.Raiko Ecksteindoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/292Tue, 20 Sep 2011 15:43:21 +0200A Service Description Method for Service Ecosystems - Meta Models, Modeling Notations, and Model Transformationshttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/288
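The interplay of the pillars above, faceted filtering plus a ranking steered by user preference functions, can be sketched in a few lines. The data model and function names below are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch of faceted filtering combined with preference-based
# ranking, in the spirit of the four pillars described above.

documents = [
    {"id": 1, "type": "drawing", "project": "P1", "year": 2009},
    {"id": 2, "type": "cad",     "project": "P1", "year": 2011},
    {"id": 3, "type": "cad",     "project": "P2", "year": 2010},
]

def facet_counts(docs, facet):
    """Offer only filter values that actually occur, with their counts,
    so that empty result sets are avoided."""
    counts = {}
    for d in docs:
        counts[d[facet]] = counts.get(d[facet], 0) + 1
    return counts

def search(docs, filters, preference):
    """Apply hard facet filters, then order the hits by a user
    preference function (higher score ranks higher)."""
    hits = [d for d in docs if all(d[f] == v for f, v in filters.items())]
    return sorted(hits, key=preference, reverse=True)

# Example preference function: newer documents rank higher.
newer_first = lambda d: d["year"]

result = search(documents, {"type": "cad"}, newer_first)
```

Filtering by `type = cad` and ranking by recency returns the 2011 CAD model before the 2010 one, while the facet counts tell the user in advance which filter values lead to non-empty results.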
Globalization and rapid technological change elevate the role of the Internet for offering and procuring business services. At the same time, companies specialize in core competencies on the one hand and, on the other hand, integrate with other firms into “service ecosystems” in order to serve market needs in a flexible manner. One challenge in this setting is how to develop and describe novel business services within service ecosystems for efficient trade in services over the Internet. This work proposes a method for describing business services that integrates into business service development processes. The development of such a method leads to three major challenges: Firstly, it is necessary to determine which properties are appropriate for describing business services. This work analyzes existing approaches in the marketing, information systems, and computer science domains and develops a model for a formal description that facilitates the offering and discovery of business services. Secondly, business service description elicitation, documentation, and communication must be provided for the whole business service development process. This work’s approach includes the development of an appropriate modeling notation as an extension of the Unified Modeling Language (UML). Thirdly, there is a need for transforming business service descriptions into software realization languages that are suitable for the Internet. This contribution offers an automatic transformation of business service descriptions into Web Services Description Language (WSDL) documents using model-to-model transformation scripts. 
The method for describing business services was evaluated by implementing an integrated modeling environment along with related transformation scripts as well as by two case studies in the insurance and IT outsourcing industry.Gregor Scheithauerdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/288Fri, 15 Jul 2011 07:56:25 +0200Gemeinschaftliche Qualitätsgesicherte Erhebung und Semantische Integration von Raumbezogenen Datenhttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/285
In recent years, the prevalence of mobile devices with integrated localization technology (e.g., GPS) has grown considerably. Anyone who owns such a device can become a data supplier and take on the role of a data sensor in a network of volunteers. In this dissertation, I examine two problems that are still unsolved in the context of spatial data collected by communities of users without professional (here: geoscientific) training: How can geoscientific laypersons be motivated to voluntarily collect quality-assured spatial data? The first contribution of my dissertation answers this question. It comprises a framework for the design of location-based games that motivate geoscientific laypersons to collect spatial data in a quality-assured and playful manner. In the words of Peltola et al. (2006): ”game play is a natural motivator to participate in something that is not necessary or beneficial. [...] By controlling game events and perhaps game logics and rules, the agencies that ultimately use the gathered data, can steer players to do tasks supporting their needs“. How can the quality of collections of semantically enriched spatial data be improved? An important property of location-based games for data collection is repeated playability, because, in contrast to other domains, the creation of redundant data is actually desirable in the spatial context. Merging the data of several users can be understood as repeated measurement, which can be exploited to improve the quality of localization (where?) and categorization (what?). 
The second contribution of my dissertation answers this question and consists of an approach for the semantic integration of the collected spatial data.Sebastian Matyasdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/285Tue, 05 Jul 2011 13:56:18 +0200Dienstorientierte IT-Systeme für hochflexible Geschäftsprozessehttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/282
The present volume, “Dienstorientierte IT-Systeme für hochflexible Geschäftsprozesse” (Service-Oriented IT Systems for Highly Flexible Business Processes), contains selected results of the forFLEX research network from the years 2008 - 2011. Starting from a characterization of the research field and two case-study-based application scenarios, it examines questions of the analysis, modeling, and design of highly flexible business processes as well as of the infrastructure, security, and tool support for their realization with service-oriented IT systems. The book addresses IT professionals and executives in business and public administration as well as researchers interested in analyzing and designing the flexibility potential of (partially) automated business processes.Dieter Bartmann; Freimut Bodendorf; Domenik Bork; Sebastian Duschinger; Otto K. Ferstl; Jochen Frank; Christian Herrmann; Andree Krücke; Matthias Kurz; Benjamin Leunig; Karl Mühlbauer; Corinna Pütz; Christian Senk; Elmar J. Sinz; Christian Suchan; Daniel Wagner; Stephan Weberbookhttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/282Thu, 16 Jun 2011 10:51:31 +0200Entwicklung und Überprüfung von Kausalhypothesen: Gestaltungsoptionen für einen Analyseprozess zur Fundierung betrieblicher Ziel- und Kennzahlensysteme durch Kausalhypothesen am Beispiel des Performance-Managementshttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/263
Many companies employ modern performance management concepts such as the Balanced Scorecard. Cause-and-effect assumptions are made in order to identify non-financial indicators and to model their influence on downstream financial indicators. In addition, causal dependencies between measures and goals are assumed in the form of ends-means relationships. The causal hypotheses required for this, however, are not developed systematically but mostly associatively and solely on the basis of intuition. Companies rely on vague conjectures and hypothetical relationships without testing them. This entails the risk of mismanagement, as useless, conflicting, or even harmful measures may be derived from insufficiently substantiated goal and indicator relationships. The question therefore arises of how companies can deal with cause-and-effect relationships in performance management in a different, more systematic way. In which ways can the required causal hypotheses be developed and tested? To answer these questions, this work designs a generic performance management process that uses causal hypotheses as the central means for the holistic design and control of corporate performance. Building on this, design options are elaborated for an analysis process that develops and tests causal hypotheses in an evidence-based manner. The main contribution of this work is to present, alongside data analysis, a second and hitherto untrodden path to causal analysis: model-centric causal analysis. 
The synergies arising from the combination of model-centric and data-centric analysis methods, in particular with on-line analytical processing (OLAP) and data mining techniques, are demonstrated empirically using the example of a sporting goods manufacturer.Thomas Voitdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/263Wed, 12 Jan 2011 10:58:24 +0100Refactoring of Security Antipatterns in Distributed Java Componentshttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/216
The importance of Java as a programming and execution environment has grown steadily over the past decade. Furthermore, the IT industry has adopted Java as a major building block for the creation of new middleware as well as a technology facilitating the migration of existing applications towards web-driven environments. In parallel, the role of security in distributed environments has gained attention, as a large number of middleware applications have replaced enterprise-level mainframe systems. The protection of confidentiality, integrity and availability is therefore critical for the market success of a product. The vulnerability level of every product is determined by its weakest embedded component, and selling vulnerable products can cause enormous economic damage to software vendors. An important goal of this work is to create the awareness that the use of a programming language that is designed to be secure is not sufficient to create secure and trustworthy distributed applications. Moreover, incorporating the threat model of the programming language improves the risk analysis by allowing a better definition of the attack surface of the application. The evolution of a programming language leads towards common patterns for solutions to recurring quality aspects. Suboptimal solutions, also known as 'antipatterns', are typical causes of quality weaknesses such as security vulnerabilities. Moreover, the exposure to a specific environment is an important parameter for threat analysis, as code considered secure in a specific scenario can cause unexpected risks when the environment is switched. Antipatterns are a well-established means on the abstraction level of system modeling to inform about the effects of incomplete solutions, which are also important in the later stages of the software development process. 
Especially on the implementation level, we see a deficit of helpful examples that would give programmers a better and more holistic understanding. Our basic assumption links programmers' missing experience regarding the security properties of patterns within their code to the creation of software vulnerabilities. Traditional software development models address security properties only on the meta layer. To transfer these efficiently to the practical level, we provide a three-stage approach: First, we focus on typical security problems within Java applications and develop a standardized catalogue of 'antipatterns' with examples from standard software products. Detecting and avoiding these antipatterns positively influences software quality. As the second element of our methodology, we therefore focus on possible enhancements to common models of the software development process. These help to control and identify the occurrence of antipatterns during development activities, i.e., during the coding phase and during the phase of component assembly, integrating one's own and third-party code. Within the third part, and emphasizing the practical focus of this research, we implement prototypical tools to support the software development phase. The practical findings of this research helped to enhance the security of the standard Java platforms and JEE frameworks. We verified the relevance of our methods and tools by applying them to standard software products, leading to a measurable reduction of vulnerabilities, and by an information exchange with middleware vendors (Sun Microsystems, JBoss) targeting runtime security. Our goal is to enable software architects and software developers of end-user applications to apply our findings with embedded standard components in their environments. From a high-level perspective, software architects profit from this work through the projection of quality-of-service goals onto protection details. 
This supports their task of deriving security requirements when selecting standard components. In order to give practitioners working close to the implementation a helpful starting point to benefit from our research, we provide tools and case studies for achieving security improvements within their own code base.Marc Schönefelddoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/216Wed, 19 May 2010 15:48:36 +0200Modellbasierte Analyse von Führungsinformationssystemen: Ein Ansatz zur Bewertung auf der Grundlage betrieblicher Planungs- und Lenkungsprozessehttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/214
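The idea of a catalogue-driven antipattern scan described above can be sketched with a deliberately naive detector over Java source text. The two catalogue entries and their names are illustrative examples only, not the thesis's actual catalogue; a real tool would work on the syntax tree or bytecode rather than on regular expressions.

```python
# Naive sketch of a catalogue-driven antipattern scan over Java source.
# The catalogue entries below are common examples, chosen for
# illustration; they are not the catalogue developed in the thesis.
import re

ANTIPATTERN_CATALOGUE = {
    # empty catch block: the exception is silently swallowed
    "swallowed-exception": re.compile(r"catch\s*\(\s*\w+\s+\w+\s*\)\s*\{\s*\}"),
    # catching Throwable also traps errors the code cannot recover from
    "overly-broad-catch": re.compile(r"catch\s*\(\s*Throwable\b"),
}

def scan(source):
    """Return the sorted names of catalogued antipatterns found in source."""
    return sorted(name for name, pat in ANTIPATTERN_CATALOGUE.items()
                  if pat.search(source))

java_snippet = """
try { doWork(); }
catch (Throwable t) { }
"""
findings = scan(java_snippet)
```

Running the scan over the snippet flags both catalogue entries, which is exactly the kind of signal a development-process enhancement could surface during the coding and component-assembly phases.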
The complexity of management tasks is growing as a result of increasing influences from the corporate environment and the internal changes of state they cause. Accordingly, management's demand for information grows in both quality and quantity in order to command sufficient control variety for mastering this complexity. Management information systems support the provision of management-relevant information, but they presuppose that their design is consistently grounded in business processes. This work takes up this requirement and first develops a procedure for transferring the methodology of model-based business process analysis from the level of operational value creation to the level of planning and control processes. This procedure makes it possible to structure and analyze tasks at the strategic control level. A further subject of the investigation is the evaluation of the management information system SAP SEM BPS/CPM on the basis of industry reference models and business process models. Based on observations from the 'public utilities & infrastructure' industry, design potentials of the SAP SEM BPS/CPM software are derived by means of reference models. Within a practice-oriented investigation of a case-study company, the scope of application of SAP SEM is furthermore evaluated with respect to the factors cost, time, and quality. 
To analyze the scope of application, business process models are used whose construction is based on the methodology of the Semantic Object Model (SOM).Alexander Bachdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/214Mon, 10 May 2010 11:14:28 +0200A Realistic Approach for the Autonomic Management of Component-Based Enterprise Systemshttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/186
During the last decades, information technology has been characterized by constantly increasing performance of the available hardware resources. This development allows the assignment of more and more complex tasks to software systems, while at the same time leading to a massive increase in the inherent complexity of the applied systems. The expected further increase of complexity in the future demands that complexity be addressed explicitly. The concept of Component Orientation represents an approach to reducing complexity during the development and configuration of software through functional decomposition. The vision of Autonomic Computing provides an approach to addressing complexity during the operation and maintenance of software systems; it is based on the idea of assigning low-level management tasks to the managed system itself. The concept of Component Orientation leads to system architectures built out of clearly distinguishable building blocks. Component Orientation therefore seems to provide a promising foundation for realizing the vision of Autonomic Computing. This thesis presents a realistic infrastructure for the autonomic management of component-based enterprise systems. The application area of such systems leads to special requirements for managed systems and is highly affected by the complexity problem. As a foundation for the proposed approach, a well-established component standard was chosen to guarantee the practical relevance of the applied concepts and techniques: Enterprise JavaBeans, version 3.0. The proposed infrastructure is designed and realized in a generic fashion. It provides a platform upon which solutions for different application areas of Autonomic Computing can be realized. 
Autonomic entities are supported through a programming interface which represents a system on three interrelated levels and allows its management: the top level considers the underlying software of a managed system, the middle level addresses the system architecture, and runtime interactions within the system are represented on the lowest level. On this foundation, a system can be managed in a holistic, model-based way. The runtime management of a system is enabled through a specially developed component which must be integrated into the affected environment. This component is compliant with the applied component standard and does not require any adjustment of the underlying component platform. Finally, a tool is provided which supports the establishment of manageability through the automated execution of the required adjustments of components. The management of a system is realized transparently for its constituent elements during runtime. On the whole, the development of enterprise software is not affected by a potential application of the presented infrastructure.Jens Bruhndoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/186Thu, 08 Oct 2009 10:35:07 +0200Privacy-enhancing Technologies for Private Serviceshttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/170
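The three-level system representation described in the abstract above can be sketched as a small data structure: software on the top level, architecture in the middle, observed runtime interactions on the lowest level. The class and method names are assumptions for illustration, not the thesis's actual programming interface (which targets Enterprise JavaBeans 3.0).

```python
# Illustrative sketch of a three-level model of a managed system:
# software (top), architecture (middle), runtime interactions (bottom).
# Names are hypothetical, not the thesis's actual interface.

class ManagedSystemModel:
    def __init__(self):
        self.software = {}       # top level: component -> deployed version
        self.architecture = {}   # middle level: component -> declared dependencies
        self.interactions = []   # lowest level: observed runtime calls

    def deploy(self, component, version, depends_on=()):
        """Register a component on the software and architecture levels."""
        self.software[component] = version
        self.architecture[component] = list(depends_on)

    def observe_call(self, caller, callee):
        """Record a runtime interaction; a call that violates the declared
        architecture could trigger an autonomic management reaction."""
        ok = callee in self.architecture.get(caller, [])
        self.interactions.append((caller, callee, ok))
        return ok

model = ManagedSystemModel()
model.deploy("web", "1.0", depends_on=["billing"])
model.deploy("billing", "2.1")
```

Relating the three levels in one model is what makes holistic management possible: an observed call from `billing` back to `web` contradicts the middle-level architecture and becomes visible as a candidate for autonomic intervention.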
Privacy on the Internet is becoming more and more important, as an increasing part of everyday life takes place over the Internet. Internet users lose the ability to control which information they give away about themselves, or are not even aware that they do so. Privacy-enhancing technologies help control private information on the Internet, for example, by anonymizing Internet communication. Up to now, work on privacy-enhancing technologies has mainly focused on the privacy of users requesting public services. This thesis introduces a new privacy risk that occurs when private persons run their own services. One example is instant messaging systems, which allow users to exchange presence information and text messages in real time. These systems usually do not protect the presence information which is stored on central servers. As an alternative, decentralized instant messaging system designs mitigate this problem by having private persons provide instant messaging services to each other. However, providing a service as a private person causes new security problems compared to providing a service as an organization or enterprise: First, the presence of such a service reveals information about the availability of the service provider. Second, the server location needs to be concealed in order to hide the whereabouts of a person. Third, the server needs to be specifically protected from unauthorized access attempts. This thesis proposes to use pseudonymous services as a building block for private services. Pseudonymous services conceal the location of a server that provides a specific service. The contribution made here is to analyze what parts of pseudonymous services, in particular Tor hidden services, are missing in order to apply them to private services. 
This analysis leads to three main problems for which solutions are proposed: First, known pseudonymous service designs do not scale to the expected number of private services which might be provided in the future. This thesis proposes a new approach to store hidden service descriptors in a distributed data structure rather than on central servers. A particular focus lies on the support of private entries which are required for private services. Second, pseudonymous services leak too much information about service identity during advertisement in the network and connection establishment by clients. The approach taken in this thesis is to reduce the information that a service publishes in the network to a minimum and prevent unauthorized clients from accessing a service already during connection establishment. These changes protect service activity and usage patterns from non-authorized entities. Third, pseudonymous services exhibit worse performance than direct service access. The contribution of this thesis is to measure performance, identify possible problems, and propose improvements.Karsten Loesingdoctoralthesishttps://opus4.kobv.de/opus4-bamberg/frontdoor/index/index/docId/170Wed, 27 May 2009 15:06:07 +0200
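The first proposal above, storing hidden service descriptors in a distributed data structure instead of on central servers, can be sketched with a consistent-hashing ring that maps each service to a few responsible nodes. The hashing scheme, node names, and replica count are illustrative assumptions, not the descriptor-distribution design actually deployed.

```python
# Sketch of distributing service descriptors over a hash ring instead
# of central servers. Scheme and parameters are illustrative only.
import hashlib

def _h(value):
    """Position of a string on the ring (SHA-1, interpreted as an int)."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

def responsible_nodes(service_id, nodes, replicas=2):
    """Pick the nodes that store a descriptor: the first `replicas`
    nodes clockwise from the descriptor's position on the ring."""
    ring = sorted(nodes, key=_h)
    pos = _h(service_id)
    start = next((i for i, n in enumerate(ring) if _h(n) >= pos), 0)
    return [ring[(start + k) % len(ring)] for k in range(replicas)]

nodes = ["relay-a", "relay-b", "relay-c", "relay-d"]
stores = responsible_nodes("hidden-service-xyz", nodes)
```

Because responsibility is derived from the hash positions alone, any client can recompute where a descriptor lives without asking a central directory, and the load of storing descriptors spreads over the participating nodes as their number grows.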