The rigidity of the Internet gives rise to architectural design issues such as interdependencies among the layers, the lack of cross-layer information exchange, and the dependency of applications on the implementation of the underlying protocols.
G-Lab (http://www.german-lab.de/) is a research project on Future Internet Architecture (FIA) that addresses problems of the Internet such as rigidity, mobility, and addressing, while the focus of ICSY (www.icsy) has been on providing flexibility in future network architectures. An approach called Service Oriented Network Architecture (SONATE) has been proposed to compose protocols dynamically. SONATE is based on the principles of service-oriented architecture (SOA): protocols are decomposed into software modules, which are later put together on demand to provide the desired service.
This composition of functionalities can be performed at various time epochs (e.g., design time, deployment time, run time). However, these epochs involve a trade-off between time complexity (i.e., the required setup time) and the provided flexibility. Design time is the least time-critical phase, which makes it possible to draw on human analytical capability. However, at design time the real-time knowledge of requirements and network conditions is missing, which results in inflexible protocol graphs that cannot be adapted at later stages when requirements change. In contrast, run time is the most time-critical phase, since an application is waiting for a connection to be established, but it also offers the maximum information to generate a protocol graph suited to the given requirements.
Considering the limitations of the different time phases outlined above, this thesis presents a novel intermediate functional composition approach, Template-Based Composition, to generate requirement-aware protocol graphs. Template-based composition splits the composition process across the different time phases in order to exploit the low time-criticality and availability of human analysis at design time, the ability to instantaneously deploy new functionalities at deployment time, and the maximum availability of information at run time. The approach has been implemented, demonstrated, and evaluated with respect to its performance in order to assess the implications for practical use.
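To illustrate the idea, the following is a minimal sketch with hypothetical names (`Template`, `REGISTRY`, and the property flags are invented for this example, not SONATE's actual implementation): a template fixes the protocol-graph structure at design time, the registry of building blocks can be extended at deployment time, and the slots are resolved at run time against the application's requirements.

```python
# Hypothetical sketch of template-based composition (not SONATE's real API):
# design-time template + deployment-time registry + run-time slot resolution.

# Building blocks available after deployment: slot name -> candidates,
# each annotated with the properties it provides.
REGISTRY = {
    "error_control": [("none", {"reliable": False}), ("arq", {"reliable": True})],
    "flow_control":  [("none", {"flow": False}), ("window", {"flow": True})],
}

# Design-time template: an ordered list of slots forming the protocol graph.
TEMPLATE = ["error_control", "flow_control"]

def compose(requirements):
    """Run-time step: fill each template slot with the first candidate
    satisfying the application's requirements."""
    graph = []
    for slot in TEMPLATE:
        for name, props in REGISTRY[slot]:
            if all(props.get(k) == v for k, v in requirements.get(slot, {}).items()):
                graph.append(name)
                break
        else:
            raise ValueError(f"no building block satisfies slot {slot!r}")
    return graph

# A reliable connection without flow control:
print(compose({"error_control": {"reliable": True}, "flow_control": {"flow": False}}))
# -> ['arq', 'none']
```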

When designing autonomous mobile robotic systems, there is usually a trade-off between the three opposing goals of safety, low cost, and performance.
Pursuing one of these design goals further usually leads to a degradation of one or even both of the other goals.
If, for example, the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system usually decreases, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing number of ambient and aware technology installations in today's environments, whether in private homes, offices, or factories.
This technology includes sensors that are suitable for assessing the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the abovementioned goal of establishing safe, performant and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
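As a minimal sketch of such probabilistic estimation (the sensor and motion model values below are purely illustrative, not the exact algorithms of this work), a histogram (Bayes) filter can fuse unreliable motion-detector readings into a belief over a person's position:

```python
# Minimal Bayes (histogram) filter sketch: estimate in which of N cells a
# person is located, given an unreliable binary motion detector per cell.
N = 5                       # discrete cells along a corridor
belief = [1.0 / N] * N      # uniform prior over the person's position

P_DETECT = 0.7              # P(sensor fires | person in cell)   (illustrative)
P_FALSE  = 0.1              # P(sensor fires | person elsewhere) (illustrative)

def predict(belief, p_stay=0.6):
    """Motion model: the person stays or moves to a neighboring cell."""
    new = [0.0] * N
    for i, b in enumerate(belief):
        new[i] += p_stay * b
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                new[j] += (1 - p_stay) / 2 * b
    s = sum(new)
    return [x / s for x in new]

def update(belief, cell, fired):
    """Measurement model: fold in one motion-detector reading."""
    new = []
    for i, b in enumerate(belief):
        if i == cell:
            like = P_DETECT if fired else 1 - P_DETECT
        else:
            like = P_FALSE if fired else 1 - P_FALSE
        new.append(like * b)
    s = sum(new)
    return [x / s for x in new]

belief = update(predict(belief), cell=2, fired=True)
print([round(b, 3) for b in belief])   # probability mass concentrates at cell 2
```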
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated in simulation experiments as well as in experiments with real hardware.

Synapses play a central role in the information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing nervous disease research. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data, however, make a direct visual examination infeasible.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining both parameter selection as well as data representation.

This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine-printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that such segmented information can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) lead to high performance, i.e., better than that of previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
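To make the idea of part-based features concrete, here is a minimal sketch in which local keypoint descriptors from a few labeled samples vote on the class of query keypoints; the pipeline, class names, and file names are hypothetical, not the thesis's exact method.

```python
# Hypothetical sketch of part-based information segmentation: keypoint
# descriptors from a handful of labeled training crops vote on the class
# of each query keypoint (not the thesis's exact pipeline).
import cv2
import numpy as np

orb = cv2.ORB_create()

def descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc

# A few training crops per class suffice in this scheme (hypothetical files).
train = {
    "signature": descriptors("signature_sample.png"),
    "printed_text": descriptors("text_sample.png"),
}

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def classify_keypoints(query_path):
    """Label each query keypoint with the class of its nearest training part."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    kps, desc = orb.detectAndCompute(query, None)
    labels = []
    for d in desc:
        best_class, best_dist = None, np.inf
        for cls, pool in train.items():
            m = matcher.match(d.reshape(1, -1), pool)
            if m and m[0].distance < best_dist:
                best_class, best_dist = cls, m[0].distance
        labels.append(best_class)
    # Keypoint positions plus labels can then be clustered into regions.
    return [(kp.pt, lbl) for kp, lbl in zip(kps, labels)]
```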
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is therefore necessary to first segment the signatures in the documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the detected signatures are false positives.
This shows that it is suitable for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floor plans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect text that touches graphical components in architectural floor plans and maps.
Evaluation of the method on a publicly available dataset of architectural floor plans shows that it extracts touching text components with a precision and recall of 71% and 95%, respectively.
The high recall means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels, ranging from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled datasets of such images. To address this, this thesis first presents a novel, generic method for the automatic ground-truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground-truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets, in English and Russian, shows that 99.98% of the images are correctly labeled in each case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. Evaluation results show that existing OCR systems, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed that is based on a variant of recurrent neural networks, Long Short-Term Memory (LSTM), and outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
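For concreteness, here is a minimal sketch of an LSTM-based recognizer trained with CTC loss, as commonly used for OCR; the PyTorch architecture and all hyperparameters below are illustrative, not the exact network of this thesis.

```python
# Illustrative LSTM + CTC text recognizer sketch (hyperparameters invented,
# not the thesis's exact architecture).
import torch
import torch.nn as nn

class LSTMRecognizer(nn.Module):
    def __init__(self, img_height=32, num_classes=80, hidden=128):
        super().__init__()
        # Each image column (height pixels) is one time step of the sequence.
        self.lstm = nn.LSTM(img_height, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes + 1)   # +1 for the CTC blank

    def forward(self, x):           # x: (batch, width, height)
        out, _ = self.lstm(x)
        return self.fc(out)         # (batch, width, classes + 1)

model = LSTMRecognizer()
ctc = nn.CTCLoss(blank=0)

batch, width, height = 4, 100, 32
images = torch.randn(batch, width, height)      # stand-in for real line images
targets = torch.randint(1, 81, (batch, 10))     # stand-in label sequences

log_probs = model(images).log_softmax(2).permute(1, 0, 2)   # (T, batch, C)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((batch,), width, dtype=torch.long),
           target_lengths=torch.full((batch,), 10, dtype=torch.long))
loss.backward()
```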
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. These include automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.

Stochastic Network Calculus (SNC) emerged from two branches in the late 90s: the theory of effective bandwidths and its predecessor, the Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and to support their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective to compute exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a system's maximal queue length that is violated only with a known, small probability.
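For illustration, a typical Chernoff-type backlog bound from MGF-based SNC has the following textbook shape, where A(s,t) denotes the arrivals and S(s,t) the service in (s,t], arrivals and service are assumed independent, and the free parameter θ > 0 is optimized afterwards (notation illustrative):

```latex
\[
  \mathbb{P}\bigl(q(t) > b\bigr)
  \;\le\; e^{-\theta b} \sum_{s=0}^{t}
  \mathbb{E}\bigl[e^{\theta A(s,t)}\bigr]\,
  \mathbb{E}\bigl[e^{-\theta S(s,t)}\bigr],
  \qquad \theta > 0 .
\]
```

The violation probability thus decays exponentially in the backlog level b.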
This work includes several contributions towards the theory of SNC. They are sorted into the following main contributions:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow one to derive performance bounds and to simplify complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows the user to implement their own functions, such as traffic models or analysis methods.
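A toy illustration of this two-step approach (an invented example, not the actual tool): a closed-form violation-probability bound with free parameter θ, here for i.i.d. Poisson arrivals at a constant-rate server, is then optimized numerically over θ.

```python
# Toy two-step illustration (not the thesis's tool): symbolic bound in theta,
# then numerical optimization. Discrete time, Poisson(lam) arrivals per slot,
# constant service of c per slot; bound on P(steady-state backlog > b).
import math
from scipy.optimize import minimize_scalar

lam, c, b = 0.8, 1.0, 30.0    # arrival rate, service rate, backlog level

def violation_bound(theta):
    """Upper bound on P(backlog > b) for a given theta > 0."""
    m = math.exp(lam * (math.exp(theta) - 1) - theta * c)   # E[e^{theta(a-c)}]
    if m >= 1.0:
        return 1.0            # outside the valid region: only the trivial bound
    return math.exp(-theta * b) * m / (1.0 - m)

res = minimize_scalar(violation_bound, bounds=(1e-6, 5.0), method="bounded")
print(f"optimal theta = {res.x:.3f}, P(q > {b:g}) <= {res.fun:.2e}")
```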
(3) The problem of the initial modeling step is addressed with the development of a statistical network calculus. In many applications, the properties of the involved elements are largely unknown. To that end, assumptions about the underlying processes are made and backed by measurement-based statistical methods. This thesis presents a way to integrate possible modeling errors into the bounds of SNC. As a byproduct, a dynamic view on the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although still valid for every time t, only hold for one time instance at a time. Sample-path bounds are usually only achieved by using Boole's inequality. This thesis presents an alternative method by adapting the theory of extreme values.
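Concretely, the union-bound route referred to above lifts a per-instant bound P(q(t) > b) ≤ ε to a sample-path statement only at the cost of a horizon-dependent factor:

```latex
\[
  \mathbb{P}\Bigl(\exists\, t \in \{0,\dots,T\} : q(t) > b\Bigr)
  \;\le\; \sum_{t=0}^{T} \mathbb{P}\bigl(q(t) > b\bigr)
  \;\le\; (T+1)\,\varepsilon .
\]
```

The extreme-value approach avoids this blow-up, which grows linearly in the time horizon T.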
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago, but it remained an open problem for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.

Mixed-signal systems combine analog circuits with digital hardware and software systems. A particular challenge is the sensitivity of analog parts to even small deviations in parameters or inputs. Parameters of circuits and systems such as process, voltage, and temperature are never accurate; we hence model them as uncertain values ('uncertainties'). Uncertain parameters and inputs can modify the dynamic behavior and lead to properties of the system that are not in specified ranges. For the verification of mixed-signal systems, the analysis of the impact of uncertainties on the dynamic behavior plays a central role.
Verification of mixed-signal systems is usually done by numerical simulation. A single numerical simulation run allows designers to verify a single parameter value out of what are often whole ranges of uncertain values. Multi-run simulation techniques such as Monte Carlo simulation and corner-case simulation, and enhanced techniques such as importance sampling or design of experiments, allow ranges to be verified, at the cost of a high number of simulation runs and with the risk of not finding potential errors. Formal and symbolic approaches are an interesting alternative, as such methods allow a comprehensive verification. However, formal methods do not scale well with heterogeneity and complexity. Also, formal methods do not support existing and established modeling languages. This complicates their integration into industrial design flows.
In previous work on the verification of mixed-signal systems, Affine Arithmetic is used for symbolic simulation. This allows combining the high coverage of formal methods with the ease of use and applicability of simulation. Affine Arithmetic accurately computes the propagation of uncertainties through mostly linear analog circuits and DSP methods. However, Affine Arithmetic can currently only compute with contiguous regions and does not permit the representation of, and computation with, discrete behavior, e.g., as introduced by software. This is a serious limitation: in mixed-signal systems, uncertainties in the analog part are often compensated by embedded software; hence, verification of system properties must consider both analog circuits and embedded software.
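Ordinary Affine Arithmetic, the starting point of this extension, can be sketched in a few lines (standard definitions; the class below is an illustrative toy, not the thesis's implementation). A quantity is represented as x0 + Σ xi·εi with noise symbols εi ∈ [−1, 1]; affine operations are exact, while nonlinear ones add a fresh noise symbol bounding the error.

```python
# Minimal sketch of ordinary Affine Arithmetic (standard definitions).
import itertools

_fresh = itertools.count()

class Affine:
    def __init__(self, center, terms=None):
        self.x0 = center
        self.terms = dict(terms or {})       # noise symbol id -> coefficient

    @classmethod
    def uncertain(cls, center, radius):
        """A new independent uncertain value, e.g. a circuit parameter."""
        return cls(center, {next(_fresh): radius})

    def __add__(self, other):                # affine operations are exact
        terms = dict(self.terms)
        for i, c in other.terms.items():
            terms[i] = terms.get(i, 0.0) + c
        return Affine(self.x0 + other.x0, terms)

    def __sub__(self, other):
        return self + Affine(-other.x0, {i: -c for i, c in other.terms.items()})

    def __mul__(self, other):                # nonlinear: fresh symbol bounds the error
        terms = {i: c * other.x0 for i, c in self.terms.items()}
        for i, c in other.terms.items():
            terms[i] = terms.get(i, 0.0) + c * self.x0
        terms[next(_fresh)] = self.radius() * other.radius()
        return Affine(self.x0 * other.x0, terms)

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.x0 - r, self.x0 + r)

# Correlations cancel where plain interval arithmetic would give [-0.2, 0.2]:
x = Affine.uncertain(1.0, 0.1)
print((x - x).interval())                    # -> (0.0, 0.0)
```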
The objective of this work is to provide an extension to Affine Arithmetic that allows symbolic computation also for digital hardware and software systems, and to demonstrate its applicability and scalability. Compared with related work and state of the art, this thesis provides the following achievements:
1. The thesis introduces extended Affine Arithmetic Forms (XAAF) for the representation of branch and merge operations.
2. The thesis describes arithmetic and relational operations on XAAF, and reduces over-approximation by using an LP solver.
3. The thesis shows and discusses ways to integrate XAAF into existing modeling languages, in particular SystemC. In this way, breaks in the design flow can be avoided.
The applicability and scalability of the approach is demonstrated by symbolic simulation of a Delta-Sigma Modulator and a PLL circuit of an IEEE 802.15.4 transceiver system.

Knowing the extent to which we rely on technology, one may think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, the reasons why program correctness is difficult often come hand in hand with solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, the instructions of each of a program's concurrent components are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC's atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a memory model implemented in SPARC and in Intel x86 multiprocessors that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. To be precise, due to TSO's unbounded store buffers, a program's semantics under TSO might be infinite. This, for example, turns reachability under SC (a PSPACE-complete task) into a non-primitive-recursive-complete problem under TSO.

This thesis develops verification techniques targeting TSO-relaxed programs. To be precise, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs as well as state-reducing methods for speeding up such heuristics. In a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO's buffers along instruction sequences suggested by an oracle. The oracle's deciding characteristic is that if it returns the empty sequence, then the program's SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers. Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space-reducing methods, including dynamic and Cartesian partial order reduction.
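The effect of TSO's store buffers can be made concrete with the classic store-buffering litmus test. The toy explicit-state exploration below (an illustration of the semantics, not the thesis's algorithms) finds the outcome r0 = r1 = 0, which is unreachable under SC:

```python
# Store-buffering litmus test: thread 0 runs "x = 1; r0 = y",
# thread 1 runs "y = 1; r1 = x". Under SC the outcome r0 = r1 = 0 is
# impossible; TSO's per-thread FIFO store buffers make it reachable.
from itertools import permutations

PROGS = [("x", "y"), ("y", "x")]          # per thread: (stored var, loaded var)
EVENTS = [(t, op) for t in (0, 1) for op in ("st", "ld", "fl")]

def run(schedule):
    mem = {"x": 0, "y": 0}
    buf = [[], []]                        # per-thread FIFO store buffers
    regs = [None, None]
    for tid, op in schedule:
        store_var, load_var = PROGS[tid]
        if op == "st":                    # store goes into the buffer...
            buf[tid].append((store_var, 1))
        elif op == "fl":                  # ...and hits memory only on flush
            var, val = buf[tid].pop(0)
            mem[var] = val
        else:                             # load: own buffer first, then memory
            own = [v for w, v in buf[tid] if w == load_var]
            regs[tid] = own[-1] if own else mem[load_var]
    return tuple(regs)

def respects_program_order(schedule):
    pos = {e: i for i, e in enumerate(schedule)}
    return all(pos[(t, "st")] < pos[(t, "ld")] and
               pos[(t, "st")] < pos[(t, "fl")] for t in (0, 1))

outcomes = {run(s) for s in permutations(EVENTS) if respects_program_order(s)}
print(outcomes)                           # contains (0, 0) under TSO
```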

Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7's runtime library adopted a new dual-pivot method that outperforms the old Quicksort implementation by far. The success of dual-pivot Quicksort is most likely due to its more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
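For reference, a compact sketch of the Yaroslavskiy-style dual-pivot partitioning scheme popularized by Java 7 (simplified; the library version adds pivot sampling and further engineering refinements of the kind analyzed in this dissertation):

```python
# Simplified dual-pivot (3-way) Quicksort: partition around pivots p <= q
# into elements < p, between p and q, and > q.
def dual_pivot_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:                  # ensure pivots p <= q
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, i, gt = lo + 1, lo + 1, hi - 1
    while i <= gt:
        if a[i] < p:                   # element belongs to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:                 # element belongs to the right part
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:                          # p <= a[i] <= q: middle part
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]        # move the pivots into place
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6]
dual_pivot_quicksort(data)
print(data)                            # -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```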

Nowadays, almost every newly developed system contains embedded systems for controlling system functions. An embedded system perceives its environment via sensors and interacts with it using actuators such as motors. For systems that might damage their environment through faulty behavior, a safety analysis is usually performed. Security properties of embedded systems, in contrast, are usually not analyzed at all. New developments in the areas of Industry 4.0 and the Internet of Things lead to more and more networking of embedded systems. Thereby, new causes for system failures emerge: vulnerabilities in software and communication components might be exploited by attackers to obtain control over a system. Through targeted actions, a system may also be brought into a critical state in which it might harm itself or its environment. Examples of such vulnerabilities, and also of successful attacks, have become known over the last few years.
For this reason, in embedded systems security has to be analyzed as well as safety, at least insofar as security issues may cause safety-critical failures of system components.
The goal of this thesis is to describe in one model how vulnerabilities from the security point of view might influence the safety of a system. The focus lies on the safety analysis of systems, so the safety analysis is extended to encompass security problems that may have an effect on the safety of a system. Component Fault Trees are very well suited to examining the causes of a failure and to finding failure scenarios composed of combinations of faults. The Component Fault Tree of an analyzed system is extended by additional Basic Events that may be caused by targeted attacks. Qualitative and quantitative analyses are extended to take the additional security events into account. Thereby, causes of failures that are based on safety as well as security problems may be found. Quantitative, or at least semi-quantitative, analyses allow security measures to be evaluated in more detail and their necessity to be justified.
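To illustrate the principle with a minimal sketch (the events, tree structure, and classes below are invented for this example, not the thesis's CFT tooling), a fault tree whose basic events include an attack event yields cut sets that separate pure-safety from attack-dependent failure scenarios:

```python
# Minimal fault-tree sketch with attack events: cut sets reveal failure
# scenarios involving attacks that a classical safety analysis would miss.
from itertools import product

class Event:
    def __init__(self, name, is_attack=False):
        self.name, self.is_attack = name, is_attack
    def cut_sets(self):
        return [frozenset([self])]

class Gate:
    def __init__(self, kind, *children):     # kind: "AND" or "OR"
        self.kind, self.children = kind, children
    def cut_sets(self):
        child_sets = [c.cut_sets() for c in self.children]
        if self.kind == "OR":                 # any child's cut set suffices
            return [cs for sets in child_sets for cs in sets]
        # AND: combine one cut set from every child
        return [frozenset().union(*combo) for combo in product(*child_sets)]

sensor_fault  = Event("sensor hardware fault")
watchdog_dead = Event("watchdog failure")
spoofed_msg   = Event("spoofed bus message", is_attack=True)

# Top event: unsafe motion if the watchdog is down AND either the sensor
# fails OR an attacker injects a spoofed message.
top = Gate("AND", watchdog_dead, Gate("OR", sensor_fault, spoofed_msg))

for cs in top.cut_sets():
    kind = "attack-dependent" if any(e.is_attack for e in cs) else "pure safety"
    print(f"{kind}: {sorted(e.name for e in cs)}")
```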
The approach was applied to several example systems: the safety chain of the off-road robot RAVON, an adaptive cruise control, a smart farming scenario, and a model of a generic infusion pump were analyzed. In all example analyses, additional failure causes were found that would not have been detected with traditional Component Fault Trees. The analyses also revealed failure scenarios that are caused solely by attacks and do not depend on failures of system components. These are especially critical scenarios, which should not occur in this way, as they are not found in a classical safety analysis. Thus, the approach demonstrates its additional benefit to a safety analysis, achieved by the application of established techniques with only little additional effort.

Distributed systems are omnipresent nowadays, and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to the certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, systems on chip, software-defined networking, data center networking, and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
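For orientation, the kind of bounds NC derives can be illustrated with textbook results for a token-bucket flow crossing rate-latency servers (standard algNC material, including the concatenation of servers by (min,+) convolution; not this thesis's new contributions):

```python
# Textbook deterministic NC bounds: a token-bucket flow alpha(t) = b + r*t
# crossing rate-latency servers beta_i(t) = R_i * max(t - T_i, 0).

def concat(servers):
    """(min,+) convolution of rate-latency curves is again rate-latency:
    rate = min of the rates, latency = sum of the latencies."""
    rates, latencies = zip(*servers)
    return (min(rates), sum(latencies))

def delay_bound(b, r, R, T):
    """Horizontal deviation between alpha and beta (requires r <= R)."""
    assert r <= R, "flow must be sustainable by the server"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Vertical deviation between alpha and beta."""
    return b + r * T

# A flow (burst 2 Mb, rate 5 Mb/s) crossing two servers in sequence:
R, T = concat([(10.0, 0.001), (20.0, 0.004)])       # (Mb/s, s)
print(f"end-to-end delay   <= {delay_bound(2.0, 5.0, R, T) * 1000:.1f} ms")
print(f"buffer requirement <= {backlog_bound(2.0, 5.0, R, T):.3f} Mb")
```

Convolving the servers first and applying the delay bound once is exactly the "pay bursts only once" principle of the algebraic analysis.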
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and, most significantly, algNC is extended with optimization principles. These allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.