Dagstuhl-Meeting 2007

GK 623: Performance Guarantees for Computer Systems
Computer Science Department, Saarland University
Speaker: Prof. Raimund Seidel

The research program of this research training group aims at a comprehensive and mathematically rigorous understanding of the concept of performance guarantee in the sense of predictable running time, provable correctness, and sufficient quality.

Provable correctness is the guarantee that a desired functionality is indeed achieved, e.g. the collision avoidance of a moving geometric body in a virtual-reality scene, or the achievement of specified properties of a simple reactive system or even of an operating system.

Predictable running time means that the resource usage of a program in terms of its computing time, storage requirements, and accesses to the various levels of the memory hierarchy can be quantified a priori as exactly as possible. Beyond the design of efficient algorithms, this aspect of performance also implies, for instance, guaranteed reaction times in real-time applications or consistently user-acceptable response times in information systems.

Sufficient quality means that the achieved functionality is appropriate to the requirements of the application at hand. This aspect of performance guarantee can, for instance, refer to approximation quality in optimization problems, to accuracy in graphics computations, or to the relevance of the answer documents in a web query.

The research in this research training group draws on the interplay and conflicts between these three main aspects of performance guarantees. Simple procedures are easy to verify, but most of the time they are inefficient. On the other hand, efficient procedures are often complex and very difficult to verify. The comprehensive verification of the correctness of a system is often bound to fail due to the horrendous computational expenditure. But the restriction to the certification of some critical properties, e.g.
the mutual exclusion of two trains using the same track section, can make the verification problem tractable.

Computer science has developed a formidable repertoire of methods for the analysis of the worst-case behaviour of single, stand-alone algorithms. But the situation is very different for complex systems in which many algorithms interact and where the efficiency of the individual algorithms depends on environmental parameters such as data distributions, load profiles, or resource contention. Quantitative statements about the reaction or response times of such complex systems are only possible with severe simplifications and with restricted system functionality and quality. Finally, the quality of a computation or a search necessarily depends on the resource expenditure and therefore on the guaranteeable running time.

Formal and Pervasive Verification: From Hardware to Operating Systems
Eyad Alkassar
GK 623 Performance Guarantees for Computer Systems, Saarland University

In today's world we rely on computer systems in nearly all security-relevant spheres of our daily life. From simple circuits in traffic lights to the complex control software of nuclear power plants, we believe in the infallibility of computers. But who or what ensures that these systems really do what they are intended to do? One way would be to test all possible cases, or at least a set of representative ones. With the complexity of programs increasing exponentially, this approach is futile if one wants to ensure that no bugs are left undetected. The alternative to testing is proving correctness in a mathematical sense, by first formulating accurate models of the real-world system and then verifying formal assertions over these models. This approach was pioneered by, among others, Dijkstra, Floyd, Lamport, and Hoare.

J. S. Moore, principal researcher of the CLI stack project, declared the formal verification of a practical computing system from transistors to software to be a grand challenge problem. A main goal of the Verisoft project is to meet this challenge. In the "academic system" subproject of Verisoft, a general-purpose computer system covering all layers from the gate-level hardware description to communicating concurrent programs is designed, implemented, and verified. The aim is to build accurate formal models without hidden assumptions that are modular and easily extendable. The verification should be pervasive throughout all layers of abstraction and take advantage of computer-aided verification tools.

Currently we are engaged in the verification of a small page-fault handler, written in a C-like language with inline assembly code. A page-fault handler realizes process virtualization and hence is an integral part of any modern operating system.
The correctness of the page-fault handler depends, among other things, on the correctness of the compiler and the correctness of the underlying hardware, in particular the memory management unit and the interrupt handling of page faults. Sometimes the page-fault handler has to swap out data from main memory to the hard disk. Hence, the hard disk and its interaction with the processor have to be modeled at the gate level as well as at the specification level. Furthermore, an elementary hard disk driver written in assembly was proven correct. This result will hopefully soon enable us to verify a small but complete OS kernel.
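To make the role of a page-fault handler concrete, here is a toy model of the virtualization it provides: a small number of physical frames backs a larger virtual address space, and evicted pages are swapped out to a simulated disk. All names here are illustrative; the verified Verisoft handler additionally models the MMU, interrupts, and the gate-level hard disk.

```python
class ToyPageFaultHandler:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = {}   # virtual page -> contents held in main memory
        self.disk = {}     # virtual page -> contents swapped out to disk
        self.lru = []      # pages in main memory, least recently used first
        self.faults = 0

    def access(self, page):
        if page not in self.frames:                 # page fault
            self.faults += 1
            if len(self.frames) >= self.num_frames:
                victim = self.lru.pop(0)            # evict the LRU page
                self.disk[victim] = self.frames.pop(victim)   # swap out
            # swap the page in from disk, or allocate a fresh zeroed page
            self.frames[page] = self.disk.pop(page, 0)
        else:
            self.lru.remove(page)
        self.lru.append(page)
        return self.frames[page]

h = ToyPageFaultHandler(num_frames=2)
for p in [1, 2, 1, 3, 2]:
    h.access(p)
print(h.faults)  # → 4
```

The correctness statement proved in Verisoft is, of course, about the real handler's interaction with hardware, not about such a high-level simulation.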

GRK 623: Leistungsgarantien für Rechnersysteme

Realtime Ray Tracing Techniques for Triangular Dynamic Meshes and Volumetric Datasets
Heiko Fridrich
GK 623 Performance Guarantees for Computer Systems, Saarland University

Ray tracing is a well-known image synthesis technique for rendering 2D images from 3D scenes. It solves many of today's problems that are inherent in the rendering algorithms implemented in current graphics boards, including the optically correct computation of shadows, reflections, and photorealistic images. Unfortunately, ray tracing has long been considered too slow for realtime applications, and thus only little research had been done. In recent years, due to new algorithms and optimized implementations, ray tracing has sparked new interest in the research community, since realtime rendering has become a reality.

1 Realtime Ray Tracing Techniques for Triangular Dynamic Meshes

For realtime performance, ray tracing requires special index structures (IS) that allow for efficient visibility queries. A visibility query answers, for a given point and direction, what can be seen next. Unfortunately, the construction of these data structures is a computationally expensive task, and all realtime rendering effort is a vain endeavor if we have to rebuild these IS for each new frame when objects deform. In our research we found new IS and algorithms that allow us to ray trace animated sequences without the need to rebuild the IS. The basic idea is to analyse the sequence of deforming objects in a preprocess, compute a set of clusters with coherent motion, subtract the coherent motion from each cluster, and finally capture the residual motion in a special IS. Our research, in conjunction with other work at the UdS Graphics Chair, MPII AG4, and the University of Utah, solves the problem of ray tracing dynamic scenes to a large extent.
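At the bottom of every visibility query sits a ray-primitive intersection test. The following sketch shows the standard Möller-Trumbore ray-triangle test; the job of an index structure is to ensure that only a few triangles ever reach this innermost routine.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Return the distance t to the hit point, or None if the ray misses."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                    # ray is parallel to the triangle plane
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None      # hit must lie in front of the origin

# a ray along +z hits a triangle in the z = 0 plane at distance 1
print(ray_triangle((0, 0, -1), (0, 0, 1), (-1, -1, 0), (1, -1, 0), (0, 1, 0)))
# → 1.0
```

A visibility query then amounts to taking the minimum t over the candidate triangles the index structure reports.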
2 Realtime Ray Tracing Techniques for Volumetric Datasets

My second research interest focuses on the use of ray tracing techniques for volumetric datasets. Volumetric datasets do not describe only a single surface but a complete volume. Most of these datasets originate from scanners such as CT or from numerical simulations, and consist of points in space that describe a physical property. In our research we focus on the realtime rendering of surfaces and semi-transparent images with just a single IS. In the past, a different IS had to be exploited for each render mode (surface/semi-transparent) for efficient rendering. Another problem is that volume datasets tend to be very large, up to several gigabytes in size. A part of our research aims to render such massive datasets on commodity PCs with very compact representations of the necessary data structures and optimized out-of-core memory techniques. Finally, we try to minimize the enormous memory bandwidth that is inherent in volume rendering by finding appropriate linear functions of the volumetric space.

Static Analysis of Caches: Performance, Predictability, Price
Daniel Grund
GK 623 Performance Guarantees for Computer Systems, Saarland University

Our work addresses the use of processor caches in hard real-time systems. Embedded systems as they occur in application domains such as automotive, aeronautics, and industrial automation often have to satisfy hard real-time constraints. Off-line guarantees on the worst-case execution time of each task have to be derived using safe methods. Such methods must be conservative, i.e. they must statically overapproximate the dynamic behaviour of a task on all possible inputs and hardware states. Caches, deep pipelines, and all kinds of speculation are increasingly used in such systems to improve average-case performance. But at the same time they increase the variability of the execution times of instructions due to the possibility of timing accidents with high penalties: a cache miss may take 100 times as long as a cache hit.

The average-case performance of a cache can be measured by its miss rate, the percentage of accesses on which the requested data is not stored in the cache. The notion of predictability captures how well a system lends itself to static analysis and how tightly the obtained bounds (overapproximations) can be determined. For caches, the cache replacement policy (which determines the element to replace upon a cache miss) dominates the predictability of the cache. And of course the price of a policy in terms of hardware implementation costs differs from case to case. Here, the number of status bits necessary to keep track of the replacement order is a very good measure.

We defined a large class of cache policies (including the most prominent ones) in which a policy can be modeled by an ordered set of permutations. Using this representation we are able to compute values for the above metrics and estimates for the performance, which depends on a characterization of the executed software.
Computing values for the predictability metrics reduces to shortest- and longest-path problems. Deriving the number of necessary status bits is essentially the problem of determining the group order of a permutation group. To estimate miss rates we use a Markov chain model whose steady-state probabilities have to be determined. All these computations are (or will be) integrated into a systematic search procedure to find policies meeting given requirements. The huge number of policies in the class we consider demands several optimizations: e.g. determining a bound on the value of a metric even though the set of permutations is only partially known, or the iterative construction and solution of the Markov model, which prevents unnecessary states from being considered.
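As a toy illustration of the permutation view of replacement policies and of the status-bit measure, the sketch below models one k-way cache set under LRU: the state is an ordering of the cached blocks, a hit applies a fixed permutation (move to front), and a miss inserts at the front and evicts the last block. The block universe and associativity are arbitrary illustrative values.

```python
import math
import random

def lru_access(state, block, assoc):
    hit = block in state
    if hit:
        state.remove(block)        # hit permutation: move the block to front
    elif len(state) == assoc:
        state.pop()                # miss: evict the least recently used block
    state.insert(0, block)
    return hit

def status_bits(k):
    # one hardware state per possible replacement order: ceil(log2(k!)) bits
    return math.ceil(math.log2(math.factorial(k)))

random.seed(0)
state, misses, n = [], 0, 10000
for _ in range(n):                 # 8 blocks mapping to one 4-way set
    if not lru_access(state, random.randrange(8), assoc=4):
        misses += 1
print(status_bits(4))  # → 5
# under uniform random accesses the miss rate is about 4/8 = 0.5
```

Measuring the miss rate by Monte Carlo simulation, as above, is only a stand-in for the steady-state computation on the Markov chain model mentioned in the text.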

Theoretical Analysis of Evolutionary Algorithms
Edda Happ
GK 623 Performance Guarantees for Computer Systems, Saarland University

Currently, I work in the field of theoretical analysis of evolutionary algorithms. The idea behind evolutionary algorithms is to use principles inspired by evolution to create algorithms for optimization problems. The principles of evolution are:

1. At any time there exists a number of species on earth.
2. The species change over time, either by mutation or by recombination. Mutation means that through random changes an individual slowly evolves into something different, while recombination (or crossover) is the combination of the genetic material of two different species into a new evolving species.
3. Depending on how well a species is adapted to its environment, it either survives or dies out. This is Darwin's principle of the survival of the fittest.

Taking these principles and using them for optimization algorithms translates as follows:

1. Keep a population of candidate solutions.
2. Apply either mutation or crossover to these candidate solutions to generate new candidate solutions.
3. Depending on the fitness of the new candidate solutions, they can replace some of the old solutions.

In creating such an evolutionary algorithm, the important tasks are to find a good representation of the candidate solutions, to devise reasonable mutation and crossover operators, and to decide on a selection method appropriate to the problem. Since evolutionary algorithms are easy to design (even without deep knowledge of the problem to be solved), easy to implement, and highly reusable, they have been implemented and applied a lot. However, the theoretical analysis of these algorithms is not as well developed: it involves elaborate probability theory and is still in its infancy.
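The scheme above can be sketched in its simplest instance, the (1+1) evolutionary algorithm on the OneMax toy problem (maximize the number of ones in a bit string): a population of one, mutation by independent bit flips, and survival of the fitter. This is a standard object of theoretical runtime analysis, not an algorithm from the work described here.

```python
import random

def one_plus_one_ea(n, max_steps=100000, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for step in range(max_steps):
        if sum(x) == n:
            return step                       # optimum reached
        # mutation: flip each bit independently with probability 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):                  # selection: keep the fitter one
            x = y
    return max_steps

print(one_plus_one_ea(20) < 100000)  # → True (expected O(n log n) steps)
```

Runtime analyses of evolutionary algorithms typically bound the expected number of such steps until the optimum is first reached.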
My current research includes a tight analysis of an evolutionary algorithm for the all-pairs shortest-paths problem as well as for the single-source shortest-path problem.

Hardness of Algebraic Problems
Christian Hoffmann
GK 623 Performance Guarantees for Computer Systems, Saarland University

Algebraic complexity theory aims at classifying the hardness of arithmetic or algebraic problems. A common formalization of such problems and the corresponding algorithms are multivariate polynomials: in the beginning only the input (indeterminates X1, ..., Xn) is available, and each arithmetic operation (+, -, *, ...) produces a new polynomial from two of the already available polynomials. We study models computing multivariate polynomials such as arithmetic circuits, arithmetic formulas, and commutative algebraic branching programs (ABPs). An important question is: what size must an arithmetic circuit (arithmetic formula, ABP, resp.) have to compute some specific given polynomial? The search for lower bounds of this kind is a driving problem for the whole field. We separated the ABP model from the arithmetic circuit model, giving an Ω(n^2) bound for a polynomial for which arithmetic circuits of size O(n log n) are known. An outstanding open problem is to give superpolynomial lower bounds for any general model.

Polynomial identity testing is another important problem of our research. It asks whether the polynomial computed by some arithmetic circuit is the zero polynomial. Whereas randomized polynomial-time algorithms are available for this problem, all known deterministic algorithms take exponential time. Recent research has revealed a connection between polynomial identity testing and superpolynomial lower bounds in algebraic models: if we had a superpolynomial lower bound on the arithmetic circuit size of some multilinear polynomial, we could deterministically test for polynomial identity in subexponential time. This result is true for the arithmetic circuit model and heavily uses the power of this model.
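The randomized algorithms mentioned above work in the Schwartz-Zippel style: evaluate the circuit at random points of a large field; a nonzero polynomial of degree d vanishes at a random point with probability at most d/P. The sketch below treats the "circuit" as a black-box evaluation function, which is all the test needs.

```python
import random

P = (1 << 61) - 1                  # a large Mersenne prime

def is_probably_zero(circuit, num_vars, trials=20, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(P) for _ in range(num_vars)]
        if circuit(point) % P != 0:
            return False           # witness found: certainly not zero
    return True                    # zero with very high probability

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; drop the 2xy term and it isn't
zero = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
nonzero = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - v[1] ** 2
print(is_probably_zero(zero, 2), is_probably_zero(nonzero, 2))  # → True False
```

The open question is precisely whether this random evaluation can be replaced by a deterministic one without an exponential blow-up.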
By giving a polynomial-size construction for division in the ABP model, we were able to establish a similar result for the weaker ABP model: if we had a superpolynomial lower bound on the ABP size of some multilinear polynomial, we could deterministically solve the polynomial identity testing problem in subexponential time for multilinear polynomials given by ABPs.

Graph polynomials encode much information about the underlying graph. Another topic of our current research is to study how hard it is to evaluate these polynomials. One approach is to find a point whose evaluation gives a well-known hard-to-compute property of the graph (for example the number of 3-colorings) and to reduce the evaluation at other points to the evaluation at this hard-to-evaluate point.

Exploiting User Search Behavior for Information Retrieval
Julia Luxenburger
GK 623 Performance Guarantees for Computer Systems, Saarland University

The analysis of observed user search and browsing behaviour is a valuable information source for many aspects of web search result ranking. From the monitoring of user interactions with a search engine we are able to draw conclusions of different flavors. The sequence of queries a user subsequently poses allows us to group related queries serving the same information need and to learn query reformulation patterns. The query-result pages that were clicked on, and those that were not clicked on after a user saw the summary snippets of the top-10 results, lead to inferences on the relevance (respectively irrelevance) of result pages to their corresponding queries, as well as on the general quality of these pages. The analysis of complete user search sessions enables us to identify frequent user interaction patterns, as well as deficiencies of state-of-the-art web search 1.

One focus of our work in this area is the incorporation of implicit user feedback into Web link analysis, which constitutes an important ranking feature. State-of-the-art authority analysis methods on the Web linkage graph, such as the PageRank 2 algorithm, are based on the assumption that a web page author endorses a Web page when creating a hyperlink to that page. This kind of intellectual user input can be generalized to a user endorsing a query-result page when visiting that page, and moreover disapproving of a result page when preferring a lower-ranked result page. We study link analysis methods 3 that enhance PageRank by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even perceived as spam.
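The general idea of folding feedback into link analysis can be sketched with plain power-iteration PageRank, where pages endorsed by clicks receive extra teleportation mass. This is only an illustration of the principle; the actual models of this work are considerably richer, and the graph and bias values below are made up.

```python
def pagerank(links, bias=None, damping=0.85, iters=100):
    nodes = list(links)
    n = len(nodes)
    bias = bias or {v: 1.0 / n for v in nodes}
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) * bias[v] for v in nodes}
        for v in nodes:
            for w in links[v]:                 # distribute rank along links
                new[w] += damping * rank[v] / len(links[v])
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["a"]}
uniform = pagerank(links)
# pretend click logs show users strongly endorsing page "b"
biased = pagerank(links, bias={"a": 0.1, "b": 0.8, "c": 0.1})
print(biased["b"] > uniform["b"])  # → True
```

Negative feedback can analogously be modeled by reducing a page's share of the teleportation mass.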
Our methods use various novel forms of Markov (respectively Markov reward) models whose states correspond to users and queries in addition to Web pages, and whose links also reflect the relationships derived from query-result clicks, query refinements, and explicit ratings. Our experiments, based on real-life query-log and click-stream traces on an excerpt of the English version of the Wikipedia encyclopedia, indicate the potential of our methods.

1 N. Kammenhuber, A. Feldmann, J. Luxenburger, and G. Weikum: Web Search Clickstreams. IMC 2006.
2 S. Brin and L. Page: The Anatomy of a Large-Scale Hypertextual Web Search Engine. WWW 1998.
3 J. Luxenburger and G. Weikum: Exploiting Community Behavior for Enhanced Link Analysis and Web Search. WebDB 2006.

Prediction and Alignment of RNA Secondary Structures
Mathias Möhl
GK 623 Performance Guarantees for Computer Systems, Saarland University

The genetic information of living organisms is encoded in deoxyribonucleic acid (DNA). In order to read this information, the organism builds ribonucleic acids (RNAs) that use the DNA as a template. These RNA molecules are then either translated into proteins or perform their own individual function within the cell. Similar to DNA, an RNA molecule can be abstracted as a sequence of bases. While in the case of DNA this sequence represents the entire information captured in the molecule, the function of RNA does not depend solely on the sequence itself. While DNA molecules form double-stranded helices, the single-stranded RNA molecules form complex secondary structures ("shapes", so to speak). Since the secondary structure determines the function of an RNA molecule to a large extent, there is an active field of research in bioinformatics concerned with RNA secondary structures.

While determining the base sequence is a well-established method in biology, determining the secondary structure experimentally is much more difficult. Therefore, several algorithms have been developed that predict likely secondary structures for a given RNA sequence. Another important task is the alignment of RNA structures, that is, the identification of similar regions among a set of given RNAs. Also of practical interest is a combined form of structure prediction and alignment that predicts similar secondary structures for a set of RNA sequences and aligns them at the same time. Most existing algorithms in this field are limited to structures that do not contain so-called pseudoknots. This restriction allows the use of efficient dynamic programming algorithms, since pseudoknot-free structures can be recursively decomposed into smaller fragments.
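The recursive decomposability of pseudoknot-free structures is exactly what dynamic programming exploits. The classic Nussinov algorithm below maximizes the number of complementary base pairs, a toy stand-in for the energy-based prediction methods used in practice: either the last base j stays unpaired, or it pairs with some earlier base k, splitting the problem into two independent subproblems.

```python
def nussinov(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: max pairs in seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case 1: j stays unpaired
            for k in range(i, j - min_loop):  # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))  # → 3 (a hairpin with a three-pair stem)
```

A pseudoknot is precisely a set of crossing pairs that breaks this nested decomposition, which is why admitting pseudoknots requires more complex recursion schemes.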
While structure prediction for arbitrary pseudoknots has been shown to be NP-hard in many settings, more complex recursion schemes make it possible to admit certain limited kinds of pseudoknots 1. As an alternative, I explore in my PhD the development of algorithms that are not limited in the kind of pseudoknots, but in the number of pseudoknots that they can handle. This decision is motivated by the fact that no biologically motivated restrictions on the kinds of pseudoknots occurring in nature are known, whereas in practice pseudoknots occur rarely. Currently I am developing an algorithm for the alignment of RNAs with arbitrary pseudoknots whose worst-case complexity is exponential, but which is fixed-parameter tractable for a parameter that depends only on the number and size of the pseudoknots. My long-term goal is to develop similar algorithms for structure prediction and for the combined version of both problems.

1 T. Akutsu: Dynamic programming algorithms for RNA secondary structure prediction with pseudoknots. Discrete Applied Mathematics 104, 2000.

Information Extraction for Ontology Learning
Fabian M. Suchanek
GK 623 Performance Guarantees for Computer Systems, Saarland University

Today's search engines are very successful at finding Web pages that contain certain keywords. For example, it is very easy to find a Web page about London, the capital of the United Kingdom. However, when it comes to more complex queries, these search engines fail. For example, it is close to impossible to ask Google how many other cities in the world are also called London. To answer such queries, we need a huge structure of world knowledge: an ontology. Technically, an ontology is a graph in which the nodes are entities (e.g. London/UK or the concept of a city) and the edges are semantic relations between them (e.g. locatedin or isa). The goal of my PhD is to create a huge high-quality ontology.

As a first step, we have extracted ontological knowledge from the large online encyclopedia Wikipedia. Unlike previous approaches, our approach exploits the category system of Wikipedia. For example, the page about London/UK is in the category "Cities in the UK". This tells us (1) that London/UK is a city and (2) that London/UK is located in the United Kingdom. By sophisticated heuristics, we have been able to combine these ontological data with data from WordNet, the semantic lexicon of the English language. The result is YAGO 1, the largest formal ontology available today. The semantics of YAGO is given by logical axioms. We have shown that the consistency of YAGO is decidable, that its deductive closure is unique and finite, and that its canonical base (the smallest equivalent sub-ontology) is also unique and finite. YAGO knows 4 cities called London.

As a second step, we focused on extracting knowledge from natural language text documents. Our system, LEILA 2, is given a semantic relationship (such as locatedin) and a corpus of documents.
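A toy version of the category heuristic from the first step: a category name such as "Cities in the UK" yields both an isa fact (from the pluralized head noun) and a locatedin fact (from the complement of the preposition). The real YAGO heuristics involve proper noun-phrase parsing and WordNet mapping; the crude regular expression and singularizer below are purely illustrative.

```python
import re

def singularize(word):
    word = word.lower()
    if word.endswith("ies"):
        return word[:-3] + "y"    # cities -> city
    if word.endswith("s"):
        return word[:-1]
    return word

def facts_from_category(page, category):
    m = re.match(r"(\w+) in (.+)", category)
    if not m:
        return []
    concept, place = m.groups()
    return [(page, "isa", singularize(concept)),
            (page, "locatedin", place)]

print(facts_from_category("London/UK", "Cities in the UK"))
# → [('London/UK', 'isa', 'city'), ('London/UK', 'locatedin', 'the UK')]
```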
In a first phase, it finds text patterns that express the semantic relationship. For example, the pattern "X is located in Y" expresses the locatedin relationship. In a second phase, LEILA generalizes these patterns by machine learning techniques. In a third phase, it finds instances of the generalized patterns in the corpus and extracts new pairs of entities that stand in the relationship. Different from previous approaches, LEILA uses deep syntactic patterns instead of surface text patterns, which makes it more robust to variations of the patterns. Thereby, LEILA consistently outperforms previous approaches.

In a third step, LEILA and YAGO shall be combined in a feedback loop: LEILA shall add new facts to YAGO. In turn, YAGO shall help LEILA to extract new knowledge.

1 Fabian M. Suchanek, Gjergji Kasneci, Gerhard Weikum: YAGO: A Core of Semantic Knowledge. WWW. See the Web interface at
2 Fabian M. Suchanek, Georgiana Ifrim, Gerhard Weikum: Combining linguistic and statistical analysis to extract relations from Web documents. KDD

Polyhedral Vertex Enumeration
Hans Raj Tiwary
GK 623 Performance Guarantees for Computer Systems, Saarland University

As part of the GK, my research has concentrated primarily on polyhedral vertex enumeration, with an occasional detour into unrelated short-term problems. The main object of my research, a polytope, can be described as the convex hull of a minimal finite set V of points (the vertices of the polytope) in R^d, or as the intersection of a minimal finite set H of halfspaces (the facets), each described by an inequality. The problem of vertex enumeration asks one to enumerate all elements of V given the elements of H. The number of vertices can be anywhere between Ω(|H|^(2/d)) and O(|H|^(d/2)). This wide range of output sizes suggests that we should look for algorithms that are polynomial in the input and output size of the problem. In the case of non-degenerate input, i.e. polytopes for which no vertex is contained in more than d facets and no facet contains more than d vertices, output-sensitive polynomial algorithms exist, but degeneracy in the input is hard to handle. Bad examples are known for all existing algorithms that handle degeneracy.

A problem somewhat related to vertex enumeration is enumerating the facets of the Minkowski sum of two polytopes given by their facets. This problem comes up frequently in computational geometry and can be used as a subroutine to enumerate the vertices of a polytope. One of my recent results 1 is that it is not possible to enumerate the facets of the Minkowski sum of two H-polytopes in polynomial time unless P = NP. This work has been accepted for publication in the proceedings of the 23rd Symposium on Computational Geometry.

Currently I am studying certain geometric properties of polytopes and their polar duals. In particular, I am studying the sum of the solid angles at the vertices of a polytope and how it relates to the sum of the solid angles at the vertices of the polar dual.
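For intuition, vertex enumeration can be done by brute force in 2D: every vertex of a polygon is the intersection of two facet lines that also satisfies all the remaining inequalities a·x <= b. The output-sensitive algorithms and degeneracy issues discussed above only become interesting in higher dimensions, where this kind of exhaustive enumeration is hopeless.

```python
from itertools import combinations

def vertices(halfspaces, eps=1e-9):
    verts = set()
    for (a1, b1), (a2, b2) in combinations(halfspaces, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                            # parallel facet lines
        x = (b1 * a2[1] - b2 * a1[1]) / det     # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + eps for a, b in halfspaces):
            verts.add((round(x, 9), round(y, 9)))
    return sorted(verts)

# the unit square: -x <= 0, -y <= 0, x <= 1, y <= 1
square = [((-1, 0), 0), ((0, -1), 0), ((1, 0), 1), ((0, 1), 1)]
print(vertices(square))  # → [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```

Degeneracy already shows up here: if more than two facet lines pass through the same point, the same vertex is discovered repeatedly, which is what makes output-sensitive enumeration hard in general.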
I have also worked on using algebraic techniques such as the Fast Fourier Transform (FFT) to solve certain geometric questions, such as computing the centroid of the vertices of an arrangement of lines 2. The work has been accepted for publication at the Workshop on Algorithms and Data Structures. The use of non-geometric techniques to solve geometric problems is interesting in itself and might find use in solving further interesting problems such as 3-SUM, where, given a set of numbers, one would like to quickly (in subquadratic time) determine whether any three numbers sum to zero.

1 Hans Raj Tiwary: On the Hardness of Minkowski Addition and Related Operations. Accepted for publication in the proceedings of the 23rd Symposium on Computational Geometry.
2 Deepak Ajwani, Saurabh Ray, Raimund Seidel, Hans Raj Tiwary: On Computing the Centroid of the Vertices of an Arrangement and Related Problems. Accepted at the Workshop on Algorithms and Data Structures.

Approximation and I/O-Efficient Algorithms
Vitaly Osipov
GK 623 Performance Guarantees for Computer Systems, Saarland University

Two research areas of particular interest to me are approximation algorithms and algorithms for large data sets. As for approximation algorithms, I continue working on a problem that was partially solved in my master's thesis 1, namely the Maximum Weight Planar Subgraph problem. Since this problem is NP-hard, we are interested in a polynomial-time approximate solution. The general idea of the algorithm is to construct a subgraph of a given graph whose cycles (if any) have length at most three, i.e. a triangular structure. Observe that such a graph is necessarily planar. The former approach greedily constructs such a subgraph, producing a suboptimal solution. We found a new algorithm that produces such a triangular structure of almost maximum weight. We also analyzed our algorithm and proved that in several special cases, most notably when the maximum weight planar subgraph is outerplanar, our algorithm is considerably better than the former approach. The analysis of the algorithm in the general case is still open, and of great interest to me.

Algorithms for large data sets are based on a different computational model. Instead of minimizing the running time of the algorithm in terms of the number of instructions, as in the RAM model, we concentrate on minimizing the number of input/output operations between the levels of the memory hierarchy, the so-called External Memory model. In this area we designed, implemented, and conducted an experimental study of an I/O-efficient Breadth-First Search algorithm 2,3. As part of the system we use pipelining, a technique originating from the database community. With the growing popularity of multicore processors, we are interested in extending our sequential approach to the framework of parallel algorithms.
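The external-memory cost model can be illustrated with textbook BFS instrumented by a toy I/O counter: the cost is the number of disk blocks read, not the number of edges scanned, and fetching an adjacency list of degree deg costs ceil(deg/B) block reads. Real external-memory BFS algorithms reorganize the computation so these accesses are batched rather than random; the block size B below is an arbitrary toy value.

```python
from collections import deque

B = 4  # edges per disk block

def bfs_with_io(adj, source):
    dist = {source: 0}
    io_reads = 0
    queue = deque([source])
    while queue:
        v = queue.popleft()
        io_reads += -(-len(adj[v]) // B)        # ceil(deg/B) block reads
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist, io_reads

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
dist, io = bfs_with_io(adj, 0)
print(dist[3], io)  # → 2 4
```

On a massive graph, the naive version above performs one essentially random block read per vertex, which is exactly the behavior I/O-efficient BFS is designed to avoid.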
The problems one faces there include scheduling, high scalability, fast response, and dynamic adaptation to changes in the system. I also hope that the approach we develop in the external memory setting will have an impact on the original database applications.

1 Vitaly Osipov: A Polynomial Time Randomized Parallel Approximation Algorithm for Finding Heavy Planar Subgraphs. Master's thesis under the supervision of Prof. Dr. Markus Blaeser, Saarland University.
2 Deepak Ajwani, Ulrich Meyer, Vitaly Osipov: Improved external memory BFS implementations. Workshop on Algorithm Engineering and Experiments (ALENEX 07), New Orleans, USA. Also accepted at the DIMACS implementation challenge on shortest paths, Piscataway, NJ, USA.
3 Deepak Ajwani, Ulrich Meyer, Vitaly Osipov: Breadth first search on massive graphs. To appear in DIMACS Book Series.

Shape Recovery from Shading Information
Oliver Vogel
GK 623 Performance Guarantees for Computer Systems, Saarland University

Computer Vision is a discipline of Computer Science. Roughly, its goal is to teach computers to see. One major area of Computer Vision is retrieving shape information from images, i.e. finding the right 3D model to match an input image. This is my area of study. Currently I focus on so-called shape-from-shading methods, which rely on the shading information of one single image as input. There are many more ways of recovering shape information, such as using multiple images from different viewpoints (Shape from Stereo), different lighting of a scene (Photometric Stereo), or using texture information (Shape from Texture).

Figure 1: Shape from Shading: The Mozart input image (left) and the corresponding shape (right).

Existing shape-from-shading methods 1, 2 suffer from several problems: most of them make simplifying assumptions that are either unrealistic or very limiting, many are very slow and complicated, and some do not work at all. Current state-of-the-art algorithms 3 are limited to certain lighting conditions and to Lambertian reflectance properties of surfaces. I specialize in methods based on the calculus of variations and partial differential equations. In my recent work, I managed to overcome several issues with classic algorithms 4. My main focus of research lies in improving the accuracy and computation time of algorithms while keeping them as simple as possible. A further step will be to extend the model to handle more natural and general settings. To this end, it might be feasible to combine various shape-from-x techniques to achieve high-quality results.

1 B.K.P. HORN AND M.J. BROOKS: Shape from Shading. MIT Press.
2 R. ZHANG, P. TSAI, J.E. CRYER, AND M. SHAH: Shape from Shading: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(8) (1999).
3 E. PRADOS AND O. FAUGERAS: Shape from Shading: A Well-Posed Problem? IEEE Conference on Computer Vision and Pattern Recognition (2005).
4 O. VOGEL, A. BRUHN, J. WEICKERT, AND S. DIDAS: Direct Shape-from-Shading with Adaptive Higher Order Regularisation. In Scale-Space and Variational Methods in Computer Vision (SSVM 2007), Ischia, Italy, May/June 2007. Lecture Notes in Computer Science, Springer, Berlin, accepted for publication.
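The variational approach mentioned above can be illustrated with a toy implementation. The following Python sketch is not the author's algorithm: it assumes Lambertian reflectance with the light source in the viewing direction, uses hypothetical function names, and minimizes a data term plus a smoothness regularizer by slow numerical gradient descent with backtracking, purely for clarity.

```python
import numpy as np

def shade(z):
    # Lambertian shading with the light in the viewing direction:
    # R(p, q) = 1 / sqrt(1 + p^2 + q^2), with p = dz/dx, q = dz/dy.
    p = np.gradient(z, axis=1)
    q = np.gradient(z, axis=0)
    return 1.0 / np.sqrt(1.0 + p ** 2 + q ** 2)

def energy(z, img, alpha):
    # Data term (match the observed shading) plus smoothness regularizer.
    p = np.gradient(z, axis=1)
    q = np.gradient(z, axis=0)
    return np.sum((shade(z) - img) ** 2) + alpha * np.sum(p ** 2 + q ** 2)

def shape_from_shading(img, alpha=0.1, step=0.1, iters=10, eps=1e-4):
    # Gradient descent on the energy; backtracking guarantees the
    # energy never increases.  Numerical gradients keep the sketch short.
    z = np.zeros_like(img, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(z)
        for idx in np.ndindex(z.shape):      # per-pixel numerical gradient
            z[idx] += eps
            e_plus = energy(z, img, alpha)
            z[idx] -= 2 * eps
            e_minus = energy(z, img, alpha)
            z[idx] += eps
            grad[idx] = (e_plus - e_minus) / (2 * eps)
        s, e0 = step, energy(z, img, alpha)
        while s > 1e-8:                      # backtracking line search
            if energy(z - s * grad, img, alpha) < e0:
                z = z - s * grad
                break
            s *= 0.5
    return z
```

A real method of this kind would derive the Euler-Lagrange equations of the energy and solve them with an efficient numerical scheme rather than descend numerically.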

Program Behavior Analysis
Andrzej Wasylkowski
GK 623 Performance Guarantees for Computer Systems, Saarland University

Programs usually follow many implicit programming rules or patterns. Programmers maintaining a program are typically not aware of all such patterns and thus introduce defects that have to be corrected later. Making them aware of the patterns and of the places in the program code that violate them can help them write correct code and fix already existing defects. My research focuses on patterns representing the sequencing of method calls in Java programs, such as "hasNext() is called before next()" 1. I discover such patterns by statically analyzing program code and producing so-called object usage models. These models are finite state automata with anonymous states and transitions labeled with method calls. An example of such a model for an iterator object is shown in Figure 1.

Figure 1: Typical iterator model (transitions labeled iter.hasNext() and iter.next()).

Each model represents one abstract object, such as a method parameter or an object created via new. Based on the models, we can discover sequencing constraints on method calls, such as the one expressed above. Frequently occurring constraints become patterns, and methods that violate them are reported as likely to be defective. My approach is effective and scales to industrial-sized applications. Mining models from AspectJ 2, an aspect-oriented extension of the Java programming language with over 36,000 methods defined in almost 3,000 classes, produces over 250,000 models in less than 14 minutes. Finding the patterns stemming from those models and the methods that violate them takes less than 4 minutes. I have applied the method outlined above to five open-source projects and found five previously unknown bugs along with over thirty suggestions that improve code quality.
I expect object usage models also to be helpful in explaining to programmers how to use classes they are new to, and I plan to perform user studies to verify this expectation. I also want to investigate ways to mine and put to use more complicated patterns than the ones I am currently using.

1 A. WASYLKOWSKI: Mining Object Usage Models. Doctoral Symposium, 29th International Conference on Software Engineering (ICSE 2007).
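The mining step can be illustrated by a much-simplified sketch. The Python code below is an illustration, not the actual tool: it reduces object usage models to pairwise sequencing constraints "a() is called before b()", mines the frequent ones from call traces, and flags traces that use both calls but never in the mined order. All function names are hypothetical.

```python
from collections import Counter

def sequencing_pairs(trace):
    # All ordered constraints "a is called before b" witnessed in one trace.
    seen, pairs = set(), set()
    for call in trace:
        for earlier in seen:
            pairs.add((earlier, call))
        seen.add(call)
    return pairs

def mine_patterns(traces, min_support=2):
    # Constraints that hold in enough traces become patterns.
    counts = Counter()
    for t in traces:
        counts.update(sequencing_pairs(t))
    return {p for p, c in counts.items() if c >= min_support}

def violations(trace, patterns):
    # A pattern (a, b) is violated when the trace uses both calls
    # but never in the mined order.
    present, calls = sequencing_pairs(trace), set(trace)
    return {(a, b) for (a, b) in patterns
            if a in calls and b in calls and (a, b) not in present}
```

The real object usage models are automata, so they additionally capture repetition and branching that this pairwise view cannot express.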

DRPU: A Programmable Hardware Architecture for Real-time Ray Tracing of Coherent Dynamic Scenes
Sven Woop
GK 623 Performance Guarantees for Computer Systems, Saarland University

Ray tracing is a rendering technique capable of generating high-quality, photo-realistic images of three-dimensional scenes. Highly accurate reflection and refraction effects can be computed by following the paths of light backward from the camera towards the light sources. The rendering speed of ray tracing algorithms has been an issue for a long time, but recently high-performance software implementations have made real-time ray tracing possible. Reaching performance levels comparable to rasterization, however, also requires dedicated hardware solutions. During my research I developed the DRPU architecture (Dynamic Ray Processing Unit) as the first programmable ray tracing hardware design for coherent dynamic scenes. For programmable shading it contains a shading processor that achieves a high level of efficiency due to SIMD processing of floating point vectors, massive multi-threading, and synchronous execution of packets of threads. A dedicated traversal and intersection unit allows for efficient ray casting even in highly dynamic scenes by using B-KD Trees, a kind of bounding volume hierarchy, as the spatial index structure. The basic idea here is to build the B-KD Tree only once, before the animation. When the geometry changes, the bounds stored in the data structure are recomputed by a special Update Processor, but the structure of the tree is maintained. A Skinning Processor computes dynamic scene changes by applying a matrix-based skinning model. I implemented an FPGA prototype of the architecture, specified in my own hardware description language HWML. The prototype achieves performance levels comparable to commodity CPUs even though it is clocked at a 50 times lower frequency of 66 MHz.
I mapped the prototype to a 130 nm CMOS ASIC process that allows precise post-layout performance estimates. We also extrapolated the results to a 90 nm version with hardware complexity similar to that of current GPUs. The extrapolation shows that with a similar amount of hardware resources, frame rates of 80 to 280 frames per second would be possible even with complex shading at 1024x768 resolution. This would form a good basis for game play and other real-time applications. For more information about my research I refer to my publications.
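The refitting idea behind the Update Processor can be sketched in a few lines. The Python code below is an illustration only: B-KD Tree nodes store bounds along a single axis per node, whereas this sketch refits full axis-aligned boxes, which is analogous but simpler to show; the class and function names are hypothetical.

```python
class Node:
    # The tree topology is built once; refit() only updates the bounds.
    def __init__(self, children=None, triangles=None):
        self.children = children or []
        self.triangles = triangles or []  # leaf: indices into the triangle list
        self.box = None                   # ((min_x, min_y, min_z), (max_x, max_y, max_z))

def refit(node, triangles, vertices):
    """Bottom-up bounds update after an animation step moved the vertices."""
    if node.triangles:                    # leaf: bounds of its (moved) triangles
        pts = [vertices[v] for t in node.triangles for v in triangles[t]]
    else:                                 # inner node: union of the children's boxes
        for child in node.children:
            refit(child, triangles, vertices)
        pts = ([c.box[0] for c in node.children] +
               [c.box[1] for c in node.children])
    node.box = (tuple(min(p[a] for p in pts) for a in range(3)),
                tuple(max(p[a] for p in pts) for a in range(3)))
```

Because only bounds change while the tree structure is reused, the update is a single linear pass, which is what makes a dedicated Update Processor feasible in hardware.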

Mining and Predicting Development Activities
Thomas Zimmermann
GK 623 Performance Guarantees for Computer Systems, Saarland University

My main research interest is software engineering, with a focus on improving programmer productivity. I develop techniques and tools that make both managers and developers aware of history: learning from past successes and failures helps us create better software. My research activities cover program analysis, aspect-oriented programming, empirical studies, and, in particular, the analysis of software archives. A common theme in my research is data mining. Software development produces huge amounts of data such as program versions, bug reports, documentation, and electronic discussions. I applied pattern mining to find related locations and method usage patterns; with concept analysis I identified cross-cutting concerns. The ability to analyze systems of any scale is important in software engineering; all of my work is thoroughly evaluated and scales to projects such as Eclipse, one of the largest open-source projects, with several hundred developers and millions of lines of code. I analyzed the bug databases of Microsoft Windows Server 2003 to predict failure-prone components.

Mining Change Histories. My eROSE tool learns from history and navigates developers through source code with recommendations such as "Programmers who changed function f() also changed function g()". 1 My DynaMine tool mines project-specific usage patterns of methods from version histories: "Developers first call begin(), then call insert() several times, and finally end() on objects of class Operation." 2 I contributed to aspect-oriented programming with my HAM tool that reveals cross-cutting changes: "A developer inserted calls to lock() and unlock() into 1284 different locations." 3

Mining Bug Databases. In software development, the resources for quality assurance (QA) are typically limited.
A common practice among managers is to allocate most of the QA effort to those parts of a system that are expected to have the most failures. In recent empirical studies I developed prediction models to support managers in this task of resource allocation. 4

1 T. ZIMMERMANN, P. WEISSGERBER, S. DIEHL, AND A. ZELLER: Mining version histories to guide software changes. Int. Conference on Software Engineering (ICSE), May.
2 V. B. LIVSHITS AND T. ZIMMERMANN: DynaMine: Finding common error patterns by mining software revision histories. European Software Engineering Conference / Int. Symposium on Foundations of Software Engineering (ESEC/FSE), September.
3 S. BREU AND T. ZIMMERMANN: Mining aspects from version history. Int. Conference on Automated Software Engineering (ASE).
4 A. SCHRÖTER, T. ZIMMERMANN, AND A. ZELLER: Predicting component failures at design time. Int. Symposium on Empirical Software Engineering (ISESE), pages 18-27, September.
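The kind of recommendation described above ("programmers who changed f() also changed g()") can be illustrated with a minimal association-rule sketch over co-change transactions, one set of changed entities per commit. This is an illustration with hypothetical function names, not the tool's actual implementation.

```python
from collections import Counter

def mine_rules(transactions, min_support=2, min_confidence=0.5):
    # Rule f -> g with support = #commits changing both, and
    # confidence = support / #commits changing f.
    single, pair = Counter(), Counter()
    for t in transactions:
        for f in t:
            single[f] += 1
            for g in t:
                if f != g:
                    pair[(f, g)] += 1
    return {(f, g): pair[(f, g)] / single[f]
            for (f, g) in pair
            if pair[(f, g)] >= min_support
            and pair[(f, g)] / single[f] >= min_confidence}

def recommend(rules, changed):
    # Entities to suggest after `changed` was modified, best first.
    hits = [(conf, g) for (f, g), conf in rules.items() if f == changed]
    return [g for conf, g in sorted(hits, reverse=True)]
```

Confidence thresholds keep the tool from flooding the developer with coincidental co-changes; support thresholds filter one-off commits.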

Securing Privacy of Mobile eHome Users
Ibrahim Armac
GRK 643: Software for Mobile Communication Systems
Department of Computer Science, Informatik 3, RWTH Aachen University

Smart environments provide value-added software services to their users by combining the functionality of multiple devices. Smart home environments are also known as eHomes. In many cases, the hardware components are available at reasonable cost. However, to offer value-added eHome services that rely on the functionalities of the individual appliances, high-level software is needed. The existing eHome solutions on the market are proprietary and thus not usable in a broad sense. In our group, we try to enable low-cost eHome systems by developing component-based services that can be reused in different eHomes. Instead of developing individual solutions for each eHome, the services are merely configured, and thereby adapted to the specific eHome, before installation. This avoids eHome-specific implementations of services, which would obviously be too cost-intensive. Considering mobile users who visit multiple environments in daily life, I am working on the migration of context-based services between different smart environments, such as homes, hotels, work places, and public places. This enables the visited environments to act in a personalized manner: the environment provides users with the same functionality as their home environment. I do not assume that specific service implementations migrate with the user, as in agent technology. Instead, the user only needs to indicate which functionalities he would like to take along, such as "Personal Illumination Control" or "Music Follows Person". Given the desired functionalities, the visited environment can provide individual instances of the corresponding services.
In this scenario it is assumed that the visited environment runs (or can install on demand) services that realize the relevant functionalities. Another assumption is that every user possesses a mobile device, such as a PDA or mobile phone. On the one hand, all necessary personalization information, i.e. the user profile (desired functionalities, preferences, etc.), can be stored on this device. On the other hand, this device can provide a uniform interaction interface to the user in different environments. An important issue in supporting mobile users in smart environments is the protection of user privacy. My solution is a context- and policy-based profile management that protects user profiles against unauthorized access. The user can define policies which allow access to only a subset of the information contained in the user profile. This is also known as identity management. Its aim is to provide the environment with only as much information as is needed for the realization of the desired functionalities. Closely connected to this issue is the security of the environments themselves: they must also be protected against attackers. This is realized by a two-level, ticket-based authentication procedure.
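The minimal-disclosure idea can be sketched as a simple filter: only the attributes that are both required by the desired functionalities and permitted by the user's policy leave the mobile device. All names below are hypothetical illustrations, not part of the actual system.

```python
def disclose(profile, policy, functionalities, needs):
    # `needs` maps each functionality to the profile attributes its
    # realization requires; `policy` is the set of attributes the user
    # is willing to reveal to this particular environment.
    needed = set()
    for f in functionalities:
        needed |= needs.get(f, set())
    # Only the intersection of "needed" and "allowed" is disclosed.
    return {k: v for k, v in profile.items() if k in needed and k in policy}
```

In a real system the policy would additionally depend on context (which environment, which time, which authentication level), not only on the attribute names.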

Service Negotiation Architectures in Converged Networks
Juan Miguel Espinosa Carlin
GRK 643: Software for Mobile Communication Systems
Department of Computer Science, Informatik 4, RWTH Aachen University

1 Research activities

Fixed Mobile Convergence (FMC) is the paradigm in the telecommunications industry that interconnects fixed networks (Internet, ISDN, PSTN) with their mobile counterparts (GPRS, UMTS), giving operators the possibility to provide services irrespective of the location, access technology, and terminal type of their users. To implement the delivery of services inside such networks, the 3rd Generation Partnership Project (3GPP) designed the IP Multimedia Subsystem (IMS), a global, access-independent, standards-based IP connectivity and service control architecture that delivers various types of multimedia services to end users using common Internet-based protocols. The intention of the IMS is to aid a form of FMC, allowing access to multimedia and voice applications across wireless and wireline terminals. The IMS Core Network is composed of several nodes, the most important being the Home Subscriber Servers and Subscriber Location Functions (HSSs and SLFs), the Call/Session Control Functions (CSCFs), the Application Servers (ASs), the Media Resource Functions (MRFs), the Breakout Gateway Control Functions (BGCFs), and the PSTN gateways. Although the procedures and control interfaces for static service delivery in the IMS are well standardized and detailed, the area of service negotiation across multiple IMS environments hasn't been explored yet. In these scenarios, further considerations have to be made regarding the mechanisms used when deciding whether a particular service should or should not be provided to a given user.
From this inter-IMS service negotiation perspective, a user in a roaming network would have the possibility of either accessing her/his services in her/his home network, accessing the ones in the visited network, or using the ones provided by a third-party network, based on parameters like availability, cost, and/or QoS. It's important to mention that this service negotiation scheme would interact closely with the roaming mechanisms used inside the network. Currently, my activities are focused on the performance evaluation of available IMS testbeds, the development and testing of native IMS clients on those testbeds, and the implementation and deployment of IMS services, all pointing towards the design, implementation, and evaluation of the mentioned service negotiation architecture.

New Communication Primitives for Wireless Mesh Networks
Tobias Heer
GRK 643: Software for Mobile Communication Systems
Department of Computer Science, Distributed Systems Group, RWTH Aachen University

Modern networking applications require extended communication primitives like multicast, anycast, service composition, and delegation. As a matter of fact, providing these communication primitives efficiently surpasses the capabilities of the traditional point-to-point communication paradigm prevalent in today's Internet. While attempts to provide these primitives within the core network structure of the Internet have largely failed, several overlay networks that enable these communication primitives have emerged, like the Internet Indirection Infrastructure (i3). However, i3 has been designed for reliable infrastructure networks like the Internet. This makes it inapplicable in networks that exhibit a more dynamic behavior, such as mixed ad-hoc and wireless mesh networks.

Fig. 1: Communication in UMIC (addressing, routing, name resolution, multicast and anycast, implicit service discovery, service announcement, address translation, multi-connected hosts, and multi-hop routing in a heterogeneous network).

Ultra High-Speed Mobile Information and Communication (UMIC) is a cluster of excellence 1 which focuses on providing high-speed communication in wireless networks beyond the limitations of today's WiFi and wireless mesh networks. Figure 1 depicts an exemplary UMIC scenario and some of its challenges. Our contribution to UMIC aims at realizing extended communication primitives in dynamic networks. The integration of these extended primitives into UMIC yields a wide range of challenges. On the one hand, the indirection infrastructure must be tightly integrated into the network to avoid overhead due to inefficient routing and unnecessary maintenance.
On the other hand, the infrastructure must abstract from the actual network topology in order to provide a consistent and clear interface for applications. The peculiarities of wireless routers and mobile hosts, such as limited processing power or battery lifetime, paired with the demand for secure communication, also require rethinking the use of CPU-intensive authentication and integrity protection. Therefore, alternative authentication schemes are required in order to support CPU-restricted mobile devices.

1 Cluster of excellence within the DFG Excellence Initiative

Wireless Mesh Networks for Public Internet Access
Sebastian Max
GRK 643: Software for Mobile Communication Systems
Chair of Communication Networks, Faculty 6, RWTH Aachen University

Wireless Mesh Networks (WMNs) transparently extend the coverage of a portal (e.g. connected to the Internet) via the installation of Mesh Points (MPs) which forward data over multiple hops to the associated stations and back. One of the most important deployment scenarios for WMNs is the installation of municipal and community networks. In these public networks, a provider sets up the WMN to enable low-cost communication, either as a competitor to the wired last mile to a residential neighborhood, or to provide wireless Internet access for mobile users. The current research activity of the graduate school member Sebastian Max covers the following topics related to WMNs for public Internet access:

Evaluation of Deployment Concepts: Unlike with traditional radio access networks, it is still unclear how to deploy WMNs in urban/downtown scenarios in an optimal way. The analysis of different deployment concepts, comprising numerous parameters from the topology to transmit power control, allows for a substantiated judgment of the design space.

WMNs based on IEEE 802.11: IEEE 802.11 represents the opportunity for a cost-efficient setup of WMNs: through its standardization and heavy usage, hardware prices are negligible in comparison to other radio access technologies. Of course, the simplicity of IEEE 802.11 is a drawback, as it is not designed for multi-hop operation. Hence, new mechanisms which combine current amendments (e.g. IEEE 802.11e, n, s) to optimize the network performance are under research.
Multi-channel WMNs: While common radio access technologies use cells with different fixed frequencies to avoid interference, the structure of a WMN allows for a more flexible frequency usage: due to the decentralized topology, each link is able to select an appropriate frequency channel and thus lower the interference burden on its neighbors. Combined with multiple radios per mesh device, the optimization space increases drastically. Hence, algorithms are being researched which optimize the channel and radio usage in a decentralized way. Furthermore, the calculation of the resulting spectral efficiency allows for a fair comparison with cell-based topologies.

Economic analysis: As an orthogonal element, the presented points shall not only analyze the performance using the classical throughput and delay measures, but also take into account the economic feasibility of WMNs.
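The decentralized channel selection described above can be caricatured by a greedy sketch in which each link picks the channel least used among the links it interferes with. This is an illustration under a simplified conflict-graph model with hypothetical names, not one of the algorithms under research.

```python
from collections import Counter

def assign_channels(links, conflicts, channels):
    # `conflicts[link]` lists the links whose transmissions interfere
    # with `link` (the edges of an assumed conflict graph).
    assignment = {}
    for link in links:
        used = Counter(assignment[n] for n in conflicts.get(link, ())
                       if n in assignment)
        # Pick the channel with the fewest interfering users; ties are
        # broken by channel order.
        assignment[link] = min(channels, key=lambda c: used[c])
    return assignment
```

A genuinely decentralized protocol would run this decision locally on each mesh device with only neighborhood information, and revisit it as traffic and topology change.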

Cooperative MAC for Wireless Multi-Hop Networks
Ulrich Meis
GRK 643: Software for Mobile Communication Systems
Communication and Distributed Systems (Informatik 4), Department of Computer Science, RWTH Aachen University

Wireless networks have become a very popular domain of research. One of the main reasons might be the availability of hardware that anyone can afford. Networks that span multiples of the average node's signal range, so-called multi-hop networks, originated, as is quite common, in research studies funded by the military. These so-called ad-hoc networks therefore have applications in the military domain. Although researchers have been on the hunt for other usage scenarios for many years, it now seems apparent that there are none. However, ad-hoc networks have evolved into several other network types that do have a variety of usage scenarios. Perhaps the most prominent representative of this class is the wireless mesh network. Contrary to the assumptions made in ad-hoc networks, the nodes are relatively stationary and also well connected. Especially this last property makes efficient utilization of the wireless medium a hard problem in these networks. The medium access layer that is nowadays used virtually everywhere (and wireless mesh networks are no exception), IEEE 802.11, shows poor performance in well-connected and especially in multi-hop networks; it was not designed with networks of this kind in mind. In most scenarios, throughput is reduced by about 50 percent for every hop, and paths with more than 5 hops are rarely observed. IEEE 802.11's poor performance shows that there is much room for improvement. Although a new standard for wireless mesh networks, IEEE 802.11s, is in the making, it seems unlikely, judging by its current design, that it will bring much improvement.
Since many nodes in a wireless mesh network are stationary (there usually is a fixed backbone), it should be possible to apply more sophisticated techniques on the MAC layer than what is currently envisioned by the IEEE. Problems encountered with the combination of CSMA/CA, high interference, and packet forwarding can be avoided if nodes apply a more cooperative approach to medium access. Currently, the author investigates the possibility of a TDMA-based approach. Coordination between the nodes would have to be carried out in a distributed fashion; synchronization is the crucial part of this procedure. It is also desirable that the approach can be implemented with today's so-called soft-MAC interface cards, since this would make its adoption easier, cheaper, and therefore a lot more likely. The UMIC-Mesh testbed located at RWTH Aachen University provides an ideal platform for implementation and measurement.

Context-Aware Smart Cars
Cem Mengi
GRK 643: Software for Mobile Communication Systems
Department of Computer Science 3 - Software Engineering, RWTH Aachen University

The automobile of the future will become more and more a smart environment. The environment of an automobile comprises every context that supports the automotive system: software and hardware components inside the car, user-specific services, the environment beyond the automobile, etc. The management of such a complex system can only be achieved through standardization; the AUTOSAR and OSGi standards, for example, are outcomes of such efforts. While AUTOSAR standardizes the software architecture for each ECU (Electronic Control Unit) and the software interfaces at application level in an automotive system, the OSGi standard enables the deployment of services over wide area networks to local networks and devices. During my research activities I have investigated these standards to get background information on future technologies in the automotive area. Furthermore, I will model and implement a simulation environment for smart cars in which the above-mentioned context is taken into account. Within this simulation environment the integrated applications and services can be tested and evaluated. Moreover, so-called context models play an important role when developing context-aware services. In my work I will evaluate different context models for the domain of smart cars. The Department of Computer Science 3 at RWTH Aachen University is presently developing a simulation environment for smart homes, so-called ehomes. Especially the integration and configuration of services in a cost-effective way is of primary interest. Further investigations in this area concern the composition, personalization, and migration of services.
These aspects also apply to smart cars, but in addition, mobility and real-time requirements have to be considered. Furthermore, the communication between a smart home and a smart car is an important task that has to be addressed. Therefore, in a next phase I will ensure the interoperability of the smart car environment with the smart home environment. Based on this, new services can be adopted, tested, and evaluated.

USSA: A Unified Service-oriented System Architecture
Elena Meshkova
GRK 643: Software for Mobile Communication Systems
Department of Wireless Networks, RWTH Aachen University

The need for information fusion across communication layers for cross-layer performance optimization is particularly important in multihop wireless ad hoc and mesh networks. There is an emerging consensus in the networking research community that specifying a universal static stack of network protocols for heterogeneous transmission environments is a very difficult, if not impossible, task. Instead, numerous research projects are directed towards developing autonomic networking or cognitive resource management functionality. In these approaches, atomic networking functionalities with flexible inter-layer interfaces are actively combined by the network nodes at runtime into a protocol suite suitable for a specific communication environment. We are developing USSA, a Unified Service-oriented System Architecture. USSA employs a unified way to describe and access information, which is used both internally inside an operating system for the composition of actual network services and externally for service access by an end user in the sense of service-oriented technology. We utilize and extend experiences from the computer and sensor network domains for decomposing complex network services, e.g. routing, into atomic functionalities, and we present a methodology for the dynamic re-configuration of these services. We do not propose a unified structure for a communication protocol stack. Instead, a specific set of networking functionalities is combined into a suite that we call an amorphous stack for a particular application-level service. The functional content of the amorphous stack can be altered at run-time without physically upgrading the existing software. The USSA architecture consists of two parts. The static part is invoked during the precompilation phase.
It aims to decrease the processing power and memory usage on the devices during run time. At this stage we use an ontology to check the compatibility of the device components, as well as of the whole network. We also give recommendations on the composition of device services based on the user requirements. The dynamic part of USSA provides the middleware and cross-layer mechanisms that allow run-time performance optimization of the device and the network. Currently our solution is applicable to wireless sensor networks (WSNs). Later we plan to extend our design to the area of cognitive radios, especially in relation to the Cognitive Resource Manager developed at RWTH Aachen University. This work forms a part of my doctoral dissertation research, in which we are studying and developing artificial intelligence and ontology-based methods to optimize cognitive wireless networks. We also aim to provide a flexible plug-and-play system architecture for cognitive radios.

Graph-based Reengineering of Telecommunication Systems
Christof Mosler
GRK 643: Software for Mobile Communication Systems
Department of Computer Science 3 (Software Engineering), RWTH Aachen University

Eighty percent of programming resources are allocated to maintaining and reengineering existing code. There exist many approaches to the reengineering of legacy systems, but the majority of them deal with systems in the field of business applications. The E-CARES project concerns the understanding and restructuring of complex legacy systems from the telecommunication domain. Such systems are real-time embedded systems using the signaling paradigm, and thus pose additional requirements, e.g. regarding the system's performance. Telecommunication experts often think in terms of state machines and protocols, which makes it even more difficult to adapt the already achieved results. The E-CARES research project is a cooperation between Ericsson Eurolab Deutschland GmbH (EED) and the Department of Computer Science 3, RWTH Aachen. E-CARES is an acronym for Ericsson Communication ARchitecture for Embedded Systems. The cooperation aims to develop methods, concepts, and tools to support the process of understanding and restructuring complex legacy telecommunication systems. The current system under study is Ericsson's AXE10, a mobile-service switching center (MSC) comprising more than ten million lines of code written in PLEX (Programming Language for EXchanges). The first phase of the E-CARES reengineering project concerned the reverse engineering of telecommunication software. The project focused on the detection, extraction, and visualization of information about the system's structure and behavior. Both static information (code) and runtime information (traces) were considered. The information is used to build a so-called system structure graph.
This graph contains information about a system's decomposition into different units at different levels of granularity. Current work concerns the restructuring of legacy telecommunication systems, including their re-design and re-implementation. The aim is to extend the reverse engineering tool to a functional reengineering tool, allowing the engineers to interactively modify and improve the software. Several clustering-based algorithms improving the software architecture and performance have been successfully implemented. As telecommunication systems are often planned and modeled using state machines and protocols, it seems reasonable to analyze the systems on the abstract representation level of state machines as well. For this reason, we extract the state machines from the source code, compare them with the original specification documents, and take them into account when suggesting appropriate reengineering steps.

Communication Protocols for Wireless Sensor Networks
Kittisak Ormsup
GRK 643: Software for Mobile Communication Systems
Department of Computer Science, Informatik 4, RWTH Aachen University

Abstract. Two challenges dominate sensor network design today:

Network Lifetime: No real testbed can operate for longer than 4-9 months. Existing routing protocols for wireless sensor networks don't perform very well in real sensor applications due to poor scalability, poor adaptability to dynamics, and excessive usage of node resources (which causes a short network lifetime). I therefore concentrate on the question: is it possible to run an ad hoc wireless sensor network for more than 1 year, using only low-cost motes with 2 AA batteries?

Cross-layer Design: Most research has been done either only on the MAC layer (with static routing) or only on the routing layer (taking the MAC as given). The results will be completely different when a routing protocol designed for sensor networks is combined with a MAC designed for sensor networks.

This project tries to cope with these problems in 3 steps:
1. Routing Protocol - Explore an approach which needs fewer resources and adapts well to dynamics, based on the Ant Algorithm: forward and backward agents (packets) are periodically sent between source and destination nodes, leaving pheromone values on the paths; with the help of these values and some calculations, the best path can be found without large overheads and maintenance efforts. The goal is to implement a routing protocol based on this approach in the ns-2 network simulator, find the equation that best evaluates the paths, solve the scalability problem, and compare the protocol to existing ones.
2. MAC Protocol - Find the MAC protocol that best suits this routing protocol.
3. Test - Implement the whole protocol stack on a real sensor testbed.

Results show that the ant-based routing protocol outperforms Directed Diffusion. The work is now at the stage of finding appropriate MAC protocols.
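The ant-based mechanism of step 1 can be sketched as follows. The Python code below is a simplified illustration with hypothetical names (pheromone stored per directed edge, deposits inversely proportional to path length), not the protocol implemented in ns-2.

```python
import random

def evaporate(pheromone, rho=0.1):
    # Pheromone decays everywhere, so stale paths fade out over time.
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)

def reinforce(pheromone, path, quality=1.0):
    # A backward agent deposits pheromone along the path it travelled;
    # shorter (better) paths receive a larger per-edge deposit.
    deposit = quality / len(path)
    for edge in zip(path, path[1:]):
        pheromone[edge] = pheromone.get(edge, 0.0) + deposit

def choose_next(pheromone, node, neighbors, alpha=2.0, rng=random):
    # A forward agent picks the next hop with probability proportional
    # to pheromone^alpha: good paths are preferred but still explored.
    weights = [pheromone.get((node, n), 0.01) ** alpha for n in neighbors]
    r = rng.random() * sum(weights)
    for n, w in zip(neighbors, weights):
        r -= w
        if r <= 0:
            return n
    return neighbors[-1]
```

The interplay of evaporation and reinforcement is what keeps overhead low: no explicit route maintenance is needed, since unused routes simply evaporate.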

Dynamic Composition of ehome Software
Daniel Retkowitz
GRK 643: Software for Mobile Communication Systems
Department of Computer Science 3 (Software Engineering), RWTH Aachen University

In today's homes, a lot of appliances are available, but in general these appliances are not interconnected. To facilitate comprehensive services based on multiple appliances that offer complex functionalities to the users, it is necessary to develop flexible and adaptive software. To achieve this goal at low overall cost, ehome software has to be built from standard components that are automatically composed and adapted to the user's needs and the individual home environment. The customization of the ehome software is achieved by component composition in a process of specification, configuration, and deployment. We call this process the SCD-process. Automatic support for the SCD-process is one of the key issues for the application of ehome services. In previous research of our group, support for automated static configuration was developed. In this approach, a specification of services, devices, and the ehome environment, together with a selection of the desired top-level services, is needed. On the basis of this information, a configuration is generated automatically. Up to now, no adapting changes to this configuration are possible later at runtime; services are only deployed once, in the beginning. This is a strong restriction of the current process, particularly with regard to user mobility. In my research, an incremental approach is pursued to cope with the dynamics of ehome environments. Whenever changes occur in the ehome environment, the SCD-process has to be re-executed to adapt the software to the new situation. Any change of the user's location or desires, or of the available devices, implies corresponding changes in the specification and hence also in the configuration and the deployment.
The existing configuration mechanism will be extended to react to specification changes and to support automated and flexible reconfiguration. For the deployment, capabilities are needed to add services to the runtime environment, to remove others, and to change the execution states of services. To achieve these goals, the present implementation of the SCD-process and the techniques for automatic service composition will be extended. In certain circumstances a service cannot provide its full functionality, e.g. if some base services are not available. It may still be possible for the service to provide a reasonable subset of its functionality. It may also be possible to replace a service by another service with similar capabilities. New mechanisms are needed to offer this flexibility and adaptability in service composition. This way, less user interaction will be required, since fewer composition mismatches that require manual resolution will occur. To enable such mechanisms, we are developing new concepts for semantically annotated service specifications and the corresponding composition algorithms.
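The kind of composition check with graceful degradation described above can be sketched as follows. All names, fields, and the capability-matching rule are hypothetical assumptions for illustration, not the group's actual SCD algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    provides: set
    requires: set = field(default_factory=set)   # hard dependencies
    optional: set = field(default_factory=set)   # may degrade gracefully

def configure(top_level, available):
    """Check a top-level service against the available base services.
    Returns the set of optional capabilities that cannot be served
    (empty set = full functionality). A missing hard requirement
    raises, i.e. it is a composition mismatch needing resolution."""
    provided = {cap for s in available for cap in s.provides}
    missing = top_level.requires - provided
    if missing:
        raise ValueError(f"cannot deploy {top_level.name}: missing {missing}")
    return top_level.optional - provided
```

With such a rule, a hypothetical "ComfortScene" service that requires lighting but only optionally uses heating would still deploy in a home without a heating service, merely with reduced functionality.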

Real-time Radio Wave Propagation
Arne Schmitz
GRK 643: Software for Mobile Communication Systems, Computer Graphics Group, RWTH Aachen University

The simulation of the propagation of radio waves is important in many aspects of mobile communications. Both in cellular phone networks and in personal wireless networks, knowledge of radio propagation behavior is essential. We have therefore implemented a radio wave propagation algorithm based on photon tracing. However, this computation could not be done in real time, which would be necessary for the interactive analysis of radio networks, for antenna placement, or for use in a packet-level network simulator. We therefore developed a solution that makes the radio wave propagation simulation usable at interactive rates. The input of the algorithm is the scene geometry and a sampling of possible radio transmitter positions in the scene. We then compute a simulation of the radio waves for each of these positions. This step takes a few minutes but is necessary only once per scene. The result is a set of up to a few hundred 3D images containing the field strength for each transmitter position. These results can then be viewed with our interactive transmitter placement tool, which allows moving the transmitter around. This is achieved by interpolating the results of the precomputed simulations on programmable graphics hardware, allowing for up to 150 frames per second in outdoor (a) and indoor (b) simulations. Note that all simulations run in 3D, as opposed to other works, which only perform 2D simulations on maps. The fact that the geometry can be arbitrary is especially important for indoor simulations, where wireless connections between different floors have to be simulated. Complex shadowing effects caused by buildings or walls obstructing the transmitter are also simulated.
The same approach is also used to obtain more accurate results when simulating wireless networks in the ns-2 simulation toolkit. We extended this widely used network simulator to use the precomputed radio wave simulations as a more accurate physical-layer model. Previously, radio wave simulations based on ray-tracing techniques were simply too slow to be used in network simulators, and less accurate models were used instead.
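The interpolation step behind the interactive placement tool can be sketched as follows. This is a deliberately simplified CPU-side inverse-distance blend over a few precomputed field maps; the actual tool runs on graphics hardware, and the abstract does not specify the exact interpolation scheme.

```python
def interpolate_field(pos, samples):
    """samples: list of (transmitter_position, field_map) pairs, where
    field_map holds one field-strength value per voxel of the scene.
    Returns an inverse-distance-weighted blend for an arbitrary pos."""
    weights = []
    for p, _ in samples:
        d = abs(pos - p)
        weights.append(1.0 / d if d > 1e-9 else None)
    # Exact hit on a precomputed transmitter position: return that map.
    for w, (_, fmap) in zip(weights, samples):
        if w is None:
            return list(fmap)
    total = sum(weights)
    n = len(samples[0][1])
    return [sum(w * fmap[i] for w, (_, fmap) in zip(weights, samples)) / total
            for i in range(n)]
```

The same blend, evaluated per texel in a fragment shader over the precomputed 3D images, is what makes moving the transmitter around feel instantaneous: only the cheap weighting runs at frame rate, never the photon tracing itself.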
