People have long dreamt of the paperless office. Saving the cost of storing many different document versions, finding and managing documents faster, and exchanging everything digitally is the wish of every company. Yet, especially when it comes to storing documents that require the written form and whose evidential value must be preserved over long periods of time, few solutions exist, aside from very broad and impractical ones.
In addition, there is currently no product on the market that allows procedure descriptions to be created, edited, approved, and managed on a centrally accessible system without requiring paper copies to be stored.

This thesis attempts to fill the void described above. The problem is described in detail and a possible solution is proposed, for which the necessary background knowledge and legal basis are researched. From this groundwork and from scenarios encountered in daily work, requirements are derived that are later used to evaluate existing solutions and possible components for the future solution. Based on these requirements, an extensive and secure concept was developed and implemented as a prototype. After finishing the implementation, the system was evaluated with regard to security aspects and the previously defined requirements. All mandatory requirements were fulfilled, allowing the data protection commissioner and the managers of services to create, edit, search, sort, and manage procedure descriptions while ensuring the authenticity, confidentiality, and integrity of documents. Finally, possible future paths are described that could be taken to further improve the developed system.

Denial-of-Service attacks are a threat to computer networks. An even more dangerous variant is the amplification attack. Although this attack type is well researched, with multiple proposals to overcome the problem, new attack vectors arise frequently. There are multiple approaches using different detection methods on the victim side. On the amplifier side, where the problem has to be dealt with to reduce the impact of such attacks, surveys frequently identify newly affected protocols. However, the attacks adapt dynamically to the new situation and abuse other susceptible flaws. Therefore, we investigate a visual approach to gain knowledge about the attacks on the basis of an existing attack detection approach. Using the output data of the detection software, we propose software that generates a graphical representation. This can be used to evaluate the attacks within the amplifier's network and to immediately obtain detailed information about them.

Software-Defined Networking (SDN) is currently a much discussed topic, as it promises to free network operators from the proprietary and decentralised restrictions imposed by legacy networks. The new approach to network architecture shifts the configuration and routing mechanisms from routers and switches to a central controller, a single programmable software device which is able to view and command the entirety of the network. The key player in this evolution is the OpenFlow protocol, propagated by the Open Networking Foundation (ONF).
However, amid this growing popularity and surge of interest, the security aspects of SDN have been neglected. This circumstance may become a major hindrance to the acceptance and adoption of the new paradigm.

The goal of this thesis is to compile research on security regarding both vulnerabilities and opportunities and to infer requirements for a secure software-defined network.
The first section aims to provide a thorough background of SDN, its architecture and main components. The general design is then inspected for flaws by analysing and identifying several security vulnerabilities and problematic trends in the attack fields of Spoofing, Tampering, Repudiation, Denial of Service, and Elevation of Privilege. The threats are summarised and visualised in attack tree models.
The results of this security assessment reveal that a software-defined network based on current standards and popular control software cannot be considered secure. Consequently, the second section of the thesis utilises and augments contemporary approaches to enhance the security of the OpenFlow protocol as well as the general SDN infrastructure.
The security principles and concepts demonstrate that the design of SDN is ultimately capable of preventing many of the identified vulnerabilities and even selectively enhances security compared to legacy infrastructure.
Nevertheless, due to the software-based and virtual nature of SDN, the network may be exposed to the constant looming threat of software bugs and exploits that may facilitate Denial of Service and Elevation of Privilege in the central controller. Furthermore, the multitude of different required solutions may heavily impact the performance and latency of the control plane or introduce new previously unconsidered vulnerabilities.

The Leibniz Supercomputing Centre (LRZ) is the provider of Munich's largest research network, the Munich Scientific Network (MWN). As the MWN is not a centrally supervised company network but a decentrally organized university network, port scans are required to get an overview of activities and of the difference between the actual and the desired state of the network. Due to the long scan times of the port scanner Nmap, used thus far for scanning the MWN, this thesis surveys newer port scanners such as Masscan and ZMap, compares and evaluates them, and concludes that Masscan is currently best suited for scanning the entire MWN.

Additionally, since October 2014, a new SSL/TLS vulnerability named POODLE (Padding Oracle On Downgraded Legacy Encryption) has been known, and the most secure way to prevent it is to disable SSL 3.0 and older versions. This is the second task this thesis fulfills: providing a fast solution for SSL 3.0 fallback detection.

To fulfill this goal, a new scanning and evaluation tool is developed, introduced, and used to scan the MWN with its 500,000 hosts. Relevant information is stored, including IP addresses, open ports, services, operating systems, and vulnerabilities. More than 2,000 hosts were detected with SSL 3.0 enabled. Furthermore, a part of the MWN was found to be unable to withstand a packet rate of more than 200 kpps, and three hosts were identified with almost half of all possible ports open.

Administrators often require access to privileged and shared accounts to manage systems, services, and applications. Consequently, this unlimited power gives them unrestricted access to sensitive data and to important components of the network infrastructure.

To protect against insider and outsider threats, privileged user accounts need to be securely shared between authorized administrators, and access to these accounts needs to be monitored and audited.

This thesis analyses the threats that arise from privileged user accounts in shared environments and identifies countermeasures that need to be taken into account in a privileged user password management solution.

Besides the security aspects, generic as well as specific requirements of two organizations are presented. For that purpose, the status quo of the current password management as well as use cases at the Leibniz Supercomputing Centre (LRZ) and iC Consult GmbH, which is a system integrator specialized in identity and access management, were analyzed.

Based on that requirements catalog, three software products are evaluated and compared to each other. Moreover, the suitability of these products for the purpose of the LRZ and iC Consult is examined.

The thesis then proposes a generic architecture that addresses the management of privileged user passwords as well as the fine-grained control of the access rights that administrators require to perform actions as privileged users. After that, a demonstrator illustrates some of the use cases brought up during the requirements research by implementing the essential parts of the proposed architecture.

In conclusion, the thesis shows that the management of privileged user passwords requires not only a centralized component that securely controls which administrators are authorized to access which privileged user passwords. It is also necessary to consider special use cases that normal password managers do not have to handle, for example, an emergency access that allows administrators to use a privileged account which they are usually not authorized to use. Additionally, the auditing of shared privileged accounts needs to be addressed and the unrestricted access rights need to be taken into account.

The Leibniz Supercomputing Center (LRZ) is mainly responsible for providing computational resources and operating the network backbone of the Munich Scientific Network (MWN). Since abuse complaints received from external networks are forwarded by the LRZ to the responsible local administrators, it is desired to provide the administrators with the ability to perform regular and on-demand network/security scans of their networks to reduce risks and minimize the administrative overhead performed by the LRZ. The goal is to design and implement a centralized scanning framework that can issue on-demand and periodic scans and implements common scan types such as port scanning and version detection while being easily extensible with new scan types. To evaluate the framework, we performed a large-scale SSH scan of the MWN and analyzed the results for some common SSH weaknesses such as duplicated and factorable cryptographic keys.

While the novelty of Software-Defined Networking (SDN) - the separation of network control and data planes - is appealing and simple enough to foster massive vendor support, the resulting impact on the security of communication network infrastructures and their management may be tremendous. The paradigm change affects the entire networking architecture. It involves new IP-based management communication protocols and introduces newly engineered, potentially immature and vulnerable implementations in both network components and SDN controllers. In this paper, the well-known STRIDE threat model is applied to the generic SDN concepts as a basis for the design of a secure SDN architecture. The key elements are presented in detail, along with a discussion of potentially fundamental security flaws in the current SDN concepts.

What operating systems are installed on your network, and what software is running on them? Questions like these are often posed in IT departments, especially if users are operating their own shadow IT or when documentation, automation, and software distribution need some care and attention. However, you have good reasons to ask these questions: Attackers are also interested in your systems.

Layer-4 port knocking is a seasoned mechanism for hiding network services in order to make them available only to authorized users in an on-demand manner. A typical legitimate use case is to enable world-wide SSH-based system administration access without allowing the whole Internet to connect directly to the SSH service on the machine, in order to minimize attack vectors. However, port knocking can also be used by attackers who have successfully compromised a system and installed some kind of backdoor; in this case, port knocking is used to evade detection by the legitimate system owners for as long as possible. This article presents a novel approach to flow-analysis-based detection of hidden backdoors and port knocking sequences; it correlates layer-4 port scans with flow records, which are, e.g., based on NetFlow records, and analyses the results in order to perform a purely network-based backdoor detection without the need to retrieve additional information from each individual machine connected to the local network. As the flow-record-based approach has inherent limitations, such as high latency and therefore a lacking real-time capability, the potential of software-defined networking applications (SDN controller apps) for this purpose is discussed. The presented approach has been implemented as a prototype, which was thoroughly tested in a lab environment; the results of those tests, identified issues, and ways to improve the approach are discussed. Finally, an outlook on real-world application is given.
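The core correlation idea can be sketched in a few lines of Python. This is a hypothetical illustration only: the data model, field names, and byte threshold are invented here and do not reflect the prototype's actual implementation.

```python
# Hypothetical sketch: ports that carry substantial traffic in flow records
# but appeared closed in a port scan are candidates for hidden,
# knock-protected services.
def backdoor_candidates(scan_open_ports, flow_records, min_bytes=100):
    """scan_open_ports: set of (host, port) tuples found open by the scanner.
    flow_records: iterable of dicts with dst_host, dst_port, and bytes."""
    suspicious = set()
    for flow in flow_records:
        key = (flow["dst_host"], flow["dst_port"])
        # Real inbound traffic to a port the scanner saw as closed suggests
        # a service that only opens after a correct knock sequence.
        if flow["bytes"] >= min_bytes and key not in scan_open_ports:
            suspicious.add(key)
    return suspicious
```

In this toy model, a host answering on a port that no scan ever found open would be reported for manual investigation.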

The convergence of different types of networks, such as those for telecommunication and data transfer, on the hardware layer has an obvious impact on both network management and security management, which especially affects network service providers as well as data centers. This paper argues that methods, algorithms, and tools from both research domains, network management and security management, should also be systematically reviewed for synergies. It first analyzes the current state of the art in both domains and identifies gap areas that require further investigation. Then, the Customer Network Management for the X-WiN (WebCNM) network management tool, which started as a research prototype in the pan-European research and education network GÉANT, is presented along with selected extensions that were designed and implemented to integrate security management functionality. Several security event visualization options and their use within the European industry-focused Safe And Secure European Routing (SASER) project are discussed.

The bottleneck of security monitoring is often the huge number of signatures that are useless but consume computation power and time. The signatures therefore have to be tailored more accurately to the systems that are to be protected. In this paper, a new approach is presented that detects the services and software running on a server more efficiently. This knowledge helps to select the relevant signatures for security monitoring, which leads to a more efficient use of the system's resources.

In most higher education institutions (HEIs), IT systems are still operated at least partially in a decentralized manner: Although a central data center or IT department typically provides basic IT services such as email servers, many faculties and chairs operate, for example, web servers and file servers on their own. As there often is no campus-wide asset management or configuration management database available, this results in the lack of a big picture, i.e., nobody really knows who is operating which IT services for whom throughout the HEI. Problems arise when network services, i.e., servers that can be accessed, e.g., via the Internet, are compromised, either by external attackers or by insider threats. While many faculties succeed in setting up their own network service machines, e.g., a web server including a database server for a learning management system, only a few of them are aware of typical information security issues and know how to harden their server installations against the usual attacks. To remedy this partial lack of know-how, an increasing number of HEIs set up a central security team tasked with monitoring, analyzing, and continuously improving information security across the campus.

In this article, we present an open source tool, which we developed to facilitate asset management and risk analysis of network services in research and education networks: Dr. Portscan is a delta-reporting tool for network port scans, which are often used for active-probing-based asset discovery, allow for a basic risk assessment, and can be used as a basis for fully-fledged vulnerability management. Dr. Portscan orchestrates the execution of an arbitrary number of port scans, e.g., based on the well-known nmap tool, from various locations inside and outside an HEI's campus network, aggregates and consolidates these port scan results, analyzes which changes have been made compared to the previous state, and can alert the security staff about new or unknown network services on the campus that need more detailed manual investigation.
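The delta-reporting step at the heart of such a tool boils down to a set difference between consecutive scan snapshots. The following Python sketch illustrates the principle using invented data structures; it is not Dr. Portscan's actual code.

```python
def port_scan_delta(previous, current):
    """Each argument maps host -> set of open ports from one scan run.
    Returns, per host, the ports that were newly opened or closed."""
    delta = {}
    for host in set(previous) | set(current):
        before = previous.get(host, set())
        after = current.get(host, set())
        opened, closed = after - before, before - after
        if opened or closed:
            delta[host] = {"opened": opened, "closed": closed}
    return delta
```

A new entry in the delta, e.g., a previously unseen host exposing port 25, is exactly the kind of change that would be escalated to the security staff.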

Dr. Portscan can not only be used by each HEI individually; based on agreements between HEIs, security teams at multiple HEIs can also collaborate to provide each other with external perspectives on their network services. Dr. Portscan is meanwhile also used in the pan-European CELTIC project SASER (Safe and Secure European Routing) to analyze the basic security properties of active network components, especially routers and network gateways, of the participating internet service providers.

For many years Distributed Denial-of-Service attacks have been known to be a threat to Internet services. Recently a configuration flaw in NTP daemons led to attacks with traffic rates of several hundred Gbit/s. For those attacks a third party, the amplifier, is used to significantly increase the volume of traffic reflected to the victim. Recent research revealed more UDP-based protocols that are vulnerable to amplification attacks. Detecting such attacks from an abused amplifier network's point of view has only rarely been investigated.

In this work we identify novel properties which characterize amplification attacks and make it possible to identify the illegitimate use of arbitrary services.

Their suitability for amplification attack detection is evaluated in large high-speed research networks. We prove that our approach is fully capable of detecting attacks that have already been seen in the wild, as well as attacks we conducted ourselves exploiting newly discovered vulnerabilities.
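One characteristic of amplification abuse, the ratio of response to request volume, can be illustrated with a minimal Python sketch. The property and threshold shown here are simplified examples chosen for illustration, not the actual detection properties identified in this work.

```python
def amplification_factor(request_bytes, response_bytes):
    # Ratio of traffic leaving the amplifier to traffic entering it.
    return response_bytes / request_bytes if request_bytes else float("inf")

def flag_amplification(flow_pairs, threshold=10.0):
    """flow_pairs: iterable of (client, request_bytes, response_bytes).
    Flags clients whose responses are much larger than their requests,
    i.e., likely spoofed victims of an amplification attack."""
    return [client for client, req, resp in flow_pairs
            if amplification_factor(req, resp) >= threshold]
```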

Handling log messages securely, for example, on servers or embedded devices, has often relied on cryptographic message authentication codes (MACs) to ensure log file integrity: Any modification or deletion of a log entry will invalidate the MAC, making the tampering evident. However, organizational security requirements regarding log files have changed significantly over the decades. For example, European privacy and personal data protection laws mandate that certain information, such as IP (internet protocol) addresses, must only be stored for a certain retention period, typically seven days. Traditional log file security measures, however, do not support the delayed deletion of partial log message information for such compliance reasons. This article presents SLOPPI (secure logging with privacy protection and integrity), a three-tiered log management framework with a focus on integrity management and compliance as well as optional support for encryption-based confidentiality of log messages.
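The classic MAC-based integrity scheme that SLOPPI builds upon can be sketched as a forward-chained HMAC over the log entries. This is a generic textbook construction, not SLOPPI's actual three-tiered scheme; key handling is omitted.

```python
import hashlib
import hmac

def chain_macs(entries, key):
    """Forward-chained HMAC over log entries: each tag covers the entry
    and the previous tag, so deleting, modifying, or reordering any entry
    invalidates every subsequent tag."""
    tag = b"\x00" * 32  # initial chaining value
    tags = []
    for entry in entries:
        tag = hmac.new(key, tag + entry.encode(), hashlib.sha256).digest()
        tags.append(tag)
    return tags
```

This construction makes tampering evident, but it also illustrates the compliance problem described above: legitimately deleting one entry after its retention period breaks verification of all later tags.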

Data centers of universities and IT departments of smaller higher education institutions provide dozens of IT services such as email, web hosting, e-learning, and file storage. The number of server machines and appliances that need to be operated often reaches three-digit figures, depending on the number of services, users, and high-availability setups. Many services can be used via the Internet to improve usability. As a consequence, many servers are subject to Internet-based attacks. Typically, security mechanisms such as firewalls and intrusion prevention systems are used to counter these attacks. In practice, however, many server machines still get compromised, e.g., due to vulnerabilities in server software that is not patched fast enough, or due to improper configuration of the software running on these machines.

In an ideal world, there would be enough IT personnel to operate all these IT services, and each IT administrator would also be an IT security specialist who knows exactly how to make his or her own servers almost perfectly secure. In reality, however, a very small IT staff often needs to run more servers than can easily be handled, and IT services such as groupware or e-learning systems have become extremely complex regarding their core functionality. Consequently, administrators have only diminishing resources, i.e., time and know-how, to properly secure their IT services. Specialization then typically leads to the foundation of dedicated security teams, such as CERTs (computer emergency response teams) and CSIRTs (computer security incident response teams). While those security teams consist of security experts, their primary problem is a lack of in-depth knowledge about all those IT services and their specific configuration. In order to facilitate the security team's efficient handling of, for example, security incidents in an e-learning service, knowledge transfer from the e-learning administrator to the security team about the specific setup must be fostered, and accurate knowledge must be available on demand, for example, if a security incident happens while the service administrator is on holiday.

In theory, each IT service should be properly documented along with all of its operational and security-specific properties, and this documentation should always be kept up-to-date. In reality, most IT administrators have no time to write documentation, dislike this task, and often do not even know what should be documented in a service-specific IT security concept. Therefore, many security incidents are handled in a patch-on-demand manner: Once a service has been compromised by an attacker, it is set up again, e.g., from a clean backup, and minimum configuration changes are applied to prevent the same type of attack from being successful again. While this approach is somewhat pragmatic, it obviously cannot be considered as a good and sustainable solution.

We present a template-based approach towards the documentation and management of IT security concepts, tailored to the demands of real-world IT service operation in higher education institutions. Our documentation template is intended to be filled in easily, provides a uniform document structure across many types of IT services, encourages IT administrators to think about IT security in a target-oriented way, and supplies security teams with the information they require for security incident handling. Its contents are based on security standards and good practices, such as ISO/IEC 27001, ITIL v3, and the IT baseline protection catalogues of the German Federal Office for Information Security. We are working on a web-based management frontend that makes it easy to initially write, update, access, and utilize the security concept documents, which are stored in a repository that also serves as a foundation for an inter-organizational exchange of IT security concepts.

Secure log file management on, for example, Linux servers typically uses cryptographic message authentication codes (MACs) to ensure the log file's integrity: If an attacker modifies or deletes a log entry, the MAC no longer matches the log file content. However, some privacy and data protection laws, for example in Germany, require the deletion or anonymization of log entries with personal data after a retention period of seven days. Such changes therefore do not constitute an attack. Previous work regarding secure logging does not support this use case adequately. A new log management approach with a focus on both the integrity and the compliance of the resulting log files with additional support for encryption-based confidentiality is presented and discussed.

Time and again, situations arise in which admins need access to a system they do not otherwise manage. But, do you want to hand over responsibility for password management to a centralized software? What capabilities must that software have?

This article discusses two closely related topics of security management with a special focus on the specifics of higher education institutions (HEIs) and their data centers:

- Log files are invaluable for administrators, e.g., for monitoring, fault management, and forensics. However, log entries often include personal data, such as usernames, email addresses, or IP addresses, and thus must honor data protection and privacy laws as well as other regulations. We discuss log file management compliance for HEIs in the European Union and present the approach taken at the Leibniz Supercomputing Centre in Munich, Germany.

- Insider threats, i.e., security incidents that are not caused by attackers over the Internet or by regular users, but by an organization's own employees, are often neglected or played down, especially in environments such as HEIs, which are characterized by the freedom of research and education. However, given the high employee turnover that is also typical for HEIs, the risk of having at least a few black sheep among hundreds of student assistants, fixed-term employees, and other colleagues who are not paid too well must be given some consideration.

In order to suggest a practical solution, we first analyze ISO/IEC 27001, the best-known standard for information security management systems, regarding the security controls it specifies with respect to log files and insider threats. We discuss the limits of those security controls when applying them to HEI data centers in the real world.

We then analyze technical approaches to log file management, especially the differences between centralized and decentralized logging architectures, and elaborate on log file management restrictions that result from compliance with data protection and privacy laws. A key issue here is that log file entries that include personal data must be deleted or anonymized after a retention time of seven days.
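The retention rule can be enforced with a simple sweep over stored log entries. The following Python sketch is illustrative only; the entry format, the placeholder, and the IPv4-only pattern are assumptions made here.

```python
import re
from datetime import timedelta

# Naive IPv4 pattern; real log sanitizers must also handle IPv6, usernames, etc.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact_expired(entries, now, retention=timedelta(days=7)):
    """entries: list of (timestamp, line) pairs. IP addresses in lines older
    than the retention period are anonymized; newer lines stay verbatim."""
    out = []
    for ts, line in entries:
        if now - ts > retention:
            line = IP_RE.sub("x.x.x.x", line)
        out.append((ts, line))
    return out
```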

As a pragmatic approach to improving security management at HEIs, we then present LoginIDS, our open source implementation of a log file analysis algorithm that helps to detect several types of insider threats and significantly reduces the workload required for manually scanning log files for suspicious entries. We discuss its simple algorithm, which resembles an old-school administrator's gut feeling in combination with a privacy-enhancing long-term memory, and outline LoginIDS features, internal workflows, and output.
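The "gut feeling" baseline can be approximated by remembering which (user, source) combinations have been seen before and alerting on new ones. This is a deliberately simplified stand-in for LoginIDS's actual algorithm:

```python
def unusual_logins(history, new_events):
    """history: set of (user, source_host) pairs observed so far.
    Flags logins from a source a user has never used before, then learns
    the pair - a rough analogue of the long-term memory described above."""
    alerts = []
    for user, src in new_events:
        if (user, src) not in history:
            alerts.append((user, src))
            history.add((user, src))
    return alerts
```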

As we are very interested in other HEIs and data centers using this tool to get a better view on the big picture of insider threats at HEIs and to further improve its functionality, we then summarize its installation and basic configuration. We also describe our own, still preliminary experiences after several months of LoginIDS usage and outline several ideas for improvements and additional features that we will implement as part of our future work.

Only effective security management ensures the long-term operation of information processing systems. The German GIDS project, funded by the German Federal Ministry of Education and Research, develops a security management system in the form of an Intrusion Detection System (IDS) and aims at deploying a Grid-based IDS for the German national D-Grid infrastructure as a production service. GIDS provides a global view of D-Grid-wide security incidents by aggregating and combining locally deployed security systems and mechanisms.

Inherently tied to core characteristics of Grid infrastructures, new problems and challenges arise in comparison to conventional distributed systems, e.g., supporting the heterogeneity of the resource providers and the highly dynamic virtual organizations (VOs). The goal of the GIDS project is, on the one hand, to enhance the identification and detection of potential attacks and threats, and on the other hand, to adapt early warning and reporting mechanisms to the Grid. The focus is to develop a concept for GIDS and to deploy a production-ready system for the D-Grid by the end of the project in summer 2012. The main idea of GIDS is to cooperatively federate and exchange attack data among the participating resource providers in a privacy-compliant and lawful manner, while obeying the individual security information policies of the participating organizations. Most of the solution parts developed in GIDS can also be used for securing international collaboration, e.g., between higher education institutions in general.

IDS data reporting attacks on the Grid is collected and processed in the context of GIDS. Parts of this data may include personal data or even allow the data to be related to a single person, which means the project must comply with the applicable German Federal Data Protection Act (BDSG). The BDSG regulates the collection, use, and distribution of personal data. Moreover, this data may also reveal critical security breaches that one of the partners may be vulnerable to. On the other hand, it is important for the cooperative analysis of the collected data to gather as much unaltered information as possible. Thus, the main challenge is to find a suitable way to deal with these partially contradictory requirements: detecting security incidents while guaranteeing the lawful handling of sensitive data. In this context, the project-specific concept for data privacy protection leverages both the possibilities granted by the BDSG and technical measures to process data that needs special treatment and protection.

For the project's success, it is of major importance to guarantee every participating site its autonomy. In turn, this implies that no site can be ordered to use particular components or to enforce particular policies. For this reason, Prelude (http://git.lrz.de/prelude) was chosen as a core component. As an open source SIEM solution, Prelude is natively compatible with a variety of security systems and is easy to adapt to ones not yet supported. Prelude may serve the resource providers as a simple data collecting entity or even provide local IDS functionality.

All collected data has to be processed according to the data privacy protection concept before it is shared with the other GIDS resource providers. On the one hand, this processing includes filtering data in accordance with local information sharing policies. On the other hand, it consists of aggregating and correlating the data in order to guarantee the system's scalability. As a final step, any personal data has to be anonymized and/or pseudonymized in order to comply with legal regulations. The IDMEF data exchange format supports the heterogeneity and autonomy of resource providers prevailing in the D-Grid, because it is already used by a large number of existing intrusion detection systems and can easily be integrated into proprietary or new systems.
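Pseudonymization that still permits cross-site correlation can be achieved with a keyed hash: the same address always maps to the same token, but the original cannot be recovered without the key. A minimal Python sketch, with an invented token format and an assumed per-site secret key; this is not the project's actual mechanism.

```python
import hashlib
import hmac

def pseudonymize_ip(ip, key):
    """Deterministic keyed pseudonym for an IP address: identical inputs
    yield identical tokens (so correlation across reports still works),
    while the real address stays hidden from parties without the key."""
    digest = hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()
    return "host-" + digest[:12]
```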

Any data that has passed the afore-mentioned procedure is published among the GIDS partners using a bus-like communication infrastructure. The bus is based on OpenVPN (http://openvpn.net) and thus guarantees basic security features, i.e., data integrity, confidentiality (among the project partners), and authenticity. By means of global correlation of locally collected data, global intrusion detection can take place in order to improve the precision and performance of security alerting, especially with regard to distributed attacks.

To support virtual organizations (VOs), GIDS developed a customized user portal, which will be deployed as an additional service in the DFN-CERT infrastructure. Using the newly built service, VO managers and Grid users can retrieve information about the current security status of each Grid resource they are authorized to use. Regular Grid certificates provide authentication and authorization for accessing the user portal.

The figure below presents one part of the poster in which we present the three-tier architecture of GIDS: The Grid sites (at the bottom) provide Grid resources and run their local IDS installations. They are connected by means of the multicast GIDS message bus, which has been implemented using OpenVPN; one of the sites connected to this message bus is DFN-CERT as central GIDS portal operator.

This poster presents the Grid-site-specific GIDS architecture in detail, several inter-organizational use cases, our preliminary real-world experiences, and food for thought regarding the use of GIDS results on a pan-European scale.

This article discusses two closely related topics of security management with a special focus on the specifics of higher education institutions (HEIs) and their data centers:

- Log files are invaluable for administrators, e.g., for monitoring, fault management, and forensics. However, log entries often include personal data, such as usernames, email addresses, or IP addresses, so their handling must comply with data protection and privacy laws as well as other regulations. We discuss compliant log file management for HEIs in the European Union and present the approach taken at the Leibniz Supercomputing Centre in Munich, Germany.

- Insider threats, i.e., security incidents that are caused not by attackers over the Internet or by regular users, but by an organization's own employees, are often neglected or played down, especially in environments such as HEIs, which are characterized by the freedom of research and education. However, given the high employee turnover that is also typical for HEIs, the risk of having at least a few black sheep among hundreds of student assistants, staff on fixed-term employment contracts, and other colleagues who are not paid too well must be given some consideration.

In order to suggest a practical solution, we first analyze ISO/IEC 27001, the best-known standard for information security management systems, regarding the security controls it specifies with respect to log files and insider threats. We discuss the limits of those security controls when applying them to HEI data centers in the real world.

We then analyze technical approaches to log file management, especially the differences between centralized and decentralized logging architectures, and elaborate on log file management restrictions that result from compliance with data protection and privacy laws. A key issue here is that log file entries that include personal data must be deleted or anonymized after a retention time of seven days.
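The seven-day retention constraint can be sketched as a post-processing pass over log lines: entries older than the retention window have their personal data blanked, newer entries stay intact. This is a minimal sketch under assumptions about the log format (ISO timestamp first, `user=` fields, IPv4 addresses); it is not the Leibniz Supercomputing Centre's actual tooling.

```python
import re
from datetime import datetime, timedelta

RETENTION = timedelta(days=7)
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")   # IPv4 addresses
USER_RE = re.compile(r"user=\S+")                     # username fields

def anonymize_expired(lines, now):
    """Anonymize personal data in log lines older than the retention time.

    Each line is assumed to begin with an ISO-8601 timestamp; the rest of
    the line format is illustrative.
    """
    out = []
    for line in lines:
        ts = datetime.fromisoformat(line.split(" ", 1)[0])
        if now - ts > RETENTION:
            line = IP_RE.sub("x.x.x.x", line)
            line = USER_RE.sub("user=<anon>", line)
        out.append(line)
    return out

logs = ["2024-01-01T10:00:00 sshd login user=alice from 192.0.2.7",
        "2024-01-09T10:00:00 sshd login user=bob from 192.0.2.8"]
for entry in anonymize_expired(logs, datetime(2024, 1, 10)):
    print(entry)
```

Anonymizing in place (rather than deleting whole lines) preserves the non-personal parts of old entries, which often still carry value for long-term fault statistics.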

As a pragmatic approach to improve security management at HEIs, we then present LoginIDS, our open source implementation of a log file analysis algorithm that helps to detect several types of insider threats and significantly reduces the workload required for manually scanning log files for suspicious entries. We discuss its simple algorithm, which resembles an old-school administrator's gut feeling combined with a privacy-enhancing long-term memory, and outline LoginIDS's features, internal workflows, and output.
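The "gut feeling with long-term memory" idea can be sketched as a per-user baseline: the detector remembers which (host, hour) combinations a user has logged in with before and flags events outside that set. This is a toy model in the spirit of the description, not the actual LoginIDS algorithm; the event attributes are assumptions.

```python
from collections import defaultdict

class LoginBaselineDetector:
    """Flag logins that deviate from each user's long-term habits.

    A deliberately simple baseline model: the long-term memory stores,
    per user, the set of (host, hour-of-day) pairs already seen.
    """

    def __init__(self):
        self.seen = defaultdict(set)  # user -> set of (host, hour) pairs

    def observe(self, user, host, hour):
        """Return True if the event deviates from the baseline, then learn it.

        A user's very first event is treated as learning, not as an anomaly.
        """
        key = (host, hour)
        anomalous = bool(self.seen[user]) and key not in self.seen[user]
        self.seen[user].add(key)
        return anomalous

detector = LoginBaselineDetector()
print(detector.observe("alice", "srv1", 9))   # first event: learning
print(detector.observe("alice", "srv1", 9))   # matches baseline
print(detector.observe("alice", "srv9", 3))   # unseen host/hour: flagged
```

Because only aggregated (host, hour) pairs are kept, the memory itself holds far less personal detail than the raw log lines it was learned from, which matches the privacy-enhancing intent described above.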

As we are very interested in other HEIs and data centers using this tool to get a better view on the big picture of insider threats at HEIs and to further improve its functionality, we then summarize its installation and basic configuration. We also describe our own, still preliminary experiences after several months of LoginIDS usage and outline several ideas for improvements and additional features that we will implement as part of our future work.

Higher education institutions (HEIs) must continuously adapt their strategies across a broad range of topics: eScience has revolutionized computationally intensive science, e.g., through Grid computing; the international competition for the best students, lecturers, and researchers has intensified; and modern collaboration and learning management systems enable teamwork even over large distances. As a consequence, IT has turned into the key enabler for highly efficient administration, research, and education at HEIs of any size.

However, IT evolves fast and digital natives demand quick and flexible reactions to their dynamic IT requirements. Cloud services, such as Dropbox, and collaboration web services, such as Wikispaces, fascinate young and senior HEI members alike. Whenever a new cloud- or web-based tool becomes popular, users demand similar functionality from their HEI's IT as well. Therefore, the classic paradigm of HEI data centres with rather static service catalogues no longer holds. Instead, modern HEI data centres must be as open-minded towards new technologies as their users are.

In this poster presentation, we show how classic as well as innovative HEI data centre services can be offered in a cloud-like fashion. We use the Munich Scientific Network as an example: The Leibniz Supercomputing Centre (LRZ) is the common IT service provider of the two Munich universities and of several other HEIs in the greater Munich area. IT service strategies are aligned between the LRZ and its HEI customers based on a business relationship management process. This tight cooperation results in IT architectures that allow each HEI to flexibly implement new IT services that fulfil their challenging requirements. By identifying common building blocks and by standardizing typically required technological components, a services spectrum has been created that can be perceived as a Hybrid Cloud (see (Knittl and Hommel, 2011)).

The services that are offered cover a broad range of infrastructure, platform, and software as a service (IaaS, PaaS, and SaaS). As an example, our poster shows how a typical faculty web site and a learning management system can be assembled from standardized XaaS services: Network access and NAS-based storage are offered as IaaS. Virtual machines and other services, e.g., the identity and access management system for role-based authorization control, are provided as PaaS. HEI-specific Typo3 and Moodle instances resemble SaaS. The cross-organizational processes and workflows that account for the service lifecycle, e.g., ordering, setup, operations, incident management, and support, have also been standardized to improve the service provider's efficiency. The HEIs benefit from professionally managed services in all XaaS areas and from an evident reduction of costs compared to investing in hardware, software, and operational staff for these services themselves. Additional advantages are that quality-of-service-increasing infrastructure components, such as uninterruptible power supplies, service load balancers, and firewalls, are also provided and professionally managed by the LRZ.

This deliverable summarizes the activities of the WP 2.5 partners of the SASER-SIEGFRIED project in the area of procedures for network control and operation.

The presented work focuses on three parts: network planning and protection schemes, the network control plane, and network operation.

The first part investigates the planning process of energy-efficient optical multi-layer networks, as well as novel protection schemes for flexible optical 100G transponders.

The effects of flexible grid transponders on the network control plane and the possibility of dynamically adapting the line rate are studied in the second part. Further, the optimal placement of control plane instances in software-defined networks is investigated for static as well as dynamic cases, i.e., when network conditions change.

In the final part, this work focuses on virtualized mobile network functions. It elaborates on the implications of software-based mobile network gateways that leverage the features of SDN and NFV, and further reports on the important aspects of network security and the handling of security events.

All these parts build on success stories of the SASER-SIEGFRIED project and show that network control and operation in the next-generation Internet can be significantly improved.

This deliverable summarizes the activities of the WP 2.5 partners of the SASER-SIEGFRIED project in the area of "Network Control and Management Concepts".

The average traffic growth of 30% per year continuously brings new challenges to network operators and vendors. The capacity increase of WDM systems up to 8 Tbit/s together with the FlexiGrid concept is seen as one of the most important evolutions of recent years. However, the increased flexibility offered by FlexiGrid and the elastic optical networks it enables through rate-adaptive transponders pose new challenges to the network control plane regarding better automation. Transparency of optical networks, another evolution, allows optical signals to be switched photonically, avoiding optical-to-electrical conversion. Reducing the number of potentially compromised electrical components is also one of the big tasks in the SASER project, in order to provide better security.

The aforementioned evolutions of transport networks and the required high degree of automation result in an increased importance of network control and network management mechanisms.

In that regard, this deliverable reflects the work of the project partners in WP 2.5 as follows:

- IMT investigates the resilience of the physical data plane in optical networks based on different protection schemes. Particular emphasis is on the flexible operation of modern transponders and the resulting potential for cost reduction by reducing the number of transponders or the spectral occupancy. For that, IMT analyzes the interaction between the control and data planes.

- OT provides a GMPLS-controlled optical testbed which is close to operational conditions. Hence, the impact of specific mechanisms can be investigated in a realistic environment. The testbed combines the data, control, and management planes and thus allows the investigation of multi-layer interactions.

- UniWue investigates the resilience of the control plane, in particular taking into account new challenges arising with the introduction of SDN. Different resilience objectives are considered for physically distributing a logically centralized control plane. For that, the interaction between the control plane entities is of particular interest.

- Nokia aims at providing a resilient data plane in virtualized environments by optimizing the virtual network design phase. For that, they focus on the interaction between the virtualized data and control planes.

- Finally, the main focus of LRZ is on the security of the data plane. Security, a typical network management task, is improved by adjusting the control plane. Hence, they investigate the interaction between the management and control planes.