Abstract

Millions of people value the Internet for the content and the applications it makes available. To cope with the increasing end-user demand for popular and often high-volume content, e.g., high-definition video or online social networks, massively distributed Content Delivery Infrastructures (CDIs) have been deployed. However, a highly competitive market requires CDIs to constantly investigate new ways to reduce operational costs and improve delivery performance. Today, CDIs mainly suffer from limited agility in server deployment and are largely unaware of network conditions and precise end-user locations, information that improves the efficiency and performance of content delivery. While newly emerging architectures try to address these challenges, none so far has considered collaboration, although ISPs have the information readily at hand. In this thesis, we assess the impact of collaboration on content delivery. We first evaluate the design and operating space of today's content delivery landscape and quantify possible benefits of collaboration by analyzing operational traces from a European Tier-1 ISP. We find that collaboration when assigning end-users to servers highly localizes CDI traffic and improves end-user performance. Moreover, we find significant path diversity, which enables new mechanisms for traffic management. We propose two key enablers, namely in-network server allocation and informed user-server assignment, to facilitate CDI-ISP collaboration, and present our system design, called NetPaaS (Network Platform as a Service), that realizes them. In-network server allocation offers agile server allocation close to the ISP's end-users, leveraging virtualization technology and cloud-style resources in the network. Informed user-server assignment enables ISPs to take network bottlenecks and precise end-user locations into account and to recommend the best possible candidate server for individual end-users to CDIs.
Therefore, NetPaaS provides an additional degree of freedom to scale up or shrink the CDI footprint on demand. To quantify the potential of collaboration with NetPaaS, we perform a first-of-its-kind evaluation based on operational traces from the largest commercial CDI and a European Tier-1 ISP. Our findings reveal that dynamic server allocation based on accurate end-user locations and network conditions enables the CDI to better cope with increasing and highly volatile demand for content and improves the end-users' performance. Moreover, recommendations from NetPaaS result in better utilization of the existing server infrastructure and enable the ISP to better manage traffic flows inside its network. We conclude that NetPaaS improves the performance and efficiency of content delivery architectures while potentially reducing the required capital investment and operational costs. Moreover, NetPaaS enables the ISP to achieve traffic engineering goals and therefore offers a true win-win situation to both CDIs and ISPs.

1 Introduction

"Content is King" [33]: Predicted by Bill Gates in an essay from 1996, this quote has become the latest buzz in the Internet economy [7, 43, 49, 98, 137, 151]. User demand for popular and often high-volume applications such as high-definition video, music, cloud-gaming, online social networks, and online-gaming is phenomenal; it has been unbroken for years [71, 98, 151] and is still expected to grow [43]. For example, the demand for online entertainment and web browsing contributes 70% of the peak downstream traffic in the United States [151]. Recent studies [71, 98, 137] find that today's Internet traffic is dominated by content delivered by a variety of Content Delivery Infrastructures (CDIs). Major CDIs include highly popular Video Service Providers (VSPs), such as YouTube [38] and Netflix [2], One-Click Hosters (OCHs), such as RapidShare [21] or Dropbox [60], as well as Content Delivery Networks (CDNs), such as Akamai [57, 128] and Limelight [106], and other hyper-giants, such as Google [26], Yahoo!, or Microsoft [117]. Other popular and traffic-heavy services using CDIs include music downloads and streaming (e.g., Pandora, iTunes, Spotify), cloud gaming (e.g., OnLive, PlayStation Network, Xbox One), Online Social Networks (OSNs, e.g., Facebook, Twitter, or Google+), as well as online gaming (e.g., World of Warcraft, Farmville, Xbox Live). Gerber and Doverspike [71] report that a handful of Content Delivery Infrastructures (CDIs) are responsible for more than half of the traffic in North America. Poese et al. report similar observations for the traffic of a European Tier-1 carrier. Labovitz [49] reports that 50% of North American traffic originates from just 35 sites/services, with only a handful of CDIs serving the traffic. In an earlier study, Labovitz et al. [98] infer that more than 10% of the total Internet inter-domain traffic originates from

Google, and Akamai claims to deliver more than 20% of the total Web traffic in the Internet [128]. Netflix, a company offering high-definition video on-demand streaming, is responsible for a significant fraction of the traffic in North American ISPs during peak hours [151].

1.1 Challenges in Content Delivery

Even decades after the first commercial Content Delivery Infrastructures were launched, the challenges content delivery still faces today are manifold. The question of where to deploy additional server resources, and how many, is by no means easy to answer [66, 95]. The end-user demand for content is highly volatile, both spatially and temporally, and precisely locating end-users' network positions turns out to be a tedious and error-prone task [137, 141]. Novel and agile deployment strategies are required to further improve the CDIs' performance and capacity, as current approaches take up to multiple months and require high capital investment. In the following, we discuss the challenges content delivery faces today in more detail.

Infrastructure Deployment

To cope with the continuously growing end-user demand for content, CDIs use and continue to deploy massively distributed server infrastructures that replicate and distribute popular content in many different locations on the Internet [7, 102]. This implies that the deployment of server infrastructure is a challenge for CDIs. However, different players in the content delivery business have developed different strategies to handle the challenges in server deployment. As described by Tom Leighton [102], these approaches include (1) centralized hosting, (2) datacenter-based CDIs, (3) highly distributed cache-based CDIs, and (4) Peer-to-Peer (P2P) networks. The first approach may be sufficient for small services targeted at local audiences and can be extended by geographically dispersed mirrors.
This improves the end-users' performance, as a server is closer to some of the users, improves scalability, as more servers are able to serve more end-users, and enhances reliability through redundancy. But the complexity of managing capacities and content replication, the financial investment for infrastructure deployment, and hard-to-predict, highly volatile traffic levels, combined with the inability to absorb sudden demand surges, often referred to as flash crowds, have paved the way for approaches 2 and 3. Both offer increased scalability and reliability by offloading the delivery of content from the original server onto a larger network of caches which are shared by numerous services and operated by a third party, the CDI. Datacenter-based CDIs leverage economy of scale over centralized hosting by operating a number of big data centers with thousands of servers connected to hundreds of networks. While offering improved performance, the gains are limited, as the distance

to end-users is still large according to any metric: the biggest 30 networks combined host roughly 50% of the end-users, and the numbers decline very fast, resulting in a long-tail distribution of end-users over all the Internet's networks, for which a dedicated connection is economically infeasible for the CDI. As a result, the traffic needs to cross many middle-mile networks to reach a significant number of end-users, even if the CDI connects to all large Tier-1 backbone networks. Another drawback of this architecture is the large network load that these datacenters impose on transit networks. Other CDIs try to avoid these issues by deploying highly distributed cache servers in many different networks, mainly big eyeball ISPs (that host many end-users) and highly connected Tier-1 networks (which can also act as backups for smaller networks that do not host a CDI cache). While this deployment strategy solves the server-to-end-user distance problem, the deployment itself is more complex and thus most likely more costly and time consuming. Because each network becomes a contractual partner, the CDI has to take, depending on the geographical location the network operates in, for example state regulations (e.g., telecommunication acts) or national standard bodies (e.g., for power standards) into account. Last but not least, P2P networks rely on a huge number of end-users to store, replicate, and distribute content. As a result, a P2P network's capacity scales with each participating user. It has been shown to scale well even in the case of extreme flash crowds [161]. To name a few examples: one of the largest players in the content delivery business, Akamai, utilizes a highly distributed server infrastructure and operates more than 127,000 servers in 81 countries distributed across more than 1,150 networks [12, 128]. Google reportedly operates tens of data centers and front-end server clusters worldwide [76, 96, 168].
Microsoft has deployed its content delivery infrastructure in 24 locations around the world [117]. Amazon maintains at least 5 large data centers and caches in at least 21 locations around the world [19]. Limelight also utilizes a datacenter-based deployment, operates thousands of servers in more than 22 delivery centers, and connects directly to 600 networks worldwide [106].

End-User to Server Assignment

A key component of any CDI is the assignment of end-users to servers (or peers in the case of P2P). The ability to assign end-users to servers on small timescales, e.g., in the order of minutes or even tens of seconds, is crucial for CDIs to react to sudden demand surges (flash crowds) and demands shifting from one network to another (regional shifts). The assignment strategy of CDIs is also highly relevant with regard to the economic aspects of content delivery. The CDI has to resolve the following trade-off: which server delivers the best performance for the end-user while

offering the highest economic return for the CDI¹. This decision involves various important parameters, such as server load, precise network location of end-users, and network conditions (e.g., network bottlenecks or peering cost), some of which require extensive but error-prone measurements [3, 128, 166] by either the CDI or the end-users. Today, three main mechanisms are used for the assignment of end-users to servers: (1) DNS-based redirection, (2) HTTP redirection, and (3) IP Anycast. The first solution leverages the fact that before an end-user establishes a connection, it resolves a hostname using the Domain Name System (DNS). By transferring the administrative authority of a domain, or more often a subdomain, the CDI becomes responsible for resolving the hostname. It is then in the position to choose which of the available servers should answer the end-user's request. The second solution uses redirection directives included in the HTTP protocol [32]. The main benefit of this solution is the additional information contained in the HTTP request, e.g., the requested object and the end-user's IP address, but it incurs at least one additional round-trip time (RTT) and TCP handshake when a redirection is necessary, as the end-user has to establish a new TCP connection to the new server. The third solution delegates the issue of server selection to the routing layer of the end-user's network; with it, the CDI has nearly no control over the server selection anymore. We will discuss the details of the drawbacks and benefits of end-user to server assignment methods in a later chapter.

Content Delivery Alliances

Although some CDI deployments already have a large global footprint, even the biggest players are still improving it and need deployment strategies for content delivery.
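To make the first of the three mechanisms concrete, the following is a minimal sketch of DNS-based redirection: the CDI's authoritative name server picks, per request, the candidate server with the lowest cost for the requesting resolver's region, shedding overloaded clusters. The server names, load values, distance table, and threshold are illustrative assumptions, not taken from any actual CDI.

```python
# Hypothetical candidate clusters with their current load (0.0-1.0).
CANDIDATES = {
    "fra1.cdn.example": {"load": 0.6},   # Frankfurt cluster
    "ams1.cdn.example": {"load": 0.2},   # Amsterdam cluster
    "lon1.cdn.example": {"load": 0.9},   # London cluster
}

# Approximate network distance (e.g., RTT in ms) from a resolver's region
# to each cluster; in practice inferred from measurements or, with
# collaboration, supplied by the ISP.
DISTANCE = {
    "DE": {"fra1.cdn.example": 5, "ams1.cdn.example": 12, "lon1.cdn.example": 20},
    "UK": {"fra1.cdn.example": 18, "ams1.cdn.example": 10, "lon1.cdn.example": 4},
}

def resolve(hostname, resolver_region, ttl=30):
    """Return (server, ttl): the lowest-cost candidate that is not overloaded."""
    def cost(server):
        # A large penalty effectively removes overloaded clusters.
        penalty = 1000 if CANDIDATES[server]["load"] > 0.8 else 0
        return DISTANCE[resolver_region][server] + penalty
    best = min(CANDIDATES, key=cost)
    return best, ttl  # a short TTL keeps the mapping adaptable

server, ttl = resolve("www.example.com", "UK")
```

Note that for the UK resolver the nearest cluster (London) is skipped because it is overloaded, illustrating how the CDI trades off proximity against server load at resolution time.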
Recently, Akamai formed strategic content delivery alliances with major ISPs, including AT&T [11], Orange [14], Swisscom [15], and KT [13], to reduce network-related costs and improve network efficiency by outsourcing the hardware deployment and maintenance to said network operators. Google offers eyeball networks that experience high peak traffic from Google's network the opportunity to host one or more Google Global Caches (GGCs) [26, 75]. These application-specific caches serve popular Google content, including the traffic-heavy YouTube video service. Thus, they offer traffic reductions and reduced network utilization to the network operator and improve the performance for the end-users. Netflix, while heavily relying on multiple CDIs, including Limelight and Level3, to deliver its high traffic volumes [2], recently announced the deployment of its own content delivery infrastructure, called Open Connect [122], offering network operators that host the free-of-charge appliance potentially huge traffic reductions while improving the end-user quality of experience. Interestingly enough, Labovitz [49] found that while those servers are located in many networks, none of them is a Tier-1 provider.

¹ In the case of P2P, the economic return encompasses anything that increases the system's total capacity, e.g., faster download times and higher throughput.

The combined efforts of CDIs and network operators clearly mark a paradigm shift in how content delivery infrastructure is deployed and open up new possibilities for innovative approaches that foster collaboration between CDIs and network operators to take advantage of the business opportunities. After decades of ever-increasing deployment for scalability, performance, and cost reasons, CDIs are starting to notice the limits in expanding their network footprint. We stress that these are often not technical limits, but rather business constraints and/or management overhead. In this context, the formation of alliances seems to be the natural evolution of the content delivery business.

Deployment Agility

Unfortunately, the deployment of servers that can satisfy the growing demand while providing good performance to end-users is a complex and tedious task. Finding the right locations to place additional servers without knowledge about the network and its traffic dynamics takes a significant amount of time and is prone to errors and inaccuracies. The necessary business arrangements also require time and effort, as every party wants to get the best possible deal to reduce costs and/or increase revenues. But even when the bargaining is done, more time is required to commission the hardware, ship it to the agreed location, physically hook up the servers, and connect them to the network. Depending on possible Service Level Agreements (SLAs), the network operator might need additional time to configure the necessary network devices, e.g., routers, firewalls, or intrusion detection systems. Last but not least, the CDI's operations team has to install and configure the required software, and once the server is fully functional and ready for operation, the assignment strategy can include the newly deployed machine.
While some of the steps can be done in parallel to speed up the process, the initial search for a suitable location and the resulting negotiations take most of the time, which can span multiple months, limiting the CDI's agility in server deployment [128]. Yet, the deployment is not the only aspect where more flexibility is needed. Once a server is deployed, the physical location of the hardware stays fixed and the negotiated contracts are in place for longer periods of time, e.g., tens of months to multiple years. This is because, for the network operator, frequent changes to the network configuration, e.g., physically removing and shipping hardware, updating security policies, or possible routing changes, are highly inconvenient and disrupt normal operations. Also, the involved re-negotiations impose a high burden on the involved business units of both the network operator and the CDI. Most Content Delivery Infrastructures can handle the addition and removal of servers easily, yet shipping around the hardware and reconfiguring the software means that the additional resources are not available during that time, resulting in paid but unusable capacities. Thus, altogether the situation for both the CDIs and the network operators is mediocre at best. While the movement of physical hardware and

the resulting network changes should be kept to a minimum to ensure proper network operations, this also limits the CDI's ability to react to increasing traffic demands and changes in traffic demand patterns in a timely fashion. This in turn increases the load on the network infrastructure, making management and operations more complicated. Optimizing both the network and the content delivery at the same time under multiple, sometimes even conflicting, constraints while guaranteeing the end-users' expected quality of experience is a non-trivial, multi-dimensional optimization problem. Moreover, the market for content delivery as well as network providers is very competitive, leading both parties to investigate new ways to reduce capital investment and operating costs [40, 143].

1.2 Architectures, Trends and Opportunities

To address the challenges in content delivery, a variety of system designs have been proposed over the last decade. These solutions try to expand the CDI footprint by leveraging available resources of end-users or by dynamically offloading the content delivery to other content delivery infrastructures, e.g., in case of capacity bottlenecks or end-users in a network where the CDI has no nearby servers. Figure 1.1 gives an overview of the various solutions and shows the level of involvement of each stakeholder in content delivery, namely the Content Producers (CP), Content Delivery Infrastructures (CDI), network operators (ISP), and the end-users. In this classification scheme, the different roles are as follows: CPs, or Content Producers, subsume any type of business or private entity that has a primary interest (mainly financial) in end-users consuming its content. The content can either be created by the CP or licensed from others.
Prominent examples of CPs are, e.g., news and infotainment sites, such as MSNBC or BBC, company websites like Volkswagen or Samsung, and software companies that digitally distribute their software and patches, such as Adobe or Microsoft. CPs that offer mainly third-party licensed content include Online Social Networks (OSNs) like Facebook and Video on Demand (VoD) services like YouTube and Netflix. Recall Netflix and Google. CDIs, or Content Delivery Infrastructures, operate a dedicated infrastructure to distribute content of CPs to end-users. To offer reasonable performance, CDIs need to operate not only enough infrastructure but also to establish enough connectivity to the various networks that make up the Internet, be it by distributing servers into many networks or by connecting to them. ISPs, or network operators, offer network and Internet access, including but not limited to end-users, and thus transport the content through their networks. Well-known ISPs are AT&T, Telefonica, or Deutsche Telekom. Last but not least, end-users include everyone consuming content offered by CPs.

[Figure 1.1: Content Delivery Spectrum]

Note that in this classification an entity is not limited to a single role. For instance, Google takes the dual role of Content Producer and Content Delivery Infrastructure with its YouTube service, and in some places in the United States even has a third role by providing Internet access to end-users (e.g., Google Fiber in Kansas City).

The Network Oracle

The classical approaches for content delivery are commercial CDIs, ISP-operated CDIs, and Peer-to-Peer systems. Commercial CDIs are independent business entities that operate large distributed server infrastructures to deliver content to end-users. They usually do not operate their own network infrastructure but instead rely on ISPs for network connectivity. ISP-operated CDIs, on the contrary, do operate their own network, but their server footprint is limited to the network footprint of the ISP. Peer-to-Peer systems are distributed architectures where the resources of the system are provided and operated by the end-users. In Figure 1.1 they are placed very close to their respective operators, as the involvement of the other parties is marginal at best. For example, the ISP can throttle the P2P traffic of its customers to reduce the network utilization, but this is more an indirect interaction with the content delivery itself. The same holds for peering or transit agreements with CDIs. It is not the distribution itself that is influenced, but the amount of traffic or the speed at which the distribution happens.

Earlier attempts to improve content delivery have been proposed in the area of P2P systems, which successfully utilize the aggregate capacity of end-users that are interested in downloading the same content [46]. Due to the popularity, openness, and availability of protocol specifications and client software, the research community was able to understand the drawbacks of such systems. The random connections to other peers (which increase the resilience of the system) in many popular P2P systems have put a high strain not only on the networks hosting the peers but also on the connecting transit networks. As a result, P4P [177] has been proposed as an ISP-P2P collaboration mechanism to better localize traffic. Augmented with network information, the peer selection can be improved and is able to avoid connections to peers in far-away networks. To utilize the systemic benefits of P2P systems, to scale up the infrastructure, and at the same time to reduce the capital investment in hardware, bandwidth, and energy, commercial CDIs [3] as well as ISPs [100] operate hybrid content delivery infrastructures where end-users download content from the CDI servers as well as from other end-users, mimicking the success of pure P2P systems. To avoid many of the complicated and time-consuming contractual issues when deploying servers, commercial CDIs have recently started to offer their content delivery software to ISPs as a licensed CDI. The administrative burden for the ISPs to deploy, operate, and maintain servers inside their own network is much smaller, and in some cases the licensed software is able to coordinate with the CDI-operated servers, forming a CDI Federation. The industry's demand for such a mode of operation has led to the CDNI working group [124] in the IETF, which develops standards for the necessary mechanisms and protocols.
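As an illustration of network-aware peer selection in the spirit of P4P, the following sketch ranks candidate peers by an ISP-provided cost map and keeps a small random share of far peers for resilience. The partition identifiers, cost values, and the 25% random share are illustrative assumptions, not part of the P4P specification.

```python
import random

def select_peers(my_pid, candidates, pdistance, want=4, random_share=0.25):
    """Pick `want` peers: mostly the network-wise closest ones according to
    the ISP-provided pdistance map, plus a few random ones for resilience.

    candidates: list of (peer_id, pid) tuples, where pid is the network
    partition the peer resides in, as announced by the ISP."""
    ranked = sorted(candidates, key=lambda c: pdistance[my_pid][c[1]])
    n_random = max(1, int(want * random_share))  # keep some randomness
    close = ranked[:want - n_random]             # network-wise closest peers
    rest = ranked[want - n_random:]
    return close + random.sample(rest, min(n_random, len(rest)))

# Illustrative ISP cost map: cost of traffic between network partitions.
PDISTANCE = {"pid1": {"pid1": 1, "pid2": 10, "pid3": 50}}
peers = [("a", "pid1"), ("b", "pid1"), ("c", "pid2"), ("d", "pid3"), ("e", "pid3")]
chosen = select_peers("pid1", peers, PDISTANCE)
```

The key point is that most connections stay within cheap, nearby partitions, relieving transit links, while the random remainder preserves some of the robustness that fully random peer selection provides.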
To allow Content Producers to take advantage of the many different CDIs and to combine their individual strengths into a sort of virtual content delivery infrastructure, Meta CDIs [59, 108] add an additional layer of abstraction to the process of content delivery. The Meta CDI selects for each end-user individually which of multiple available CDIs is used to deliver the desired content. This decision is based on multiple factors, such as the network location of the user and the measured CDI performance, among others, and allows the Content Producer to influence the delivery process while at the same time improving the performance for the end-users. So far, all of the presented infrastructures and solutions were general-purpose architectures. But some applications can benefit even more from an application-specific optimization (examples are rate limiting for video streaming or server selection based on consistent hashing for very large files). This has led larger CPs to deploy application-specific CDIs inside ISPs and highly connected data centers. Examples include Netflix Open Connect for video streaming [122] or Google Global Cache, primarily for YouTube [26].
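To illustrate the consistent-hashing-based server selection mentioned above, the following is a minimal sketch: each URL maps to a point on a hash ring and is served by the next cache clockwise, so that adding or removing a cache remaps only the keys adjacent to it. The cache names and the number of virtual nodes are illustrative assumptions.

```python
import bisect
import hashlib

def h(key):
    """Stable 32-bit hash of a string key."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Each server occupies many virtual points on the ring for balance.
        self.ring = sorted((h("%s#%d" % (s, i)), s)
                           for s in servers for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def server_for(self, url):
        # First ring point clockwise of the URL's hash (with wraparound).
        idx = bisect.bisect(self.keys, h(url)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a.example", "cache-b.example", "cache-c.example"])
```

The property that makes this attractive for very large files is that the same URL consistently maps to the same cache (so each object is stored once), and removing one cache remaps only the URLs that cache was responsible for.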

The New Cloud

In the broadest sense, today's Internet is an entanglement of dumb plumbing to forward packets along paths and highly integrated services to provide additional in-network features such as caching, carrier-grade NAT, load balancing, or security features like intrusion detection or virus filtering. The launch of a new network service often requires yet another variety of proprietary hardware appliances and includes the increasingly difficult task of finding the necessary space and power to accommodate these boxes. These difficulties and the need for a more service-centric network have spurred another recent trend: marrying cloud resources (processing and storage) with networking resources to meet the high performance requirements of bandwidth- and storage-critical applications such as high-definition video streaming or delay-sensitive applications like cloud gaming [153]. Improvements in virtualization technology and recent developments in network equipment architectures like Software Defined Networking (SDN) allow ISPs to migrate from proprietary hardware solutions to software-based ones running on generic appliances deployed deep inside their network. While their initial intent often was to support only their own ISP-specific services, such as ISP-operated CDNs, IPTV, carrier-grade NAT, deep packet inspection, etc., network operators now leverage these new capabilities to offer fully virtualized network and server resources in proximity to their end-users to third parties [54]. Major network operators around the globe, including AT&T, British Telecom, NTT, Deutsche Telekom, and Telefonica, have recently joined their efforts to define the requirements for such a solution. Their draft, called Network Functions Virtualisation (NFV) [123], is currently undergoing standardization in the European Telecommunications Standards Institute (ETSI) [62].
The goal is to drastically reduce the complexity and the number of different types of networking equipment by consolidating onto industry-standard high-volume servers for fixed as well as mobile networks. A much anticipated side effect of such a solution is the avoidance of vendor lock-ins. These general-purpose appliances, also called microdatacenters, are already deployed by large ISPs, including AT&T, Deutsche Telekom, and Telefonica, co-located with their major network aggregation locations (PoPs). Other networking technologies, including SDN, aim to simplify network operations by decoupling the control plane from the data plane. SDN offers a logically centralized, programmable control of network traffic by introducing an abstraction layer over lower-level functionality (e.g., forwarding data packets). Albeit reducing the dependency on vendor-specific hardware, SDN nonetheless requires the network operators to replace their current networking equipment. The SDN approach is orthogonal and highly complementary to the introduction and deployment of NFV: either technology can be deployed independently of the other. In combination with cloud-style computing, such as microdatacenters, SDN blurs the lines between networks and computing even further. The biggest advantage integrated network and cloud providers can offer is

the ability to offer high-quality cloud services, as they control all resources on the path from the server to the end-user. At the same time as the cloud started to move into the network, the research community started to leverage cloud resources to outsource most if not all of the network infrastructure (except the forwarding plane, of course) and its control plane [17, 72, 157, 162]. By doing so, network operators leverage the service provider's highly specialized knowledge in domain-specific operations to improve their own operations while reducing investment in up-to-date technology and hardware and, at the same time, also reducing operational costs. Liu et al. argue in [107] for network providers to deploy ingress filtering and to offer the filtering of spoofed IP traffic to other networks as a service, not only to improve the efficiency of filtering spoofed IP traffic but also to create new revenue streams for the network operators at the same time. Sherry et al. show in [159] that it is not only possible to outsource nearly any network middlebox, such as firewalls, proxies, or even WAN optimizers, without impact on their performance, but also to reduce their management complexity, cost, and capacity limits. Improving the situation even further, Olteanu et al. show that efficient migration of stateful middleboxes in cloud environments is feasible [129]. Kotronis et al. go even further and propose a system to completely outsource the routing control of a network to a third-party service provider [94]. This enables the routing service provider to leverage a bird's-eye view on network clusters for making efficient routing decisions and for detecting and troubleshooting policy conflicts and routing problems, improving efficiency and reducing operational costs.
The ability to outsource network infrastructure enables ISPs to leverage economy of scale by deploying microdatacenters deep inside their network, to utilize them for their own needs, and to capitalize on offering cloud resources close to the end-users to service providers, e.g., content delivery infrastructures. So far, our discussion about improving content delivery has touched on the technical possibilities but neglected the incentives: improving economics and market share. Both are key drivers towards collaboration, which can be observed in both the content delivery and the network operation business. On the one hand, large and already well established Content Delivery Infrastructures have a strong customer base among Content Providers and are responsible for delivering the content for their customers to the end-users around the world. Network operators, on the other hand, have a strong end-user base in their service region and are starting to offer cloud resources close to their end-users in aggregation locations (PoPs) of their network.

1.3 Problem Statement

Today's content delivery landscape faces the problems of server allocation (where to place additional server resources) and user assignment (which end-user is assigned to which server). This is because CDIs are largely unaware of network conditions

and end-user locations inside the ISP's network. However, this information has the potential to greatly improve the efficiency and performance of allocating additional resources and assigning end-users to servers [66, 69, 137]. While some of this information can be inferred by measurements [128, 137], a tedious and error-prone task, a network operator has the information readily at hand! Therefore, we argue that collaboration between CDIs and ISPs is the next step in the natural evolution of deploying and operating content delivery infrastructures in the Internet.

1.4 Contributions

Given these opportunities and benefits of collaboration, the mechanisms and systems to enable joint CDI deployment and operation inside the network are the subject of this thesis. Therefore, we highlight the technical means leading to a win-win situation for all involved parties in content delivery. The contributions of this thesis are as follows:

Content Delivery Landscape

First, the large spectrum of available content delivery architectures motivates us to investigate the current design and operating space of today's content delivery landscape and to highlight the challenges content distribution faces. We find that the content delivery landscape is in constant flux, trying to further improve its delivery performance and increase its network footprint while at the same time reducing the capital investment and operational costs for its content delivery infrastructure. To quantify the potential benefits of a collaborative operation of content delivery infrastructures, we conduct a large-scale measurement study of the largest commercial CDI's operations in a European Tier-1 ISP. We find that ample opportunities exist to leverage the ISP's knowledge about the current network state to make better use of the CDI's current infrastructure footprint.
The New Cloud: Second, we identify two key enablers for collaboration in content delivery, namely informed user-server assignment and in-network server allocation. Until now, both problems have been tackled in a one-sided fashion by the CDIs. While informed user-server assignment improves the operation of already deployed content delivery infrastructures by taking network conditions, such as link utilization or the number of backbone hops, into account, in-network server allocation offers an additional degree of freedom for the deployment of additional resources. It allows the CDI to instantiate, migrate, or shut down additional resources deep inside the ISP's network, close to the end-users, on short time scales, e.g., tens of minutes. Together, the two

enablers allow a joint optimization of network operations for mutual benefit and enable the deployment of new and highly demanding services and applications. This motivates us to propose a novel system design incorporating the two key enablers to improve content delivery through collaboration between CDIs and ISPs.

NetPaaS: Third, we implement and evaluate a prototype system, called NetPaaS (Network Platform as a Service), realizing our design for collaborative server deployment and operation inside the ISP's network. We perform a first-of-its-kind evaluation based on traces from the largest commercial CDI and a large European Tier-1 ISP using NetPaaS. We report on the benefits for CDIs, ISPs, and end-users. Our results show that CDI-ISP collaboration leads to a win-win situation with regard to the deployment and operation of servers within the network, and significantly improves end-user performance. Our evaluation shows that, in the studied setting, NetPaaS is able to reduce the overall network traffic by up to 7% and lower the utilization of the most congested link in the network by up to 60% when used solely for informed user-server assignment. When NetPaaS also offers in-network server allocation, the delay for end-users is reduced significantly and up to 48% of all requests can be answered by a server located in the same PoP as the end-user with only 50 additional servers.

1.5 Outline

The rest of the thesis is structured as follows: Chapter 2 gives the necessary background information about protocols and technologies used in today's Internet content delivery. Chapter 3 surveys the current content delivery landscape, highlighting current and upcoming trends in its architectures, and points out current challenges for the parties involved in content delivery.
In Chapter 4 we conduct a measurement study of the largest CDIs in a European Tier-1 provider, highlighting the opportunities for collaboration to improve content delivery. We identify and formalize two key enablers for collaboration between CDIs and network operators, namely informed user-server assignment and in-network server allocation, in Chapter 5. In Chapter 6 we propose a novel system architecture, called NetPaaS, leveraging the two key enablers to improve content delivery in the Internet. We also discuss scalability and privacy related issues of the system and how it can be integrated into today's operation of Content Delivery Infrastructures. Chapter 7 evaluates NetPaaS using operational data from the biggest commercial CDI and a European Tier-1 network provider. We show that joint server deployment between CDIs and ISPs can improve content delivery significantly in the studied setting.

2 Background

In this chapter we review the basic building blocks required to understand today's landscape of content delivery infrastructures. We start by introducing Internet Service Providers (ISPs) as the managing entities of the Internet and continue our excursion with the introduction of the two most important protocols in content delivery today, namely the Domain Name System (DNS) protocol and the HyperText Transfer Protocol (HTTP). We then explain how content delivery works using a short example and describe the general architecture and all relevant components of a Content Delivery Infrastructure (CDI). Next, we provide a short overview of virtualization techniques, as they offer unprecedented flexibility in resource allocation and management and are an essential component of recent large scale infrastructure deployments, such as cloud computing. Last but not least, we introduce and shortly discuss the Peer-to-Peer (P2P) paradigm for content delivery.

2.1 The Internet & You: Internet Service Providers

The Internet is a world wide network of networks, with the infrastructure of those networks provided by Internet Service Providers (ISPs). Generally speaking, an ISP is a business or organization that operates a dedicated network infrastructure and offers Internet access to its customers. The interconnection of multiple individual networks run by ISPs forms what we commonly call the Internet. The general layout is shown in Figure 2.1: End-users and customer networks (e.g., corporate networks) obtain connectivity from ISPs which in turn are interconnected, either directly or

Figure 2.1: Layout of the Internet Structure [98]

through national transit or global backbone providers¹. In addition, the Internet in the last decade has experienced the ascent of a new type of network, the so called Hypergiants. Hypergiants are large networks that mainly host content that end-users are interested in, such as Google and Netflix. They usually generate huge amounts of traffic and thus strive to directly interconnect with ISPs. The layout shown in Figure 2.1 also highlights the clear distinction between the individual networks run by ISPs and the Internet: the administrative control over the individual network infrastructures remains solely with the ISPs. This also implies that no single entity can exert control over the Internet, as each ISP controls only its own network and the direct connections to other networks. The customers of an ISP can be, e.g., end-users, hosting facilities, or even other networks. End-users can be connected via a wide range of access technologies, such as dial-up modems, digital subscriber line (DSL), fiber to the home (FTTH), or wireless technologies such as 3G, WiMax, or satellite links. If the ISP offers access to end-users via one or more of such technologies, it is also called an access ISP. If other networks use the ISP to reach another network, the ISP is called a transit ISP, as the traffic crosses the ISP's network but neither originates nor terminates in the ISP's network. When the ISP offers other networks connectivity to the Internet, that is, it allows them to send traffic to the Internet via its own network, the ISP is called an upstream ISP. Note that an ISP can have multiple roles at the same time, e.g., a large access ISP can also offer transit for other networks.
To be able to interconnect with other networks, an ISP needs to operate an autonomous system (AS). An AS is an administrative entity, generally under the control of one administrative domain, for one or more publicly routable IP prefixes and

¹ Transit and backbone operators are basically large network operators with a national or global footprint that offer connectivity to ISPs just like they offer connectivity to their customers.

requires an officially assigned and unique autonomous system number (ASN). Both the ASNs and the publicly routable IP prefixes are governed by the Internet Assigned Numbers Authority (IANA), which delegates the assignment to the Regional Internet Registries (RIRs). Each AS is usually managed using an Interior Gateway Protocol (IGP), e.g., OSPF [120] or IS-IS [131]. Since an AS is run centrally by one instance, there is no need for information aggregation and/or hiding. To interconnect different ASes, the Border Gateway Protocol (BGP [147]) is the de-facto standard and provides the required IP prefix reachability information to make routing decisions in the Internet. To keep the distribution of routing information scalable throughout the Internet, the entire internal management of the individual AS is abstracted and aggregated. Each AS announces which IP prefixes can be reached via its network, and other networks use this information to make routing decisions, that is, which network path they use to send traffic along towards its destination. For example, in the case of an upstream ISP, the ISP announces all IP prefixes it knows to its customers, while the customers would only announce their own public IP prefixes to the ISP. When an AS needs to communicate with another AS that it does not have a direct connection to, the communication has to transit one or more different ASes. Thus, along with the pure reachability information, the ASN is also transmitted. This allows for loop detection as well as an estimate of how many AS hops away a destination is. The greatest challenge for an ISP is the efficient operation of its infrastructure. To this end, ISPs usually apply a process called Traffic Engineering (TE). TE is, simply speaking, the process of adjusting the internal routing weights and BGP announcements such that the traffic flows through the network in the most effective way.
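The AS-path based loop detection and hop estimation described above can be sketched in a few lines; the function names and the toy AS paths below are our own illustration, not part of any BGP implementation:

```python
def accept_announcement(own_asn, as_path):
    """Reject a BGP announcement if our own ASN already appears in its
    AS path -- accepting it would create a routing loop."""
    return own_asn not in as_path

def path_length(as_path):
    """The number of AS hops gives a rough estimate of distance."""
    return len(as_path)

# Toy example: AS 100 receives two announcements for the same prefix.
ok = accept_announcement(100, [200, 300, 400])    # no loop -> accept
loop = accept_announcement(100, [200, 100, 400])  # own ASN -> reject
print(ok, loop)                       # True False
print(path_length([200, 300, 400]))   # 3 AS hops
```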
This is usually done to avoid link congestion and to reduce delays by using short paths, but also to reduce capital expenses by reducing the utilization of expensive peering links.

2.2 Domain Name System

Before 1983, a simple plain text file (hosts.txt) was used to translate hostnames into IP addresses. Back then, it was manually distributed to all hosts connected to the Internet. With a growing number of hosts, scalability and management issues became more and more rampant. To alleviate them, the Domain Name System (DNS) [118] was introduced in 1983 and has been a key part of the Internet ever since. DNS is a distributed database with a hierarchical structure and divides the complete Internet namespace into domains. As "Naming follows organizational boundaries, not physical networks" [118, 167], the administration of domains is organized in zones. This information is distributed using authoritative name servers. The top most level of the DNS hierarchy starts with the root zone using 13 globally distributed and replicated root name servers. To mark the boundary between hierarchy levels in

Figure 2.2: (a) Partial DNS name space with zones (circled). (b) Hostname lookup.

domain names, the "." character is used. The root zone has an empty domain label and is therefore represented by a dot. Responsibility for specific parts of a zone can be delegated to other authoritative name servers, which can in turn delegate responsibility further. For example, the root zone delegates responsibility for, e.g., the .org domain to the Public Interest Registry, which in turn delegates responsibility for acm.org to the Association for Computing Machinery (ACM). The information regarding a particular domain of a zone is stored in Resource Records (RRs), which specify the class and type of the record as well as the data describing it. To improve scalability and performance, DNS heavily relies on caching. The time for which a specific RR can be cached is determined by its Time To Live (TTL) and is part of the zone configuration. In the end, each domain is responsible for maintaining its own zone information and operates its own authoritative name server. An alternative view of the domain name space is a tree with nodes containing domain labels separated by dots. Figure 2.2a illustrates this view of the partial domain name hierarchy including the administrative organization into zones. To resolve a domain name, the end-host's stub resolver usually queries a local name server called a caching resolver. If the information is not available in the resolver's cache, it queries the authoritative name server of the domain. In case the resolver does not know how to contact the server, it queries a root name server instead. The root name server refers the resolver to the authoritative name server responsible for the domain directly below the root. These referrals continue until the resolver has stepped down the domain name space tree from the root to the desired zone and is able to resolve the domain.
In our example, the caching resolver is called an iterative resolver, as it iteratively queries the authoritative name servers until it can resolve the hostname, while the end-host's stub resolver is called a recursive resolver, as it leaves the hostname resolution completely up to the caching resolver. Figure 2.2b illustrates recursive (steps 1 & 8) and iterative (steps 2-7) hostname resolution.
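The referral chain of an iterative resolver can be illustrated with a small simulation; the zone data, IP address, and function name below are an invented toy example, not an actual DNS implementation:

```python
# Toy delegation tree: each zone either delegates to a child zone or
# holds the final record. All data is invented for illustration.
ZONES = {
    ".":        {"delegate": {"org.": "org-server"}},
    "org.":     {"delegate": {"acm.org.": "acm-server"}},
    "acm.org.": {"records": {"dl.acm.org.": "10.0.0.7"}},
}

def iterative_resolve(name):
    """Walk down the delegation tree from the root, following referrals,
    as an iterative (caching) resolver would."""
    zone = "."
    while True:
        data = ZONES[zone]
        if "records" in data:        # authoritative answer reached
            return data["records"][name]
        # Follow the referral whose zone is a suffix of the query name.
        zone = next(z for z in data["delegate"] if name.endswith(z))

print(iterative_resolve("dl.acm.org."))  # 10.0.0.7
```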

Today, DNS plays a major role in content delivery [19, 37, 117, 128], especially for assigning end-users to CDI servers. Low TTLs enable CDIs to quickly react to demand surges and allow fine grained load balancing. Crafting DNS replies based on the querying caching resolver's geo-location results in short delays and traffic localization. However, such practices have attracted criticism [6, 172], largely due to reduced cacheability and increased network load because of low TTLs. Furthermore, the basic assumption that end-users are generally close to the used caching resolver does not always hold true [6].

2.3 HyperText Transfer Protocol

The Hypertext Transfer Protocol (HTTP) [63] has become today's de-facto standard to transport content in the Internet [43, 71, 98, 137, 151]. It was introduced in 1989 by Tim Berners-Lee at CERN (Conseil Européen pour la Recherche Nucléaire), published in 1991 as version HTTP/0.9 [31], and standardized by the Internet Engineering Task Force (IETF) in several Requests for Comments (RFCs) [32, 63, 89, 126] defining HTTP as an application-level protocol for distributed, collaborative, hypermedia information systems. The version in common use today is HTTP/1.1. The upcoming standard HTTP/2.0 is currently under development in the HTTPbis working group [125]. HTTP is a simple plain-text request-response protocol on top of TCP/IP² and follows a client-server architecture. It allows end-users to request, modify, add, or delete resources identified by Uniform Resource Identifiers (URIs) or Uniform Resource Locators (URLs); today both terms are used as synonyms [116]. A valid URI consists of three parts: the protocol schema (e.g., http for HTTP), the domain name (such as www.example.com, but a literal IP address is also possible), and the full path to the resource (for example /path/to/resource).
The resulting URI from our example would be http://www.example.com/path/to/resource. The type of resource often corresponds to a file, but can also be dynamically assembled content or the output of an executable on the Web server. Every HTTP message consists of an introductory line, optional header lines specifying additional information, and a potentially empty message body carrying the actual data. The introductory line of an HTTP request, see the request in Listing 2.1, consists of a method and the URI it should act upon. Similarly, the introductory line of a reply, see the response in Listing 2.1, contains a standardized three-digit status code and a textual representation specifying whether the request was successful or not. Although primarily designed for use in the Web, HTTP supports more operations than fetching a Web page. For a full list of available methods and status codes in HTTP/1.1 and their description, see Table A.1 and Table A.2 in the appendix. Both request and

² Although the RFC mentions the possibility to use UDP as well, it is not widely used today.
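The three URI components can be pulled apart with Python's standard urllib.parse module; this snippet is our own illustration using the example URI, not code from the thesis:

```python
from urllib.parse import urlparse

# Split the example URI into its three parts: schema, domain, path.
uri = "http://www.example.com/path/to/resource"
parts = urlparse(uri)

print(parts.scheme)   # http
print(parts.netloc)   # www.example.com
print(parts.path)     # /path/to/resource
```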

HTTP request:

    GET / HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0 [...]
    Accept: text/html [...]
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip, deflate
    Connection: keep-alive

HTTP response:

    HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Content-Type: text/html; charset=utf-8
    Date: Mon, 29 Jul :46:02 GMT
    ETag: " f6-4db31b2978ec0"
    Last-Modified: Thu, 25 Apr :13:23 GMT
    Server: ECS (iad/1984)
    X-Cache: HIT
    Content-Length: 1270

    <!doctype html>
    <html>
    [...]

Listing 2.1: HTTP request and response

reply messages may be followed by one or more header lines, see lines 2-9 in Listing 2.1, specifying additional information, e.g., the character set the client accepts or for how long a client may cache the response. Some headers are only valid in requests, others only in replies, and some are valid in either direction. For a list of standardized HTTP headers, see, e.g., [63, 127]. To improve performance and efficiency, HTTP has built-in support for caching of content. The Expires header tells a client for how long a response can be considered valid and thus loaded from the local cache. Yet, not all answers come with an Expires header, which makes caching non-trivial. Therefore, HTTP supports a conditional GET, where the server transmits the object only if it has changed since it was transferred to the client. For this, the client can use, e.g., the Last-Modified (see line 6 of the HTTP response in Listing 2.1) or If-Modified-Since headers in the request. Another important mechanism supported by HTTP is redirection. The 3xx status codes allow a Web server to redirect individual users to other servers, e.g., if the Web server is under high load or another Web server is closer to the client. However, the drawback of redirection is the additional delay due to having to open another TCP connection to the new Web server.
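A conditional GET can be demonstrated with Python's standard library; the tiny in-process server, the timestamp, and the object are our own toy setup, not part of the thesis:

```python
import http.client
import http.server
import threading

LAST_MODIFIED = "Thu, 25 Apr 2013 12:13:23 GMT"  # invented timestamp

class Handler(http.server.BaseHTTPRequestHandler):
    """Minimal server that honors If-Modified-Since for one object."""
    def do_GET(self):
        if self.headers.get("If-Modified-Since") == LAST_MODIFIED:
            self.send_response(304)          # Not Modified: no body sent
            self.end_headers()
        else:
            body = b"<html>hello</html>"
            self.send_response(200)
            self.send_header("Last-Modified", LAST_MODIFIED)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
    def log_message(self, *args):            # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")           # first fetch: full object
first = conn.getresponse()
first.read()
conn.request("GET", "/index.html",
             headers={"If-Modified-Since": LAST_MODIFIED})
second = conn.getresponse()                  # revalidation: 304, no body
second.read()
print(first.status, second.status)           # 200 304
server.shutdown()
```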
Although HTTP in itself is a stateless protocol, that is, the server does not need to keep state between successive requests from the same client, technologies such as session parameters or HTTP cookies enable Web sites to keep state. In both cases the state is stored on the client side and is transferred to the Web server with each request. Session parameters are simply key-value pairs that can be attached to the URI. Cookies are small pieces of data stored on the end-user's computer by websites. Such state information is usually required by dynamic content, such as personalized Web pages, or for authentication purposes.

The most recent version, HTTP/1.1, includes some changes to improve the overall performance of the protocol. While HTTP/1.0 closed the underlying TCP connection after it received the requested resource, HTTP/1.1 supports persistent connections, sometimes also called HTTP keep-alive or HTTP connection reuse. It allows a client to receive multiple resources over a single TCP connection by sending a new request after the response to the previous request. This avoids additional delay caused by the necessary TCP 3-way handshake and bandwidth limitations due to the slow start phase of newly created TCP connections. The HTTP connection in Listing 2.1 uses this feature, see line 7. In addition to that, HTTP/1.1 supports pipelining, that is, multiple resources can be requested by the client without waiting for the respective responses from the Web server, which greatly reduces the time to load multiple resources, especially on high delay connections, such as satellite links. HTTP/2.0 is expected to substantially improve end-user perceived latency through asynchronous connection multiplexing, header compression, and request-response pipelining. Therefore, it does not require multiple TCP connections to leverage parallelism and thus improves the use of TCP, especially regarding TCP's congestion control mechanisms. HTTP/2.0 retains the semantics of HTTP/1.1 and therefore leverages existing standardization on HTTP methods, status codes, URIs, and, where appropriate, header fields. For more information, see [125].

2.4 Content Delivery Infrastructures

Over the past decades the demand for content has seen phenomenal growth and is still expected to grow [43]. In addition, many existing and newly deployed services gain additional benefits from improved performance, e.g., reduced latency, in content delivery [91].
The need for increased capacity and improved performance has led to the emergence of Content Delivery Infrastructures (CDIs): large dedicated infrastructures to deliver content to end-users around the world. Traditionally, content is placed first on the Web servers of the Content Producer (CP), the origin Web servers. Content delivery infrastructures are specifically designed to reduce the load on the origin servers and to improve the performance for end-users. In general, there are three main components in a CDI architecture: a server deployment, a content replication strategy, and a mechanism for directing users to servers. But not all CDIs are built upon the same philosophy, design, and technology. The server deployment strategy is one of the most crucial factors in any CDI architecture and has a high influence on the possible performance gains. Therefore, we dedicate a full chapter to the CDI deployment strategies: Chapter 3 gives a detailed overview of the current content delivery landscape, and we discuss the challenges content delivery faces today in Chapter 3.1. The classical deployment strategies for content delivery infrastructures are described in Chapter 3.2, and in Chapter 3.3 we introduce emerging trends in content delivery, such as Hybrid and Meta CDIs.

In the remainder of this section we introduce the different possible solutions for the three main components of a CDI and discuss their various benefits and drawbacks. To introduce the general concept of content distribution, Chapter 2.4.1 provides an illustrative example of how content delivery using CDI resources works in general. Chapter 2.4.2 introduces the two main concepts for content replication: push based and pull based content replication. In Chapter 2.4.3, we introduce the different mechanisms to assign end-users to CDI servers and discuss their benefits and drawbacks. Remember that the detailed discussion of the different deployment strategies is left for the next chapter.

2.4.1 Content Delivery 101

The goal of this section is to introduce the general concept of content delivery in the Internet. Figure 2.3 shows an example of how content delivery infrastructures are embedded into the Internet architecture and what the resulting traffic flows to the end-users look like. Recall, the Internet is a global system of interconnected Autonomous Systems (ASes), each operated by an Internet Service Provider (ISP), see Chapter 2.1. The example shows three ASes, numbered 1-3, with each AS operating a couple of backbone routers. For inter-connectivity, AS1 has established a peering link with AS2 and AS3, while AS2 and AS3 have established two peering links. A Content Producer (CP), example.com, utilizes a centralized hosting infrastructure in AS2 to deliver the HTML Web page depicted in Figure 2.4. The Web page also contains two images, img1.png and img2.png, that are distributed by two different CDIs, cdi-a.com and cdi-b.com. The server locations differ from CDI to CDI and depend on contractual agreements between the CDI and the individual ISPs. In some cases the servers are deployed in the data centers of the ISP or deep within the network, e.g., co-located in the network aggregation points (PoPs), and therefore belong to the same AS.
End-users of those ISPs are typically served by the CDI servers inside the ISP's network. The first CDI, cdi-a.com, utilizes such an approach and has deployed its servers deep inside the network of AS1, location α, and AS3, location β. In other cases CDIs utilize multiple well connected datacenters with direct peerings to ISPs. The second CDI, cdi-b.com, utilizes this approach and has servers deployed in two datacenters to deliver content to the end-users. Datacenter I has a direct peering with AS1, while datacenter II is multihomed³ with connectivity to AS1 and AS3. With other ISPs there may be no relationship with the CDI at all, and the traffic to the end-users of those ISPs is routed via another AS, the so called transit AS. Let us consider the steps that are necessary to download the Web page shown in Figure 2.4. This page consists of the main HTML page index.html located at http://www.example.com/index.html and two embedded image objects, img1.png and

³ Multihoming describes the fact that the datacenter is connected to more than one network providing Internet access.

Figure 2.3: Example of CDI deployments and traffic flows (Web traffic demands).

img2.png, located at cdi-a.com/img1.png and cdi-b.com/img2.png respectively. The Content Producer responsible for example.com has decided to use the services of two CDIs to deliver the embedded images, while the main HTML page (index.html) is served from the CP's own centralized hosting infrastructure in AS2. The first image (img1.png) is hosted by cdi-a.com and the second image (img2.png) by cdi-b.com. The resulting traffic flows are shown in Figure 2.3. If a specific client from client set A in AS1 requests the Web page at www.example.com, it first resolves the hostname using the Domain Name System (DNS), which returns the IP address of a server from the centralized hosting infrastructure of the CP in AS2. The client then utilizes the HTTP protocol to connect to the Web server and requests the HTML page index.html. After receiving the Web page, the client needs to get the two embedded image objects to be able to render the full Web page. It will again resolve the hostnames using DNS, and the CDIs in question will return the IP address of the nearest server based on the client's location. In the case of our client from set A, cdi-a.com will utilize a server from location α in AS1 to deliver img1.png, while cdi-b.com uses datacenter I to serve the second image object img2.png. In contrast, if a specific client from client set B requests the Web page, the two image objects hosted on the CDI infrastructure are delivered from different servers, namely a server in location β for cdi-a.com and another server in datacenter II for cdi-b.com respectively. The main HTML page index.html, on the other hand, is still delivered from the centralized hosting infrastructure of the CP in AS2. The resulting traffic

Figure 2.4: Example Web page with some CDN content.

flows are depicted in Figure 2.3, which also shows the advantage of utilizing CDIs to deliver content, namely the shorter distance between the end-user and the server delivering the content and, to some extent, the avoidance of inter-AS peering links.

2.4.2 Content Replication

Content replication in the context of content delivery infrastructures describes the process of duplicating and distributing content from the origin Web server to the CDI servers, which store the content locally for fast access. This enables the CDI server to satisfy requests for content directly from the local storage, the so called cache, without the need to fetch it from the origin Web server first. An important aspect of content replication is the coherence of the content in the local cache and the origin Web server. The content replication mechanism in place must ensure that the content stored in and served from the local cache is the same as if served from the origin Web server. Highly related to the content replication mechanism is the caching algorithm, which is used to determine which objects are stored, updated, or evicted. There is an entire field of research dedicated to this area, and it is thus out of scope for this thesis. For more information see, e.g., [1, 28, 52, 136, 173]. A very simple form of content replication implies having a local copy of all objects from the origin Web server. But the tremendous amount of content with frequent additions and updates, in combination with the huge number of servers that constitute today's content delivery infrastructures, makes this approach technically and economically infeasible. So far, mainly two different content replication strategies are employed in content delivery today: In pull based content replication, a request for content that is not available in the local cache will trigger a recursive request at the CDI server.
When a requested object is not locally available, the server will first try to fetch it from neighboring servers

in the same cluster or region. If the object is not available at neighboring servers, the origin server responsible for the object is contacted to retrieve it. The received object is first stored in the local cache and then delivered to the end-user. To keep the content up to date, objects are usually assigned a time-to-live (TTL) value, which describes for how long this copy can be considered valid. If the TTL of an object has expired, the object can be re-fetched or evicted from the cache. The pull based content replication strategy allows the CDI to assign any user to any cache, as it ensures that the content, if not locally available, will be fetched from the origin server and then served to the end-user. This increases the scalability of the content delivery infrastructure [169] and is used by many CDIs today [128]. Yet, a slight drawback exists: the first request for each object will result in a cache miss, and the resulting recursive request will induce an increased delay for the end-user that issued the original request. Also, the limited local storage might result in objects being evicted from the cache and thus again create cache misses and increased delays. Push based replication describes the approach where content is duplicated and actively distributed, or pushed, to some or all CDI servers. This strategy tries to avoid the initial cache miss that is inherent in the pull based content replication approach and allows the CDI to pre-populate the servers before the demand for content is expected to begin. This scenario is especially interesting for large scale events that can be planned in advance, e.g., airing a new episode of a popular TV series. Moreover, it alleviates the need for a caching algorithm, as the required local storage is known in advance.
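The pull based strategy with TTL-controlled coherence described above can be condensed into a short sketch; the origin objects, the TTL value, and the function name are invented for illustration:

```python
import time

ORIGIN = {"/img1.png": b"png-bytes-1", "/img2.png": b"png-bytes-2"}  # toy origin
TTL = 60  # seconds a cached copy stays valid (invented value)

cache = {}           # path -> (object, expiry timestamp)
origin_fetches = 0   # counts cache misses that hit the origin

def serve(path, now=None):
    """Serve from the local cache; on a miss or an expired TTL, pull the
    object from the origin server first (pull based replication)."""
    global origin_fetches
    now = time.time() if now is None else now
    entry = cache.get(path)
    if entry is None or entry[1] < now:      # miss or TTL expired
        obj = ORIGIN[path]                   # recursive request to origin
        origin_fetches += 1
        cache[path] = (obj, now + TTL)
        return obj
    return entry[0]                          # cache hit

serve("/img1.png", now=0)    # first request: cache miss, origin fetch
serve("/img1.png", now=10)   # within TTL: served from local cache
serve("/img1.png", now=100)  # TTL expired: re-fetched from origin
print(origin_fetches)        # 2
```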
In contrast to the pull based content replication approach, the push based approach does not allow the CDI to assign end-users to arbitrary servers, but requires the decision to consider the locally stored objects on each server of the content delivery infrastructure. Considering the huge number of servers of today's content delivery infrastructures and the tremendous amount of storage (and thus objects) modern servers have, this is by no means an easy task. As a result, the complexity of this approach, and thus of the whole content delivery system, is increased manyfold. Moreover, every mistake in the server assignment, even when caused by, e.g., faulty or mis-behaving middleboxes, will result in object or (even worse) page load errors, diminishing the end-user's quality of experience significantly. However, combined with pull based content replication, this approach is actively used, especially by CDIs delivering large objects, e.g., high definition video or software.

2.4.3 End-User to Server Assignment

To complete the picture, one question remains: How does the CDN choose the nearest server to deliver the content from? Today's CDN landscape relies mainly on three techniques to assign end-users to servers:

1. IP-Anycast
2. DNS based redirection
3. HTTP redirection

While all techniques help the CDNs to assign end-users to their servers, each of them has different drawbacks, the most notable being the possible inaccuracy due to end-user mis-location. Chapter 3.1 will provide more details on this and other challenges content delivery faces today, and a later chapter presents various solutions to overcome some of them. The remainder of this section explains how the different techniques for assigning end-users to CDI servers work and also shortly discusses their limitations:

IP-Anycast: IP Anycast is a routing technique used to send IP packets to the topologically closest member of a group of potential CDN servers. IP Anycast is usually realized by announcing the destination address from multiple locations in a network or on the Internet. Since the same IP address is available at multiple locations, the routing process selects the shortest route to the destination according to its configuration. Simply speaking, each router in a network selects one of the locations the Anycasted IP is announced from, based on the used routing metrics (e.g., path length or routing weights), and configures a route towards it. Note that, if a network learns of an Anycasted IP address from different sources, it does not necessarily direct all its traffic to one of those locations. Its routing can decide to send packets from region A in the network to location A while region B gets a route to location B. This means that the entire server selection of a CDN becomes trivial, as it is now a part of the routing process. At the same time, however, the CDN loses control of how the users are mapped to the servers, because the network calculates the routing based on its own metrics. Another issue is that the routing in a network is optimized based on the ISP's criteria, which might not be the same as the CDN's, or might even be contrary to them. Thus, the nearest server might not be the best one the CDN could offer.
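The per-router selection described above can be sketched as follows; the routers, metric values, and location names form an invented toy topology, not a real routing table:

```python
# Each router knows a routing metric (e.g., IGP cost) towards every
# location announcing the same anycasted IP. All values are invented.
COSTS = {
    "router-region-A": {"location-A": 5, "location-B": 12},
    "router-region-B": {"location-A": 9, "location-B": 3},
}

def anycast_next_hop(router):
    """Each router independently picks the announcing location with the
    lowest metric -- so different regions may reach different servers."""
    metrics = COSTS[router]
    return min(metrics, key=metrics.get)

print(anycast_next_hop("router-region-A"))  # location-A
print(anycast_next_hop("router-region-B"))  # location-B
```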
DNS based redirection: Today most CDNs rely on the Domain Name System (DNS) to direct users to appropriate servers. When requesting content, the end-user typically asks a DNS resolver, e.g., the resolver of its ISP, for the resolution of a domain name. The resolver then asks the authoritative server for the domain. This can be the CDN's authoritative server, or the content provider's authoritative server, which then delegates to the CDN's authoritative server. At this point the CDN selects the server for this request based on where the request comes from. But the request does not come directly from the end-user but from its DNS resolver! Thus, the CDN can only select a server based on the IP address of the end-user's DNS resolver. To improve the mapping of end-users to servers, the client-ip edns extension [48] has recently been proposed. Criteria for server selection include the availability of the server, the proximity of the server to the resolver, and the monetary cost of delivering the content. For proximity estimations the CDNs rely heavily on network measurements [128] and geolocation information [114] to figure out which of their servers is close by and has the best network path performance. A recent study [6] showed that sometimes the end-user is not close to the resolver, and another study points out that geolocation databases cannot be relied upon [141]. Thus the proximity estimation for the nearest CDN server highly depends on the quality and precision of network measurements and a proper DNS deployment by the ISPs.
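A minimal sketch of such a DNS-based mapping, assuming a hypothetical proximity map from client prefixes to servers (all addresses and server names below are illustrative), might look like this:

```python
import ipaddress

# Hypothetical proximity map the CDN derived from measurements and
# geolocation: client prefixes -> believed-closest server.
PROXIMITY_MAP = {
    ipaddress.ip_network("198.51.100.0/24"): "server-frankfurt",
    ipaddress.ip_network("203.0.113.0/24"): "server-newyork",
}
DEFAULT_SERVER = "server-fallback"

def resolve(resolver_ip, client_subnet=None):
    """Pick a server for a DNS query. Without the EDNS client-subnet
    option the CDN only sees the resolver's address; with it, the
    (truncated) end-user prefix can be used instead."""
    key = ipaddress.ip_address(resolver_ip)
    if client_subnet is not None:
        # edns client-subnet: decide based on the end-user's prefix
        key = ipaddress.ip_network(client_subnet).network_address
    for prefix, server in PROXIMITY_MAP.items():
        if key in prefix:
            return server
    return DEFAULT_SERVER
```

With only the resolver address, `resolve("198.51.100.53")` maps to the Frankfurt server; if the end-user is actually far from its resolver, passing its subnet, e.g., `resolve("198.51.100.53", "203.0.113.0/24")`, corrects the mis-location and yields the New York server instead.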

HTTP redirection: The Hypertext Transfer Protocol (HTTP) is today's de-facto standard to transport content in the Internet (see Chapter 4.1.1). The protocol has incorporated a mechanism to redirect users at the application level at least since it was standardized as version 1.0 in 1996 [32]. By sending an appropriate HTTP status code (HTTP status codes 3xx, see Chapter 2.3) the web server can tell the connected user that a requested object is available from another URL, which can also point to another server. This allows a CDN to redirect an end-user to another server, e.g., because of limited server capacity, poor transfer performance, or the availability of a server closer to the end-user, say, when a client from the US connects to a server in Europe although the CDN has servers in the US. The HTTP redirection mechanism has some important benefits over the DNS based approach. First, the CDN directly communicates with the end-user and thus knows the exact destination it sends the traffic to (as opposed to assuming that the DNS resolver is close to the end-user). Yet, it still has to estimate the proximity of the end-user using the same methodologies as described in the DNS based case. Second, the CDN already knows which object the end-user requests and can use this information for its decision. It allows a CDN to direct a user towards a server where the content object is already available, to improve its cache hit rate. Other important information includes the size and type of the object. This allows the CDN to optimize the server selection based on the requirements to transfer the object, e.g., for delay-sensitive ones like streaming video or more throughput-oriented ones like huge software patches. Yet, this improvement comes at a price, as the user has to establish a new connection to another server. This includes another DNS lookup to get the server's IP address as well as the whole TCP setup, including performance-critical phases like slow start.
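A hypothetical redirect policy of such a CDN front-end can be sketched as follows; server names and regions are made up for illustration:

```python
# Hypothetical CDN front-end: serve the request locally, or answer
# with an HTTP 302 pointing at a server closer to the client.
SERVERS = {
    "eu": "http://eu.cdn.example",
    "us": "http://us.cdn.example",
}

def handle_request(client_region, object_path, local_region="eu"):
    """Return a (status, headers) pair. Because the client talks to
    the server directly, the CDN knows the exact client address and
    the requested object (size, type) at decision time."""
    if client_region != local_region and client_region in SERVERS:
        # HTTP 3xx redirect: the client must open a new connection
        # (DNS lookup + TCP setup + slow start) to the target server.
        return 302, {"Location": SERVERS[client_region] + object_path}
    return 200, {"X-Served-By": SERVERS[local_region]}
```

For example, `handle_request("us", "/video/clip.mp4")` issues a 302 towards the US server, trading one extra connection setup against a closer delivery path. A real policy would also weigh the object type, as described above.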
Such redirects can repeat multiple times before an appropriate server is found, which delays the object delivery even further.

2.5 Virtualization

In recent years, virtualization has revolutionized the way we build systems [135]. Major advances in performance, stability and management, and the availability of off-the-shelf solutions, have led to what we today know as "The Cloud": dynamic allocation of virtually unlimited resources on demand. This new deployment paradigm becomes more and more important for any large-scale system and is therefore a highly relevant aspect for content delivery architectures. In the 1960s, IBM originally developed virtualization as a means to partition its large mainframe computers into several logical units. The capability of partitioning the available resources allowed multiple processes to run at the same time, thus improving efficiency while at the same time reducing maintenance overhead. Remember, back then computers were only capable of running a single process, and batch processing was considered state of the art in computer science. None of the very

basic operating system (OS) technologies we have today, such as interrupts, process, or memory management, existed back in the 60s [167].

[Figure 2.5: Full Virtualization. (a) Type 1 Hypervisor. (b) Type 2 Hypervisor.]

Today virtualization is commonly defined as a technology that introduces an intermediate abstraction layer between the underlying hardware and the operating system (OS) running on top of it. This abstraction layer, usually called virtual machine monitor (VMM) or hypervisor, basically conceals the underlying bare hardware and instead presents exact virtual replicas to the next layer up. This allows the hypervisor to partition the available hardware into one or more logical units called virtual machines (VMs), and thus to run multiple and possibly different OSes in parallel on the same physical hardware [150,167]. The benefits of virtualization are manifold and include:

Failure Mitigation: A failure in one VM does not influence the other VMs.

Consolidation: Fewer physical machines take up less space and power and require less capital investment in hardware.

Management: VMs can be easily allocated, de-allocated or migrated, and the virtual hardware can be dynamically adjusted to fit changing requirements.

Strong Isolation: VMs are completely isolated from each other, and a compromised VM does not result in all VMs being compromised.

Today, multiple different approaches to virtualization exist, and we next explain them in more detail.

Full Virtualization: The Full Virtualization [150] approach completely virtualizes the underlying hardware and exposes exact replicas to the OS(es) running on top. It comes in two different flavors that differ in what the VMM or hypervisor runs on.
The first type, usually referred to as a type 1 hypervisor, runs directly on top of the bare

metal hardware, see Figure 2.5a. In reality, the type 1 hypervisor can be considered an OS, as only the hypervisor can execute privileged instructions on the CPU, while a VM's privileged instructions cause a trap that turns control over to the hypervisor. The hypervisor then inspects the privileged instruction and emulates the exact behavior of the real hardware. However, to enable this virtualization approach, the CPU needs to support traps for privileged instructions of VMs running in non-privileged mode.

[Figure 2.6: Para- and OS-Virtualization. (a) Paravirtualization. (b) OS Level Virtualization.]

To enable virtualization on CPUs without support for privilege traps, type 2 hypervisors run on top of an existing host OS, usually as a normal user-space application, see Figure 2.5b. They also provide virtual replicas of the hardware to the virtual machines and manage the access to the physical hardware by fully emulating its behavior in software. In addition, all privileged instructions are replaced by calls to a function that handles the instruction in the hypervisor, a technique known as binary translation. Although one might expect type 1 hypervisors to greatly outperform type 2 ones, this is not the case. The reason is that traps require CPU context switches and thus invalidate various caches and branch prediction tables. Type 2 hypervisors replace the corresponding instructions with function calls within the executing process and thus do not incur the context-switching overhead. The best known virtualization solutions using the Full Virtualization approach are VMware Workstation [148] and Oracle VirtualBox [130].
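The trap-and-emulate principle described above can be illustrated with a toy model. The instruction names and "hardware" state below are invented; real hypervisors of course operate on CPU and device state, not Python objects:

```python
# Toy trap-and-emulate model: the "guest" executes an instruction
# stream; privileged instructions cannot run directly and instead
# trap to the hypervisor, which emulates their effect on the VM's
# *virtual* hardware. All instruction names are made up.
PRIVILEGED = {"out", "set_timer"}

class Hypervisor:
    def __init__(self):
        self.virtual_hw = {"timer": 0, "port": []}
        self.traps = 0

    def trap(self, insn, arg):
        # Inspect the trapped instruction and emulate the exact
        # behavior of the real hardware on the virtual replica.
        self.traps += 1
        if insn == "set_timer":
            self.virtual_hw["timer"] = arg
        elif insn == "out":
            self.virtual_hw["port"].append(arg)

def run_guest(program, hv):
    acc = 0
    for insn, arg in program:
        if insn in PRIVILEGED:
            hv.trap(insn, arg)   # "context switch" to the hypervisor
        elif insn == "add":      # unprivileged: runs natively
            acc += arg
    return acc

hv = Hypervisor()
result = run_guest([("add", 2), ("set_timer", 100), ("add", 3), ("out", "x")], hv)
```

In this sketch only the two privileged instructions trap; the unprivileged arithmetic "runs natively". The cost of each trap (a context switch in real hardware) is exactly why binary translation in type 2 hypervisors can be competitive.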
Paravirtualization: To further improve the performance of VMs, the Paravirtualization [167] approach requires modifications to the guest OS to make hypervisor calls instead of executing privileged operations, similar to processes making system calls in the OS. To this end, the hypervisor exposes an API (Application Programming Interface) that alleviates the need to emulate peculiar hardware instructions and the exact semantics of complicated instructions by shifting the execution of privileged instructions to the hypervisor. This results in a significant performance gain for the guest OS. Prominent examples of the Paravirtualization approach, see Figure 2.6a for an illustration, are Xen [30] and VMware ESX(i) [148].

OS Level Virtualization: In OS Level Virtualization [150] the hardware is virtualized at the OS level, where the "guest" OS shares the environment with the OS running on the hardware, i.e., the running host OS kernel is used to implement the different "guest" environments. Applications running in the guest environments see them as dedicated and isolated OSes. The main advantage of this approach, see Figure 2.6b, is the simplicity of its implementation and almost no performance impact on the application. On the downside, the VMs are limited to the kernel and system environment of the host OS, and a compromised VM endangers all other VMs as well, as the attacker gains access to the host. Prominent examples of this virtualization approach are BSD Jails [85], Solaris Containers [99] and Linux VServers [53].

Virtualization Today: Virtualization technology has revolutionized the way systems are built [135] and has seen major advances in terms of performance and stability. Once a major concern, the overhead of virtualization is negligible today, with VM boot-up times in the order of tens of seconds and almost no runtime overhead [150,175]. In addition, many tools for VM management have been developed, and a number of off-the-shelf solutions to spin up a virtual server based on detailed requirements [110] are readily available from vendors such as NetApp and Dell. Over the past years, big Cloud providers [19,117] have deployed large virtualized server infrastructures to leverage economies of scale and consolidation potential, offering VMs as Infrastructure as a Service (IaaS) to their customers. We conclude that virtualization is a mature technology offering flexible ways of deploying and managing server infrastructure on a large scale.

2.6 Peer-to-Peer Networks

The P2P paradigm has been very successful in delivering content to end-users. BitTorrent [46] is the prime example, used mainly for file sharing and synchronizing large amounts of data.
Other examples include more delay-sensitive applications such as video streaming [59,97,109]. Despite the varying and perhaps declining share of P2P traffic measured in different regions of the world [111], P2P traffic still constitutes a significant fraction of the total Internet traffic. Peer-to-peer (P2P) is a distributed system architecture in which all participants, the so-called peers, are equally privileged users of the system. A P2P system forms an overlay network on top of existing communication networks (e.g., the Internet). All participating peers of the P2P system are the nodes of the overlay network graph, while the connections between them are the edges. It is possible to extend this definition of edges in the overlay network graph to all known peers, in contrast to all connected peers. Based on how peers connect to each other and thus build the overlay network, we can classify P2P systems into two basic categories:

Unstructured: The P2P system does not impose any structure on the overlay network. The peers connect to each other in an arbitrary fashion; most often, peers are chosen randomly. Content lookups are flooded through the network (e.g., Gnutella), resulting in limited scalability, or not offered at all (e.g., plain BitTorrent).

Structured: Peers organize themselves following certain criteria and algorithms. The resulting overlay network graphs have specific topologies and properties that usually offer better scalability and faster lookups than unstructured P2P systems (e.g., Kademlia, the BitTorrent DHT).

The overlay network is mainly used for indexing content and peer discovery, while the actual content is usually transferred directly between peers. Thus, the connections between the individual peers have a significant impact on both the direct content transfers and the performance of the resulting overlay network. This has been shown in previous studies, and multiple solutions to improve the peer selection have been proposed [10,18,41,165,177], which are described in detail in a later chapter. To construct an overlay topology, unstructured P2P networks usually employ an arbitrary neighbor selection procedure [163]. This can result in a situation where a node in Frankfurt downloads a large content file from a node in Sydney, while the same information may be available at a node in Berlin. While structured P2P systems follow certain rules and algorithms, the information available to them either has to be inferred by measurements [146] or relies on publicly available information such as routing information [149]. Both options are much less precise and up-to-date than the information an ISP has readily at hand. It has been shown that P2P traffic often crosses network boundaries multiple times [9,86].
This is not necessarily optimal, as most network bottlenecks in the Internet are assumed to be either in the access network or on the links between ISPs, but rarely in the backbones of the ISPs [16]. Besides, studies have shown that the desired content is often available in the proximity of interested users [86,145], due to content language and geographical regions of interest. P2P networks benefit from increasing their traffic locality, as shown by Bindal et al. [34] for the case of BitTorrent. P2P systems usually implement their own routing [20] in the overlay topology. Routing on such an overlay topology is no longer done on a per-prefix basis, but rather on a query or key basis. In unstructured P2P networks, queries are disseminated, e.g., via flooding [73] or random walks, while structured P2P networks often use DHT-based routing systems to locate data [163]. Answers can either be sent directly using the underlay routing [163] or through the overlay network by retracing the query path [73]. By routing through the overlay of P2P nodes, P2P systems hope to use paths with better performance than those available via the Internet's native routing [20,152]. However, the benefits of redirecting traffic onto an alternative path, e.g., one with larger available bandwidth or lower delay, are not necessarily obvious. While the performance of the P2P system may temporarily improve, the available bandwidth of the newly chosen path may deteriorate due to the traffic added to it. The ISP then has to redirect some traffic so that other applications using

this path can receive enough bandwidth. In other words, P2P systems reinvent and re-implement a routing system whose dynamics should be able to explicitly interact with the dynamics of native Internet routing [87,156]. While a routing underlay as proposed by Nakao et al. [121] can reduce the work duplication, it cannot by itself overcome the problems created by this interaction. Consider a situation where a P2P system imposes a lot of traffic load on an ISP network. This may cause the ISP to change some routing metrics, and therefore some paths (at the native routing layer), in order to improve its network utilization. This can however cause a change of routes at the application layer by the P2P system, which may again trigger a response by the ISP, and so on. Peer-to-peer systems have been shown to scale application capacity well during flash crowds [178]. However, the strength of P2P systems, i.e., that anybody can share anything over this technology, also turns out to be a weakness when it comes to content availability. In fact, mostly popular content is available on P2P networks, while older content disappears as users' interest in it declines. In the case of BitTorrent, this leads to torrents missing pieces, in which case a download can never be completed. In the case of video streaming, the video might simply no longer be available, or the number of available peers may be too low to sustain the required video bit-rate, resulting in gaps or stuttering of the video stream. Another challenge stems from the fact that in P2P systems peers can choose among all other peers to download content from, but only if they have the desired content. Thus, the problem of getting content in a P2P system is actually two-fold: first the user needs to find the content, and once it knows of possible peers it can download the content from, it needs to connect to some of them to get the desired content.
Therefore, the overhead for locating, and sometimes also transferring, content in a P2P overlay network often causes P2P traffic to starve other applications, like Web traffic, of bandwidth [158]. This is because most P2P systems rely on application-layer routing based on an overlay topology on top of the Internet, which is largely independent of the Internet routing and topology [9]. As a result, P2P systems use more network resources due to traffic crossing the underlying network multiple times.
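The two-step process described above, first locating the peers responsible for a piece of content in the overlay and then transferring directly between peers, can be sketched for a structured, Kademlia-style system, where the peers whose IDs are closest to a content key under the XOR metric are responsible for it. Peer names and the content key are illustrative:

```python
import hashlib

# Toy Kademlia-style lookup: peers and content keys share one flat ID
# space; a key is stored on the peers whose IDs are closest to it
# under the XOR distance metric. Peer names are made up.
def node_id(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

PEERS = {name: node_id(name) for name in ["alice", "bob", "carol", "dave"]}

def closest_peers(content_key, k=2):
    """Step 1: locate the k peers responsible for the content in the
    structured overlay (here: a global sort instead of the iterative
    lookup a real DHT would perform)."""
    key = node_id(content_key)
    return sorted(PEERS, key=lambda p: PEERS[p] ^ key)[:k]

# Step 2 -- connecting to the returned peers and transferring the
# content -- then happens directly, outside the overlay.
responsible = closest_peers("ubuntu.iso")
```

Note that the XOR distance says nothing about network proximity: the "closest" peer in ID space may well sit on another continent, which is exactly the locality problem discussed in this section.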

3 Content Delivery Infrastructures: Architectures and Trends

The previous chapter provided us with the necessary background information to understand how content delivery in the Internet works in general: the protocols and technologies utilized by CDIs and the underlying network structures of the Internet. We now turn our attention to the different architectures and upcoming trends in the content delivery business in more detail. The special focus lies on the different server deployment strategies used by today's content delivery infrastructures. This chapter consists of three parts: First, we discuss the challenges each party involved in the technical process of content delivery faces, namely the network operators (ISPs) and the Content Delivery Infrastructures (CDIs). Second, we give an overview of the deployment strategies of current content delivery architectures and discuss their advantages and drawbacks. Third, we describe emerging trends in CDI architectures and how they tackle the previously explained challenges.

3.1 Challenges in Content Delivery

Even today, decades after the launch of commercial content delivery, the challenges in content delivery are still manifold and affect everyone involved in the process of delivering content to end-users around the world. The tremendous growth of traffic in recent years is boon and bane of network operators around the world. On the one hand, the increased demand for content, such

as high definition video streaming or rich media websites, is one of the major drivers for end-users to upgrade their Internet access speeds. On the other hand, network operators find that the sheer amount and high volatility of traffic originating from content delivery poses a significant traffic engineering challenge and thus complicates the provisioning of the network [123]. The challenges content delivery systems are faced with stem from the fact that they are largely unaware of the underlying network infrastructure and its conditions. In the best case, the CDI can try to infer the topology and state of the network through measurements, but even with large-scale measurements this is a difficult and error-prone task, especially if accuracy is required. Furthermore, when it comes to short-term congestion and/or avoiding network bottlenecks, measurements are of no use. While many collaborative approaches have been proposed [10,41,138,177] to tackle this issue (we will elaborate on such solutions in Chapter 5.3.1), none of them is in operational use yet. In the following, we describe the challenges network operators and CDIs face in more detail. In addition, Table 3.1 summarizes the benefits and drawbacks of the different architectures presented in this chapter.

[Table 3.1: Benefits and drawbacks of classical and emerging CDI architectures. Architectures (columns): Centralized, Datacenter, Distributed, ISP operated, Hybrid, Licensed, Application, Meta, Federated. Benefits (rows): End-User Location Available, Network Information Available, Deployment Agility, Network Footprint, Network Integration, System Complexity, Content Provider Business Relationship, End-User Business Relationship. Ratings: - = low, o = medium, + = high, limited, ? = unknown.]

Network Operators (ISPs)

ISPs face several challenges regarding the operation of their network infrastructure. With the emergence of content delivery, and especially with its distributed nature, be it from CDIs or P2P networks, these operational challenges have increased manifold.

Network Provisioning: Network provisioning is an iterative process that encompasses the planning, design, deployment, and operation of network infrastructure. It aims at ensuring normal day-to-day operations as well as meeting the future needs of subscribers and of operators of possible new services. This process includes the proper dimensioning of core routers and link capacities as well as establishing or upgrading peering links and locations with other network operators. Adequate network provisioning depends on realistic traffic demand forecasts in terms of volume and origin. However, with the emergence of CDIs and P2P networks, network provisioning has become much more complex. The sheer amount of traffic generated by such infrastructures, and especially its high volatility, poses a significant challenge for any network planning.

Volatile Content Traffic: CDIs and P2P networks strive to optimize their own operational overhead and thus choose the most suitable server or peer based on their own criteria. As a result, traffic originating from content delivery is highly volatile, both spatially and temporally. With highly distributed CDIs and global-scale P2P networks, it becomes increasingly difficult for ISPs to predict where traffic enters the network, at what time, and in which quantities, diminishing the value of additional peering locations. Time-wise, short-lived demand surges, called flash crowds, and a much higher demand during peak hours, also known as the diurnal traffic pattern, complicate things further, as provisioning for peak demand becomes economically infeasible.
Together, these effects also have a direct implication on the traffic engineering capabilities of ISPs: traffic engineering is usually based on predictions derived from past network traffic patterns and requires some time to take effect.

Customer Satisfaction: Regardless of the increased difficulty of network provisioning and traffic engineering, end-users are demanding more and larger content, especially since the availability of high definition video services such as Netflix and YouTube. Coupled with the dominant form of customer subscription, flat-rate Internet access tariffs, the pressure on ISPs to reduce capital and operational costs, e.g., by delaying network upgrades or reducing management complexity, to keep prices competitive is enormous. Yet, pushing network utilization too far increases, e.g., packet loss and delay, and drastically reduces the Quality of Experience (QoE) of the end-user. This in turn paints a negative picture of the ISP in question and encourages end-users to cancel their subscription or switch to another provider with a better reputation.
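The tension between provisioning for peak and for typical demand can be made concrete with a toy diurnal demand curve. All values below are invented; the 95th percentile is a common provisioning and billing compromise:

```python
import math

# Invented diurnal demand curve (arbitrary units, one sample per
# hour): low at night, a broad evening peak, and a short flash
# crowd at hour 20.
demand = [30, 25, 22, 20, 20, 25, 35, 50, 60, 65, 70, 75,
          80, 80, 85, 90, 100, 120, 140, 150, 400, 160, 120, 80]

avg = sum(demand) / len(demand)
peak = max(demand)

# 95th percentile: capacity that suffices "almost always",
# ignoring the most extreme samples (here, the flash crowd).
p95 = sorted(demand)[math.ceil(0.95 * len(demand)) - 1]
```

In this sketch the flash crowd makes the peak (400) several times the average (about 88) and well above the 95th percentile (160): provisioning for the absolute peak leaves the network mostly idle, while provisioning below it means degraded quality during exactly the hours that matter most.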

Content Delivery Infrastructures (CDIs)

Economics, especially cost reduction, is a main concern in content delivery today, as Internet traffic grows at an annual rate of 30% [43]. Moreover, commercial-grade applications delivered by CDIs often have requirements in terms of end-to-end delay [96]. Faster and more reliable content delivery results in higher revenues for e-commerce and streaming applications [102,128] as well as higher user engagement [59]. Therefore, the network latency between the end-user and the CDI server is the key metric for optimizing the infrastructure. Although CDIs go to great lengths to further improve end-user performance, major obstacles in content delivery still exist.

Network Bottlenecks: Despite their efforts to discover the end-to-end characteristics between servers and end-users to predict performance [96,128], CDIs have limited information about the actual network conditions. Tracking the ever-changing network conditions, i.e., through active measurements and end-user reports, incurs an extensive overhead for the CDI without a guarantee of performance improvements for the end-user. Without sufficient information about the characteristics of the network paths between the CDI servers and the end-user, the CDI's end-user assignment can add load to existing network bottlenecks, or even create new ones.

End-User Mis-location: DNS requests received by the CDI's authoritative DNS servers originate from the DNS resolver of the end-user, not from the end-user themselves. The assignment of end-users to servers is therefore based on the assumption that end-users are close to their DNS resolvers. Recent studies have shown that in many cases this assumption does not hold [6,112]. As a result, the end-user is mis-located and the server assignment is not optimal.
As a response to this issue, two DNS extensions have been proposed to include the end-user's IP address [48] or subnet information [47,132] in the request.

Limited Deployment Agility: To cope with the ever-increasing demand for content, CDIs have deployed massively distributed infrastructures. But growing the network footprint is becoming ever more challenging. On the one hand, deploying additional servers inside a network takes significant time and effort due to contract negotiations, limited space and power supply in aggregation points, and intense competition, sometimes even from the ISPs. On the other hand, the traffic demand is extremely volatile, especially because peak traffic is the fastest growing part, which makes provisioning difficult and finding the right location for a server even harder. A location that might look good today might be underutilized in the future, but the contracts usually run for long periods of time.

Content Delivery Cost: Finally, CDIs strive to minimize the overall cost of delivering huge amounts of content to end-users. To that end, their assignment strategy is mainly driven by economic aspects such as bandwidth or energy cost [108,143]. While a CDI will try to assign end-users in such a way that the server can deliver reasonable performance, this does not always result in end-users being assigned to

the server able to deliver the best performance. Moreover, the intense competition in the content delivery market has led to diminishing returns on delivering traffic to end-users. Part of the delivery cost is also the maintenance and constant upgrading of hardware and peering capacity in many locations [128].

3.2 Content Delivery Landscape

To cope with the continuously growing end-user demand for content and to ensure the required quality levels in content delivery, CDIs have deployed huge distributed server infrastructures that replicate and distribute popular content in many different locations on the Internet [7,102], posing significant deployment challenges. To complicate matters further, some of these infrastructures are entangled with the very infrastructures that provide network connectivity to end-users. But not all CDIs are built upon the same philosophy, design, and technology. For example, the required infrastructure for content delivery can be deployed and operated by an independent third party, often referred to as a Content Delivery Network (CDN), with infrastructure deployment strategies ranging from centralized hosting facilities, e.g., renting space in a well-connected datacenter or leasing resources in a public cloud, over multiple dedicated datacenters in geographically disperse locations with direct connectivity to all relevant network operators in each region, to a highly distributed deployment of thousands of caches deep inside many different networks.
A more specialized CDI architecture is operated by ISPs; it offers a more network-integrated deployment but also limits the CDI footprint to the ISP's own network.

Independent Content Delivery

Content Delivery Infrastructures operated by autonomous third parties, also known as Content Delivery Networks or CDNs, are called independent CDIs because they operate their server infrastructure and deliver content independently from the underlying network that provides the necessary connectivity. Such CDIs usually either negotiate dedicated peering agreements with network operators or pay them for connectivity just as any other customer does, e.g., end-users or corporate networks. Thus, the CDI is not overly concerned with the load it imposes on the network; it considers network connectivity simply a service it pays for and leaves the management of the network to the operators. However, the load of the network providing connectivity has a significant influence on the end-users' performance, and recently both CDIs and network operators have started to look more and more towards collaborative approaches to further optimize content delivery, as discussed in a later chapter. Independent CDIs have a strong customer base of content producers and are responsible for delivering the content of their customers to end-users around the world. Based on traffic volume as well as hosted content, CDIs are today by and large the biggest

players on the Internet, spearheading every recent traffic study and expected to grow further in the future. But the content delivery market has become highly competitive, with many new entrants such as network operators or companies offering cloud computing. In addition, dwindling profit margins in storage and processing [22] further increase the economic pressure. To remain competitive, independent CDIs strive to increase their network footprint, optimize the performance for end-users and, probably most importantly, try to reduce the content delivery cost itself. The general architecture of CDIs as described in Chapter 2.4 consists of three main components: (1) the deployment of a server infrastructure, (2) a strategy for content replication, and (3) a mechanism to direct users to servers. The remainder of this section focuses on the benefits and limitations of current deployments utilized by independent CDIs [102]: centralized hosting, datacenter based, and distributed infrastructures.

[Figure 3.1: Centralized Hosting.]

Centralized Hosting: Centralized hosting is the most traditional deployment strategy for servers. It utilizes a single or a small number of geographical locations, e.g., co-located servers in a datacenter or rented resources from a cloud provider 1, to host and distribute content. This approach is usually used by small sites catering to a localized audience, One-Click Hosters, and applications running in the public cloud. Centralized hosting takes advantage of (a) the economies of scale that a single location offers [22], (b) the flexibility that multihoming offers [74], and (c) the connectivity opportunities that IXPs offer [5].
Using multiple geographically dispersed locations provides improved performance, due to being closer to different sets of end-users; higher reliability, through redundancy; and scalability, through additional resources — but at the same time it multiplies the management overhead. Yet, for many commercial-grade applications with strict service requirements, the performance and reliability fall short of expectations, as the end-user experience depends on the absence of middle-mile bottlenecks in the Internet. At the same time, the overall scalability of this deployment strategy is limited, because the total capacity of a single location is constrained by the physical space available to place servers, the provided electricity, and the available connectivity in terms of access bandwidth to the Internet. Another major disadvantage of centralized hosting is the potential single point of failure: service can be disrupted by natural disasters, distributed denial of service (DDoS) attacks on parts of the infrastructure can affect the whole deployment, and cut fiber-optic cables can leave it with limited or no connectivity [102]. In addition, traffic levels fluctuate tremendously, especially during peak hours [43], and the need to provision for peak traffic can result in an infrastructure that is underutilized most of the time. Moreover, accurately predicting future traffic demands is difficult and challenging. A centralized hosting architecture therefore often offers neither sufficient agility to handle unexpected demand surges nor the flexibility to scale the infrastructure for global operations. Moreover, it limits the CDI's ability to ensure low latency to end-users located in different networks around the world [105].

1 Cloud providers usually operate multiple, geographically dispersed datacenters that a customer can manually select when requesting new resources.

Figure 3.2: Datacenter Based

Datacenter Based: The datacenter based content delivery architecture can be seen as the natural evolution of centralized hosting that, simply speaking, merely multiplies the number of locations from which content is delivered. The continuous demand for increased capacity and improved performance for end-users of applications and websites on a global scale has driven CDIs to increase their network footprint and delivery capacities in different regions of the world. Switching to a content delivery architecture that comprises many large and well connected datacenters in highly populated regions of the world enables CDIs to compensate for many shortcomings of centralized hosting infrastructures.
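The provisioning problem noted above — sizing a site for its traffic peak — can be illustrated with a toy calculation; the diurnal demand curve below is invented for illustration only:

```python
import math

# Hypothetical hourly demand (Gbps) with a diurnal shape: a flat base load
# plus a daytime bulge between 08:00 and 22:00. The gap between peak and
# average shows why a site provisioned for peak traffic sits underutilized
# most of the time.
demand = [40 + 60 * max(0.0, math.sin(math.pi * (h - 8) / 14)) for h in range(24)]

peak = max(demand)
avg = sum(demand) / len(demand)
print(f"peak {peak:.0f} Gbps, average {avg:.0f} Gbps, "
      f"utilization of peak-provisioned capacity {avg / peak:.0%}")
```

With this assumed curve, average utilization of a peak-provisioned site lands well below two thirds; real traffic traces show day/night swings of similar or larger magnitude.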
The availability of multiple redundant, geographically dispersed datacenters connected to major Internet backbones and the most important local networks reduces latency towards end-users in each region and increases the total delivery capacity of the CDI, while further leveraging economies of scale to reduce the cost of content delivery. Thus, the datacenter based content delivery architecture allows CDIs to further scale up their operations and improves the delivery of content manifold, while at the same time reducing the total cost of the content delivery infrastructure.

Figure 3.3: Highly Distributed

However, even with multiple well connected datacenters in each region of the world, the potential performance improvements are still limited because the CDI servers remain too far away from most end-users: due to the long-tail distribution of end-users over all the networks that make up the Internet, the requested content for more than 50% of all users needs to traverse many middle-mile networks, even when the CDI connects to all major Tier-1 backbones [102]. While the availability of multiple redundant datacenters offers the possibility to avoid network bottlenecks thanks to increased path diversity, redirecting end-users to another datacenter usually incurs a major performance degradation. Also, the end-user to server assignment becomes much more important in such an architecture, as selecting the correct (meaning closest) datacenter is the most crucial factor for the performance experienced by end-users. Recall that assigning end-users to servers is by no means a trivial task (a later chapter discusses the available mechanisms, including their benefits and drawbacks) and that significant improvements are possible through collaborative approaches between CDIs and network operators. Altogether, the ability to compensate for sudden surges in demand is much better than with centralized hosting but still somewhat limited; this, together with the missing agility to react to large shifts in end-user demand without sacrificing performance, is the biggest drawback of this architecture. Nonetheless, this type of architecture is highly popular. CDNs such as Limelight, EdgeCast, and BitGravity use it, as do many recent cloud computing deployments, such as Amazon CloudFront and Microsoft Azure.
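The importance of the end-user to server assignment, and the penalty of redirecting users away from their closest datacenter, can be sketched as follows (capacities and RTTs are invented for illustration; this is not the assignment logic of any particular CDI):

```python
# Hypothetical sketch: users are mapped to the nearest datacenter, spilling
# over to the next-nearest one when the preferred site is at capacity. The
# RTT jump on spill-over illustrates the performance cost of redirection.

datacenters = {"dc-eu": 2, "dc-us": 2}   # assumed capacity (concurrent users)
rtt = {"dc-eu": 20, "dc-us": 110}        # assumed RTT in ms from one user region
load = {dc: 0 for dc in datacenters}

def pick(region_rtt, caps, load):
    """Return (datacenter, rtt) for the nearest site with spare capacity."""
    for dc in sorted(region_rtt, key=region_rtt.get):   # nearest first
        if load[dc] < caps[dc]:
            load[dc] += 1
            return dc, region_rtt[dc]
    raise RuntimeError("all datacenters full")

for user in range(3):
    dc, ms = pick(rtt, datacenters, load)
    print(f"user{user} -> {dc} ({ms} ms)")
# user0 -> dc-eu (20 ms)
# user1 -> dc-eu (20 ms)
# user2 -> dc-us (110 ms)
```

The third user pays more than five times the RTT of the first two solely because the nearby site is full — the kind of degradation that collaborative, network-informed assignment aims to avoid.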
Distributed Infrastructures: The third approach to scaling up content delivery is a highly distributed infrastructure: instead of deploying many servers in a few well connected locations, this architecture deploys relatively small groups of servers, usually called clusters, in many networks around the world. This approach scales the CDI horizontally by deploying the infrastructure in thousands of networks rather than dozens, as in the datacenter based design. The smaller size and power requirements of clusters allow the CDI to push its servers deep inside the networks, usually into aggregation points, often referred to as Points of Presence (PoPs).
