DTN Routing as a Resource Allocation Problem


Aruna Balasubramanian, Brian Neil Levine and Arun Venkataramani
Department of Computer Science, University of Massachusetts Amherst, MA, USA

ABSTRACT

Many DTN routing protocols use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, so these approaches have only an incidental effect on routing metrics such as maximum or average delivery delay. In this paper, we present rapid, an intentional DTN routing protocol that can optimize a specific routing metric such as worst-case delivery delay or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities which determine how packets should be replicated in the system. We evaluate rapid rigorously through a prototype deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real DTN at this scale. Our results suggest that rapid significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads rapid is within 10% of the optimal performance.

Categories and Subject Descriptors: C.2 [Computer Communication Network]: Network Protocols; Routing Protocols

General Terms: Design, Performance

Keywords: DTN, deployment, mobility, routing, utility

This work was supported in part by NSF awards CNS and CNS.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGCOMM'07, August 27-31, 2007, Kyoto, Japan. Copyright 2007 ACM.

1. INTRODUCTION

Disruption-tolerant networks (DTNs) enable transfer of data when mobile nodes are connected only intermittently. Applications of DTNs include large-scale disaster recovery networks, sensor networks for ecological monitoring [37], ocean sensor networks [29, 24], PeopleNet [26], vehicular networks [27, 6], and projects such as TIER [2], Digital Study Hall [15], and One Laptop Per Child [1] to benefit developing nations. Intermittent connectivity can be a result of mobility, power management, wireless range, sparsity, or malicious attacks. The inherent uncertainty about network conditions makes routing in DTNs a challenging problem.

The primary focus of many existing DTN routing protocols is to increase the likelihood of finding a path with extremely limited information. To discover such a path, a variety of mechanisms are used, including estimating node meeting probabilities, packet replication, network coding, placement of stationary waypoint stores, and using prior knowledge of mobility patterns. Unfortunately, the burden of finding even one path is so great that existing approaches have only an incidental rather than an intentional effect on such routing metrics as worst-case delivery latency, average delay, or percentage of packets delivered. This disconnect between application needs and routing protocols hinders deployment of DTN applications. Currently, it is difficult to drive the routing layer of a DTN by specifying priorities, deadlines, or cost constraints. For example, a simple news and information application is better served by maximizing the number of news stories delivered before they are outdated, rather than maximizing the number of stories eventually delivered.
In this paper, we present a resource allocation protocol for intentional DTN (rapid) routing, a protocol designed to explicitly optimize an administrator-specified routing metric. rapid routes a packet by opportunistically replicating it until a copy reaches the destination. rapid translates the routing metric to per-packet utilities that determine, at every transfer opportunity, whether the marginal utility of replicating a packet justifies the resources used.

rapid loosely tracks network resources through a control plane to assimilate a local view of the global network state. To this end, rapid uses an in-band control channel to exchange network state information among nodes using a fraction of the available bandwidth. rapid's control channel builds on insights from previous work; e.g., Jain et al. [19] suggest that DTN routing protocols that use more knowledge of network conditions perform better, and Burgess et al. [6] show that flooding acknowledgments improves delivery rates by removing useless packets from the network. rapid nodes use the control channel to exchange additional metadata, including the number and location of replicas of a packet and the average size of past transfers. Even though this information is delayed and inaccurate, the mechanisms in rapid's control plane, combined with its utility-driven replication algorithms, significantly improve routing performance compared to existing approaches.

We have built and deployed rapid on a vehicular DTN testbed, DieselNet [6], that consists of 40 buses covering a 150 square-mile area around Amherst, MA. The buses intermittently connect to each other; each bus carries 802.11b radios and a moderately resourceful computer. We collected 58 days of performance traces of the rapid deployment. To our knowledge, this is the first paper to report on a DTN routing protocol deployed at this scale. Similar testbeds have deployed only flooding as a method of packet propagation [37].

We also conduct a simulation-based evaluation using real traces to stress-test and compare various protocols. To ensure a fair comparison to other DTN protocols (that we did not deploy), we collected traces of the bus-to-bus meeting duration and bandwidth during the 58 days. We then constructed a trace-driven simulation of rapid and show that the simulator provides performance results that are within 1% of the real measurements with 95% confidence. We use this simulator to compare rapid to four existing routing protocols [23, 32, 6] and random routing. We also compare the protocols using synthetic mobility models. To show the generality of rapid, we evaluate three separate routing metrics: minimizing average delay, minimizing worst-case delay, and maximizing the number of packets delivered before a deadline. Our experiments using trace-driven and synthetic mobility scenarios show that rapid significantly outperforms four other routing protocols.
For example, in trace-driven experiments under moderate-to-high loads, rapid outperforms the second-best protocol by about 20% for all three metrics, while also delivering 15% more packets for the first two metrics, i.e., rapid delivers more packets better. With a priori mobility information and moderate-to-high loads, rapid outperforms random replication by about 50% for all metrics while delivering 40% more packets. We also compare rapid to an optimal protocol and show empirically that rapid performs within 10% of optimal for low loads. All experiments include the cost of rapid's control channel.

In summary, our primary contribution is to demonstrate the feasibility of an intentional routing approach for DTNs. To this end, we present:
- A utility-driven DTN routing protocol, rapid, instantiated with three different routing metrics: minimizing average delay, minimizing maximum delay, and minimizing the number of packets that miss a deadline (Sections 3 and 4).
- Deployment and evaluation of rapid on a vehicular testbed to show performance in real scenarios, and to validate our trace-driven simulator (Section 5).
- Comprehensive experiments using a 58-day trace that show that rapid not only outperforms four other protocols for each routing metric, but also consistently delivers a larger fraction of packets (Section 6).
- Hardness results to substantiate rapid's heuristic approach, which prove that online algorithms without complete future knowledge and with unlimited computational power, or computationally limited algorithms with complete future knowledge, can be arbitrarily far from optimal (Section 3 and appendix).

2. RELATED WORK

Replication vs. Forwarding. We classify related existing DTN routing protocols as those that replicate packets and those that forward only a single copy. Epidemic routing protocols replicate packets at transfer opportunities hoping to find a path to a destination. However, naive flooding wastes resources and can severely degrade performance.
Proposed protocols attempt to limit replication or otherwise clear useless packets in various ways: (i) using historic meeting information [12, 7, 6, 23]; (ii) removing useless packets using acknowledgments of delivered data [6]; (iii) using probabilistic mobility information to infer delivery [31]; (iv) replicating packets with a small probability [36]; (v) using network coding [35] and coding with redundancy [18]; and (vi) bounding the number of replicas of a packet [32, 31, 25].

In contrast, forwarding routing protocols maintain at most one copy of a packet in the network [19, 2, 34]. Jain et al. [19] propose a forwarding algorithm to minimize the average delay of packet delivery using oracles with varying degrees of future knowledge. Our deployment experience suggests that, even for a scheduled bus service, implementing the simplest oracle is difficult; connection opportunities are affected by many factors in practice, including weather, radio interference, and system failure. Furthermore, we present formal hardness results and empirical results to quantify the impact of not having complete knowledge.

Jones et al. [2] propose a link-state protocol based on epidemic propagation to disseminate global knowledge, but use a single path to forward a packet. Shah et al. [3] and Spyropoulos et al. [34] present an analytical framework for the forwarding-only case assuming a grid-based mobility model. They subsequently extend the model and propose a replication-based protocol, Spray and Wait [32]. The consensus appears to be [32] that replicating packets can improve performance (or security [5]) over just forwarding, but risks degrading performance when resources are limited.

Incidental vs. Intentional. Our position is that most existing schemes have only an incidental effect on desired performance metrics, including commonly evaluated metrics such as average delay or delivery probability.
Spray and Wait [32] routes to reduce the delay metric, but does not take into account bandwidth or storage constraints. Because the general problem is theoretically intractable, the effect of a particular protocol design decision on the performance of a given resource-constrained network scenario is unclear. For example, several existing DTN routing algorithms [32, 31, 25, 6] route packets using the number of replicas as the heuristic, but the effect of replication varies with different routing metrics. In contrast, routing in rapid is intentional with respect to a given performance metric. rapid explicitly calculates the effect of replication on the routing metric while accounting for resource constraints.

Resource Constraints. rapid also differs from most previous work in its assumptions regarding resource constraints, routing policy, and mobility patterns.

Problem  Storage    Bandwidth  Routing      Previous work (and mobility)
P1       Unlimited  Unlimited  Replication  Epidemic [25], Spray and Wait [32]: constraint in the form of channel contention (grid-based synthetic)
P2       Unlimited  Unlimited  Forwarding   Modified Dijkstra's algorithm, Jain et al. [19] (simple graph), MobySpace [22] (power-law)
P3       Finite     Unlimited  Replication  Davis et al. [12] (simple partitioning synthetic), SWIM [31] (exponential), MV [7] (community-based synthetic), Prophet [23] (community-based synthetic)
P4       Finite     Finite     Forwarding   Jones et al. [2] (AP traces), Jain et al. [19] (synthetic DTN topology)
P5       Finite     Finite     Replication  This paper (vehicular DTN traces, exponential and power-law meeting probabilities, testbed deployment), MaxProp [6] (vehicular DTN traces)

Table 1: A classification of some related work into DTN routing scenarios.

Table 1 shows a taxonomy of many existing DTN routing protocols based on assumptions about the bandwidth available during transfer opportunities and the storage carried by nodes; both are either finite or unlimited. For each work, we state in parentheses the mobility model used. rapid is a replication-based algorithm that assumes constraints on both storage and bandwidth (P5), the most challenging and most practical problem space. P1 and P2 are important to examine for the valuable insights that theoretical tractability yields, but are impractical for real DTNs with limited resources. Many studies [23, 12, 7, 31] analyze the case where storage at nodes is limited, but bandwidth is unlimited (P3). This scenario may happen when the radios used and the duration of contacts allow transmission of more data than can be stored by the node. However, we find this scenario to be uncommon; typically, storage is inexpensive and energy-efficient. Trends suggest that high-bitrate radios will remain more expensive and energy-intensive than storage [13].
We describe how the basic rapid protocol can be naturally extended to accommodate storage constraints. Finally, for mobile DTNs, and especially vehicular DTNs, transfer opportunities are short-lived [17, 6]. We were unable to find protocols in P5 other than MaxProp [6] that assume limited storage and bandwidth. However, it is unclear how to optimize a specific routing metric using MaxProp, so we categorize it as an incidental routing protocol. Our experiments indicate that rapid significantly outperforms MaxProp for each considered metric.

Some theoretical works [38, 33, 31] derive closed-form expressions for the average delay and the number of replicas in the system as a function of the number of nodes and mobility patterns. Although these analyses contributed important insights to the design of rapid, their assumptions about mobility patterns or unlimited resources were, in our experience, too restrictive to be applicable to practical settings.

3. THE RAPID PROTOCOL

3.1 System model

We model a DTN as a set of mobile nodes. Two nodes transfer data packets to each other when within communication range. During a transfer, the sender replicates packets while retaining a copy. A node can deliver packets to a destination node directly or via intermediate nodes, but packets may not be fragmented. There is limited storage and transfer bandwidth available to nodes. Destination nodes are assumed to have sufficient capacity to store delivered packets, so only storage for in-transit data is limited. Node meetings are assumed to be short-lived.

Formally, a DTN consists of a node meeting schedule and a workload. The node meeting schedule is a directed multigraph G = (V, E), where V and E represent the set of nodes and edges, respectively. Each directed edge e between two nodes represents a meeting between them, and it is annotated with a tuple (t_e, s_e), where t_e is the time of the meeting and s_e is the size of the transfer opportunity.
The workload is a set of packets P = {(u_1, v_1, s_1, t_1), (u_2, v_2, s_2, t_2), ...}, where the ith tuple represents the source, destination, size, and time of creation (at the source), respectively, of packet i. The goal of a DTN routing algorithm is to deliver all packets using a feasible schedule of packet transfers, where feasible means that the total size of packets transferred during each opportunity is less than the size of the opportunity, always respecting storage constraints. In comparison to Jain et al. [19], who model link properties as continuous functions of time, our model assumes discrete, short-lived transfers; this makes the problem analytically more tractable and well characterizes many practical DTNs.

3.2 The case for a heuristic approach

Two fundamental reasons make the case for a heuristic approach to DTN routing. First, the inherent uncertainty of DTN environments rules out provably efficient online routing algorithms. Second, computing optimal solutions is hard even with complete knowledge about the environment. Both hardness results formalized below hold even for unit-sized packets and unit-sized transfer opportunities, and assume no storage restriction.

Theorem 1. Let ALG be a deterministic online DTN routing algorithm with unlimited computational power. (a) If ALG has complete knowledge of a workload of n packets, but not of the schedule of node meetings, then it is Ω(n)-competitive with an offline adversary with respect to the fraction of packets delivered. (b) If ALG has complete knowledge of the meeting schedule, but not of the packet workload, then it can deliver at most a third of the packets delivered by an optimal offline adversary.

Theorem 2. Given complete knowledge of node meetings and the packet workload a priori, computing a routing schedule that is optimal with respect to the number of packets delivered is NP-hard, with an Ω(√n) lower bound on approximability.
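The formal model of Section 3.1 can be sketched in code. The following is our own minimal illustration (all names are ours, not from the paper's implementation): a meeting schedule of directed edges annotated with (t_e, s_e), a workload of (source, destination, size, creation-time) tuples, and the feasibility condition on a single transfer opportunity.

```python
from collections import namedtuple

# Illustrative model: an edge of the meeting multigraph and a workload tuple.
Meeting = namedtuple("Meeting", "u v time size")       # edge annotated (t_e, s_e)
Packet = namedtuple("Packet", "src dst size created")  # workload tuple (u, v, s, t)

def feasible(packets_sent, meeting):
    """Feasibility for one opportunity: total bytes sent must fit its size."""
    return sum(p.size for p in packets_sent) <= meeting.size

m = Meeting("X", "Z", time=100, size=1500)
p1 = Packet("X", "Z", 1000, 0)
p2 = Packet("X", "Z", 600, 10)
# p1 alone fits in the 1500-byte opportunity; p1 and p2 together (1600 B) do not.
```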

D(i)    Packet i's expected delay = T(i) + A(i)
T(i)    Time since creation of i
a(i)    Random variable for the remaining time to deliver i
A(i)    Expected remaining time = E[a(i)]
M_XZ    Random variable for the inter-meeting time between nodes X and Z

Table 2: List of commonly used variables.

The proofs are outlined in the appendix, and formal proofs are presented in a technical report [3]. The hardness results naturally extend to the average delay metric for both the online and the computationally limited algorithms. Finally, traditional optimization frameworks for routing [14] and congestion control [21] based on fluid models appear difficult to extend to DTNs due to the inherently high feedback delay, the uncertainty about network conditions, and the discrete nature of transfer opportunities, which are more suited to transferring large bundles than small packets.

3.3 RAPID design

rapid models DTN routing as a utility-driven resource allocation problem. A packet is routed by replicating it until a copy reaches the destination. The key question is: given limited bandwidth, how should packets be replicated in the network so as to optimize a specified routing metric? rapid derives a per-packet utility function from the routing metric. At a transfer opportunity, it replicates the packet that locally results in the highest increase in utility.

Consider a routing metric such as minimize average delay of packets, the running example used in this section. The corresponding utility U_i of packet i is the negative of the expected delay to deliver i, i.e., the time i has already spent in the system plus the additional expected delay before i is delivered. Let δU_i denote the increase in U_i from replicating i, and let s_i denote the size of i. Then, rapid replicates the packet with the highest value of δU_i/s_i among packets in its buffer; in other words, the packet with the highest marginal utility.
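The core heuristic of ordering packets by marginal utility per byte, δU_i/s_i, can be illustrated with a short sketch (our own code; packet fields are hypothetical, not from the paper's implementation):

```python
# Illustrative sketch: at a transfer opportunity, replicate packets in
# decreasing order of marginal utility per byte, delta_U_i / s_i.
def replication_order(packets):
    """packets: list of (packet_id, delta_utility, size_bytes) tuples."""
    return [pid for pid, du, s in
            sorted(packets, key=lambda p: p[1] / p[2], reverse=True)]

# A small packet with a large utility gain goes first, even if other
# packets have larger absolute gains.
order = replication_order([("i", 10.0, 1000), ("j", 4.0, 200), ("k", 9.0, 1000)])
```

Here packet "j" leads with 4.0/200 = 0.02 utility per byte, ahead of "i" (0.01) and "k" (0.009).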
In general, U_i is defined as the expected contribution of i to the given routing metric. For example, the metric minimize average delay is measured by summing the delay of packets; accordingly, the utility of a packet is the negative of its expected delay. Thus, rapid is a heuristic based on locally optimizing marginal utility, i.e., the expected increase in utility per unit resource used. rapid replicates packets in decreasing order of their marginal utility at each transfer opportunity.

The marginal utility heuristic has some desirable properties. The marginal utility of replicating a packet to a node is low when (i) the packet has many replicas, or (ii) the node is a poor choice with respect to the routing metric, or (iii) the resources used do not justify the benefit. For example, if nodes meet each other uniformly, then a packet i with 6 replicas has lower marginal utility of replication than a packet j with just 2 replicas. On the other hand, if the peer is unlikely to meet j's destination for a long time, then i may take priority over j.

rapid has three core components: a selection algorithm, an inference algorithm, and a control channel. The selection algorithm is used to determine which packets to replicate at a transfer opportunity given their utilities. The inference algorithm is used to estimate the utility of a packet given the routing metric.

Protocol rapid(X, Y):
1. Initialization: Obtain metadata from Y about packets in its buffer and metadata Y collected over past meetings (detailed in Section 4.2).
2. Direct delivery: Deliver packets destined to Y in decreasing order of their utility.
3. Replication: For each packet i in node X's buffer:
   (a) If i is already in Y's buffer (as determined from the metadata), ignore i.
   (b) Estimate the marginal utility, δU_i, of replicating i to Y.
   (c) Replicate packets in decreasing order of δU_i/s_i.
4. Termination: End the transfer when out of radio range or when all packets have been replicated.
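Steps 2 and 3 of Protocol rapid can be sketched as follows. This is our own illustration, with hypothetical field names; it computes which packets X would deliver directly to Y and which it would replicate, skipping replicas Y already holds.

```python
# Illustrative sketch of Protocol rapid(X, Y), Steps 2-3: direct delivery in
# decreasing utility order, then replication of packets Y lacks in decreasing
# order of marginal utility per byte.
def transfer_plan(x_buffer, y_ids, y_dest):
    direct = sorted((p for p in x_buffer if p["dst"] == y_dest),
                    key=lambda p: p["utility"], reverse=True)
    candidates = [p for p in x_buffer
                  if p["dst"] != y_dest and p["id"] not in y_ids]
    replicate = sorted(candidates,
                       key=lambda p: p["marginal_utility"] / p["size"],
                       reverse=True)
    return [p["id"] for p in direct], [p["id"] for p in replicate]

buf = [{"id": 1, "dst": "Y", "utility": -5.0, "marginal_utility": 0.0, "size": 1},
       {"id": 2, "dst": "Z", "utility": -9.0, "marginal_utility": 3.0, "size": 100},
       {"id": 3, "dst": "Z", "utility": -2.0, "marginal_utility": 8.0, "size": 100},
       {"id": 4, "dst": "Z", "utility": -1.0, "marginal_utility": 6.0, "size": 100}]
direct, replicate = transfer_plan(buf, y_ids={4}, y_dest="Y")
# packet 1 is delivered directly; packet 4 is skipped (Y has it); 3 precedes 2.
```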
The control channel propagates the necessary metadata required by the inference algorithm.

3.4 The selection algorithm

The rapid protocol executes when two nodes are within radio range and have discovered one another. The protocol is symmetric; without loss of generality, we describe how node X determines which packets to transfer to node Y (see the box marked Protocol rapid). rapid also adapts to storage restrictions for in-transit data. If a node exhausts all available storage, packets with the lowest utility are deleted first, as they contribute least to overall performance. However, a source never deletes its own packet unless it receives an acknowledgment for the packet.

3.5 Inference algorithm

Next, we describe how Protocol rapid can support specific metrics using an algorithm to infer utilities. Table 2 defines the relevant variables.

Metric 1: Minimizing average delay. To minimize the average delay of packets in the network, we define the utility of a packet as

    U_i = -D(i)    (1)

since the packet's expected delay is its contribution to the performance metric. Thus, the protocol attempts to greedily replicate the packet whose replication reduces the delay the most among all packets in its buffer.

Metric 2: Minimizing missed deadlines. To minimize the number of packets that miss their deadlines, the utility is defined as the probability that the packet will be delivered within its deadline:

    U_i = P(a(i) < L(i) - T(i)) if L(i) > T(i), and 0 otherwise    (2)

where L(i) is the packet lifetime. A packet that has missed its deadline can no longer improve performance and is thus assigned a utility of 0. The marginal utility is the improvement in the probability that the packet will be delivered within its deadline, so the protocol replicates the packet that yields the highest improvement among packets in its buffer.

Metric 3: Minimizing maximum delay. To minimize the maximum delay of packets in the network, we define the utility U_i as

    U_i = -D(i) if D(i) >= D(j) for all j in S, and 0 otherwise    (3)

where S denotes the set of all packets in X's buffer. Thus, U_i is the negative expected delay if i is a packet with the maximum expected delay among all packets in the buffer. So, replication is useful only for the packet whose delay is maximum. For the routing algorithm to be work conserving, rapid computes the utility for the packet whose delay is currently the maximum; i.e., once the packet with the maximum delay is considered for replication, the utility of the remaining packets is recalculated using Eq. 3.

Algorithm Estimate_Delay(X, Q, Z): Node X with a set of packets Q destined to Z estimates the time A(i) until packet i in Q is delivered to Z, as follows:
1. Sort the packets in Q in decreasing order of T(i). Let b(i) be the sum of sizes of packets that precede i, and B the expected transfer opportunity in bytes between X and Z (refer Figure 1).
2. X by itself requires b(i)/B meetings with Z to deliver i. Compute the random variable M_X(i) for the corresponding delay as

    M_X(i) = M_XZ + M_XZ + ... (b(i)/B times)    (4)

3. Let X_1, ..., X_k be the set of nodes possessing a replica of i. Estimate the remaining time a(i) as

    a(i) = min(M_X1(i), ..., M_Xk(i))    (5)

4. Expected delay D(i) = T(i) + E[a(i)].

[Figure 1: Position of packet i in a queue of packets destined to Z: a sorted list in which b(i) bytes (the sum of the packets before i) precede i, with B bytes the average transfer size.]

[Figure 2 panels: (a) packets destined to Z buffered at different nodes W, X, and Y; (b) delay dependencies between packets destined to node Z.]

4. ESTIMATING DELIVERY DELAY

How does a rapid node estimate the expected delay in Eqs.
1 and 3, or the probability of packet delivery within a deadline in Eq. 2? The expected delivery delay is the minimum expected time until any node with a replica of the packet delivers the packet; so a node needs to know which other nodes possess replicas of the packet and when they expect to meet the destination. To estimate the expected delay, we assume that the packet is delivered directly to the destination, ignoring the effect of further replication. This estimation is nontrivial even with an accurate global snapshot of the system state. For ease of exposition, we first present rapid's estimation algorithm as if we had knowledge of the global system state, and then present a practical distributed implementation.

4.1 Algorithm Estimate_Delay

Algorithm Estimate_Delay works as follows. Each node X maintains a separate queue of packets Q destined to each node Z, sorted in decreasing order of T(i), or time since creation; this is the order in which they would be delivered directly (in Step 2 of Protocol rapid). Step 2 in Estimate_Delay computes the delay distribution for delivery of the packet by X, as if X were the only node carrying a replica of i. Step 3 computes the minimum across all replicas of the corresponding delay distributions, as the remaining time a(i) is the time until any one of those nodes meets Z.

Figure 2: Delay dependencies between packets destined to Z buffered in different nodes.

Estimate_Delay makes a simplifying independence assumption that does not hold in general. Consider Figure 2(a), an example showing the positions of packet replicas in the queues of different nodes; packets with the same letter and different indices are replicas. All packets have a common destination Z, and each queue is sorted by T(i). Assume that the size of each transfer opportunity is one packet. Packet b may be delivered in two ways: (i) if W meets Z, or (ii) if one of X and Y meets Z and then one of X and Y meets Z again.
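A small Monte Carlo sketch (our own illustration, assuming unit-rate exponential meeting times) shows what treating the two delivery paths for b as independent does: the estimate min(M_WZ, M_XZ + M_XZ, M_YZ + M_YZ) can only overstate the true delivery time min(M_WZ, min(M_XZ, M_YZ) + min(M_XZ, M_YZ)), never understate it.

```python
import random

# Illustrative check of the independence assumption for packet b in Figure 2:
# compare the independent-paths estimate with the true coupled minimum.
random.seed(1)
inflation = 0.0
for _ in range(10000):
    w1 = random.expovariate(1.0)                          # W meets Z
    x1, x2 = (random.expovariate(1.0) for _ in range(2))  # X meets Z, twice
    y1, y2 = (random.expovariate(1.0) for _ in range(2))  # Y meets Z, twice
    estimate = min(w1, x1 + x2, y1 + y2)                  # Estimate_Delay's view
    actual = min(w1, min(x1, y1) + min(x2, y2))           # true delivery time
    assert actual <= estimate                             # never underestimated
    inflation += estimate - actual
```

The pointwise inequality min(x1, y1) + min(x2, y2) <= min(x1 + x2, y1 + y2) guarantees the assertion; the accumulated inflation shows the estimate is strictly pessimistic on average.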
These delay dependencies can be represented using a dependency graph, as illustrated in Figure 2(b). A vertex corresponds to a packet replica. An edge from one node to another indicates a dependency between the delays of the corresponding packets. Recall that M_XY is the random variable that represents the meeting time between X and Y. Estimate_Delay ignores all the non-vertical dependencies. For example, it estimates b's delivery time distribution as

    min(M_WZ, M_XZ + M_XZ, M_YZ + M_YZ)

whereas the distribution is actually

    min(M_WZ, min(M_XZ, M_YZ) + min(M_XZ, M_YZ))

Although, in general, the independence assumption can arbitrarily inflate delay estimates (pathological cases are exemplified in the technical report [3]), it makes our implementation (i) simple: computing an accurate estimate is much more complex, especially when transfer opportunities are not unit-sized as above; and (ii) distributed: in practice, rapid does not have a global view, but Estimate_Delay can be implemented using a thin in-band control channel.

Exponential distributions. We walk through the distributed implementation of Estimate_Delay for a scenario where the inter-meeting time between nodes is exponentially distributed. Further, suppose all nodes meet according to a uniform exponential distribution with mean time 1/λ. In the absence of bandwidth restrictions, the expected delivery delay when there are k replicas is the mean meeting time divided by k, i.e., P(a(i) < t) = 1 - e^(-kλt) and A(i) = 1/(kλ). (Note that the minimum of k i.i.d. exponentials is also an exponential with mean 1/k of the mean of the i.i.d. exponentials [8].)

However, when transfer opportunities are limited, the expected delay depends on the packet's position in nodes' buffers. In Step 2 of Estimate_Delay, the time for some node X to meet the destination b(i)/B times is described by a gamma distribution with mean (1/λ) · b(i)/B. If packet i is replicated at k nodes, Step 3 computes the delay distribution a(i) as the minimum of k gamma variables. We do not know of a closed-form expression for the minimum of gamma variables. Instead, if we assume that the time taken for a node to meet the destination b(i)/B times is exponential with the same mean (1/λ) · b(i)/B, we can again estimate a(i) as the minimum of k exponentials as follows. Let n_1(i), n_2(i), ..., n_k(i) be the number of times each of the k nodes respectively needs to meet the destination to deliver i directly. Then A(i) is computed as:

    P(a(i) < t) = 1 - e^(-λ(1/n_1(i) + 1/n_2(i) + ... + 1/n_k(i)) t)    (6)
    A(i) = [λ(1/n_1(i) + 1/n_2(i) + ... + 1/n_k(i))]^(-1)

When the meeting time distributions between nodes are non-uniform, say with means 1/λ_1, ..., 1/λ_k respectively, then

    A(i) = [λ_1/n_1(i) + λ_2/n_2(i) + ... + λ_k/n_k(i)]^(-1)    (7)

Unknown mobility distributions. To estimate mean inter-node meeting times in the vehicular DTN testbed, every node tabulates the average time to meet every other node based on past meeting times.
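The non-uniform estimate A(i) = [λ_1/n_1(i) + ... + λ_k/n_k(i)]^(-1) (Eq. 7, in our notation) is a one-line computation. The sketch below is our own; it also sanity-checks the unconstrained case, where k replicas at the heads of their queues (all n_j = 1) with a uniform rate λ give A(i) = 1/(kλ).

```python
# Illustrative sketch of Eq. 7: expected remaining delay as the mean of the
# minimum of k exponentials with rates lambda_j / n_j(i).
def expected_remaining_delay(rates, meetings_needed):
    """rates: lambda_j = 1/E[M_{X_j Z}]; meetings_needed: n_j(i) per replica."""
    return 1.0 / sum(l / n for l, n in zip(rates, meetings_needed))

# Unconstrained case: 3 replicas, n_j = 1, uniform rate 0.5 -> A(i) = 1/(3 * 0.5).
a_uniform = expected_remaining_delay([0.5, 0.5, 0.5], [1, 1, 1])
# Single replica needing 2 meetings at rate 1 -> expected delay 2.
a_single = expected_remaining_delay([1.0], [2])
```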
Nodes exchange this table as part of the metadata exchanges (Step 1 in Protocol rapid). A node combines the metadata into a meeting-time adjacency matrix, and the information is updated after each transfer opportunity. The matrix contains the expected time for two nodes to meet directly, calculated as the average of past meetings.

Node X estimates E(M_XZ), the expected time to meet Z, using the meeting-time matrix. E(M_XZ) is estimated as the expected time taken for X to meet Z in at most h hops. (Unlike uniform exponential mobility models, some nodes in the trace never meet directly.) For example, if X meets Z via an intermediary Y, the expected meeting time is the expected time for X to meet Y and then Y to meet Z, in 2 hops. In our implementation we restrict h = 3. When two nodes never meet, even via three intermediate nodes, we set the expected inter-meeting time to infinity. Several DTN routing protocols [6, 23, 7] use similar techniques to estimate meeting probabilities among peers.

Let replicas of packet i destined to Z reside at nodes X_1, ..., X_k. Since we do not know the meeting time distributions, we simply assume they are exponentially distributed. Then, from Eq. 7, the expected delay to deliver i is

    A(i) = [ Σ_{j=1}^{k} 1 / (E(M_XjZ) · n_j(i)) ]^(-1)    (8)

Why exponential? The distribution of bus meeting times in the DTN testbed is very difficult to model. Buses change routes several times in one day, the inter-bus meeting distribution is noisy, and we found the times hard to model even using mixture models [3]. Approximating meeting times as exponentially distributed makes delay estimates easy to compute and performs well in practice.

4.2 Control channel

Previous studies [19] have shown that as nodes obtain more information about the global system state and the future from oracles, they can make significantly better routing decisions. We extend this idea to practical DTNs where no oracle is available.
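The h-hop meeting-time estimate can be sketched as a bounded-depth search over the averaged meeting-time matrix. The code below is our own illustration (function and variable names are hypothetical): it takes the direct-meeting averages, tries paths of up to h hops, and returns infinity for unreachable pairs, as described above.

```python
# Illustrative sketch: estimate E(M_XZ) as the cheapest way for X to reach Z
# in at most h hops through the meeting-time matrix (h = 3 in the deployment).
INF = float("inf")

def estimate_meeting_time(M, x, z, h=3):
    """M: dict mapping (a, b) to the average direct meeting time of a and b."""
    best = M.get((x, z), INF)  # direct meeting, if the pair ever met
    if h > 1:
        nodes = {a for a, _ in M} | {b for _, b in M}
        for y in nodes - {x, z}:
            via = M.get((x, y), INF)
            if via < INF:  # X meets Y, then Y reaches Z with one hop fewer
                best = min(best, via + estimate_meeting_time(M, y, z, h - 1))
    return best

# X and Z never meet directly, but X meets Y (5.0 on average) and Y meets Z (7.0).
M = {("X", "Y"): 5.0, ("Y", "Z"): 7.0}
```

With the default h = 3 this yields 12.0 for the pair (X, Z); with h = 1, no direct meeting exists and the estimate is infinity.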
To this end, rapid nodes gather knowledge about the global system state by disseminating metadata using a fraction of the transfer opportunity. rapid uses an in-band control channel to exchange acknowledgments for delivered packets as well as metadata about every packet learnt from past exchanges. For each encountered packet i, rapid maintains a list of nodes that carry a replica of i, and, for each replica, an estimated time for direct delivery. Metadata for delivered packets is deleted when an acknowledgment is received. For efficiency, a rapid node maintains the time of the last metadata exchange with each of its peers. The node only sends information about packets whose information has changed since the last exchange, which reduces the size of the exchange considerably. A rapid node sends the following information on encountering a peer:
- the average size of past transfer opportunities;
- expected meeting times with nodes;
- the list of packets delivered since the last exchange;
- for each of its own packets, the updated delivery delay estimate based on the current buffer state;
- information about other packets, if modified since the last exchange with the peer.

When using the control channel, nodes have only an imperfect view of the system. The propagated information may be stale due to changes in the number of replicas, changes in delivery delays, or because a packet was delivered but the acknowledgment has not yet propagated. Nevertheless, our experiments confirm that (i) this inaccurate information is sufficient for rapid to achieve significant performance gains over existing protocols, and (ii) the overhead of the metadata itself is minimal.

5. IMPLEMENTATION ON A VEHICULAR DTN TESTBED

We implemented and deployed rapid on a vehicular DTN testbed, DieselNet [6], consisting of 40 buses, of which a subset is on the road each day.
The implementation allowed us to meet the following objectives: (i) the routing protocol is a first step toward deploying realistic DTN applications on the testbed; and (ii) the deployment is subject to events that are not perfectly modeled in simulation, including delays caused by computation and wireless idiosyncrasies.

Each bus in DieselNet carries a small-form-factor desktop computer, 4 GB of storage, and a GPS device. The buses operate an 802.11b radio that periodically scans for other buses and an 802.11b access point (AP) that accepts incoming connections. Once a bus is found, a connection is created to the remote AP. (It is likely that the remote bus then creates a connection to the discovered AP, which our software merges into one connection event.) The connection lasts until the radios are out of range. Burgess et al. [6] describe the DieselNet testbed in more detail.

5.1 Deployment

Buses in DieselNet route packets using the rapid protocol described in Section 3, computing the metadata as described in Section 4.2. We generated packets of size 1 KB periodically on each bus with an exponential inter-arrival time. The destinations of the packets included only buses that were scheduled to be on the road, which avoided creating many packets that could never be delivered. We did not provide the buses information about the location or route of other buses on the road. We set the default packet generation rate to 4 packets per hour generated by each bus for every other bus on the road; since the number of buses on the road at any time varies, this is a simple way to express load. For example, when 20 buses are on the road, the default rate is 1,520 packets per hour. During the experiments, the buses logged packet generation, packet delivery, delivery delay, metadata size, and the total size of the transfer opportunity. Buses transferred random data after all routing was complete in order to measure the capacity and duration of each transfer opportunity. The logs were periodically uploaded to a central server using open Internet APs found by the road.

5.2 Performance of deployed RAPID

We measured the routing performance of rapid on the buses from Feb. 6, 2007 until May 14, 2007. The measurements are tabulated in Table 3.
We exclude holidays and weekends, since almost no buses were on the road, leaving 58 days of experiments. rapid delivered 88% of packets with an average delivery delay of about 91 minutes. We also note that overhead due to metadata accounts for less than 0.2% of the total available bandwidth and less than 1.7% of the data transmitted.

Table 3: Deployment of rapid: average daily statistics
  Avg. buses scheduled per day:         19
  Avg. total bytes transferred per day: — MB
  Avg. number of meetings per day:      —
  Percentage delivered per day:         88%
  Avg. packet delivery delay:           91.7 min
  Metadata size / bandwidth:            0.002
  Metadata size / data size:            0.017

5.3 Validating the trace-driven simulator

In the next section, we evaluate rapid using a trace-driven simulator. The simulator takes as input a schedule of node meetings, the bandwidth available at each meeting, and a routing algorithm. We validated our simulator by comparing simulation results against the 58 days of measurements from the deployment. In the simulator, we generate packets under the same assumptions as the deployment, using the same parameters for exponentially distributed inter-arrival times. Figure 3 shows the average delay characteristics of the real system and the simulator. Delays measured using the simulator were averaged over 3 runs, and the error bars show a 95% confidence interval. From those results and further analysis, we find with 95% confidence that the simulator results are within 1% of the implementation measurement of average delay. The close correlation between system measurement and simulation increases our confidence in the accuracy of the simulator. (The traces themselves are publicly available.)

Figure 3: (Trace) Average delay for 58 days of the real rapid deployment compared to simulation of rapid using traces.

6. EVALUATION

The goal of our evaluations is to show that, unlike existing work, rapid can improve performance for customizable metrics.
We evaluate rapid using three metrics: minimize maximum delay, minimize average delay, and minimize missed deadlines. In all cases, we found that rapid significantly outperforms existing protocols, and also performs close to optimal for our workloads.

6.1 Experimental setup

Our evaluations are based on a custom event-driven simulator, as described in the previous section. The meeting times between buses in these experiments are not known a priori. All values used by rapid, including average meeting times, are learned during the experiment.

We compare rapid to five other routing protocols: MaxProp [6], Spray and Wait [32], Prophet [23], Random, and Optimal. In all experiments, we include the cost of rapid's in-band control channel for exchanging metadata. MaxProp operates in a storage- and bandwidth-constrained environment, allows packet replication, and leverages delivery notifications to purge old replicas; of recent related work, it is closest to rapid's objectives. Random replicates randomly chosen packets for the duration of the transfer opportunity. Spray and Wait restricts the number of replications of a packet to L, where L is calculated based on the number of nodes in the network; for our simulations, we implemented binary Spray and Wait and set L accordingly.(3) We implemented Prophet with parameters P_init = 0.75, β = 0.25 and γ = 0.98 (parameter values based on those used in [23]).

(3) We set this value in consultation with the authors and using Lemma 4.3 in [32] with a = 4.

Figure 4: (Trace) Average delay: rapid has up to 20% lower delay than MaxProp and up to 35% lower delay than Random.
Figure 5: (Trace) Delivery rate: rapid delivers up to 14% more packets than MaxProp, 28% more than Spray and Wait, and 45% more than Random.
Figure 6: (Trace) Max delay: the maximum delay of rapid is up to 90 min lower than that of MaxProp, Spray and Wait, and Random.

Table 4: Experiment parameters
                               Power law     Trace-driven
  Number of nodes              20            max of 40
  Buffer size                  1 KB          4 GB
  Average transfer opp. size   1 KB          given by real transfers among buses
  Duration                     15 min        19 hours each trace
  Size of a packet             1 KB          1 KB
  Packet generation rate       5 sec mean    1 hour
  Delivery deadline            2 sec         2.7 hours

We also compare rapid to Optimal, an optimal routing protocol that provides an upper bound on performance. We also perform experiments where mobility is modeled as a power law distribution; previous studies [9, 22] have suggested that DTNs among people have a skewed, power law inter-meeting time distribution. The default parameters used for all the experiments are tabulated in Table 4. The parameters for the power law mobility model differ from the trace-driven model because results from the two models are not directly comparable. Each data point is averaged over 10 runs; in the case of trace-driven results, the results are averaged over the 58 traces. Each of the 58 days is a separate experiment; in other words, packets that are not delivered by the end of the day are lost. In all experiments, MaxProp, rapid, Spray and Wait, and Random performed significantly better than Prophet, which is not shown in the graphs for clarity; in all trace experiments, Prophet performed worse than the other routing protocols for all loads and all metrics.

6.2 Results based on testbed traces

6.2.1 Comparison with existing routing protocols

Our experiments show that rapid consistently outperforms MaxProp, Spray and Wait, and Random.
We increased the load in the system up to 40 packets per hour per destination, at which point Random delivers less than 50% of the packets. Figure 4 shows the average delay of delivered packets using the four protocols for varying loads when rapid's routing metric is set to minimize average delay (Eq. 1). When using rapid, the average delay of delivered packets is significantly lower than with MaxProp, Spray and Wait, and Random. Moreover, rapid also consistently delivers a greater fraction of packets, as shown in Figure 5. Figure 6 shows rapid's performance when the routing metric is set to minimize maximum delay (Eq. 3), and similarly Figure 7 shows results when the metric is set to maximize the number of packets delivered within a deadline (Eq. 2). We note that among MaxProp, Spray and Wait, and Random, MaxProp delivers the most packets, but Spray and Wait has marginally lower average delay than MaxProp. rapid significantly outperforms the three protocols for all metrics because of its intentional design.

Standard deviation and similar measures of variance are not appropriate for comparing the mean delays, as each bus takes a different geographic route. So we performed a paired t-test [8] to compare the average delay of every source-destination pair using rapid to the average delay of the same source-destination pair using MaxProp (the second-best performing protocol). In our tests, we found p-values always less than 0.05, indicating that the differences between the means reported in these figures are statistically significant.

6.2.2 Metadata exchange

We allow rapid to use as much bandwidth at the start of a transfer opportunity for exchanging metadata as it requires. To see whether this approach was wasteful or beneficial, we performed experiments where we limited the total metadata exchanged. Figure 8 shows the average delay performance of rapid when metadata is limited as a percentage of the total bandwidth.
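The paired t-test of Section 6.2.1 can be computed directly from the per-pair delay differences; the numbers below are illustrative, not the paper's measurements.

```python
import math
from statistics import mean, stdev

# Paired t-test sketch (illustrative data, not the paper's measurements):
# compare per source-destination-pair mean delays under two protocols.
def paired_t(a, b):
    """t statistic for paired samples a and b, with n-1 degrees of freedom."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

delays_a = [42.0, 55.0, 61.0, 48.0, 70.0]  # per-pair mean delay, protocol A
delays_b = [51.0, 63.0, 72.0, 55.0, 84.0]  # same pairs, protocol B
t = paired_t(delays_a, delays_b)
# |t| is then compared against the t distribution with n-1 degrees of
# freedom; a large enough |t| corresponds to p < 0.05.
print(round(t, 2))
```

Pairing by source-destination pair removes the per-route variation that makes raw standard deviations misleading here: each difference compares the two protocols on the same geographic route.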
The results show that performance increases as the limit is raised, and the best performance results when there is no restriction on metadata at all. The performance of rapid with complete metadata exchange improves by 20% compared to when no metadata is exchanged.

The metadata in this experiment is represented as a percentage of the available bandwidth. In the next experiment, we analyze total metadata as a percentage of data. In particular, we increase the load to 75 packets per destination per hour to analyze the trend in terms of bandwidth utilization, delivery rate, and metadata. Figure 9 shows this trend as load increases. The bandwidth utilization is about 35% for a load of 75 packets per hour per destination, while the delivery rate is only about 65%. This suggests that performance drops even though the network is under-utilized, because of bottleneck links in the network; the available bandwidth varies significantly across transfer opportunities in our bus traces [6]. We also observe that metadata increases to about 4% of data for high loads. This is an order of magnitude higher

than the metadata observed as a fraction of bandwidth, again because of the poor channel utilization. The average metadata exchanged per contact is proportional to the load and the channel utilization. However, metadata enables efficient routing and helps remove copies of packets that have already been delivered, increasing the overall performance of rapid. Moving from 1 KB to 10 KB packets would reduce rapid's metadata overhead by another order of magnitude.

Figure 7: (Trace) Delivery within deadline: rapid delivers up to 21% more packets than MaxProp, 24% more than Spray and Wait, and 28% more than Random.
Figure 8: (Trace) Control channel benefit: average delay performance improves as more metadata is allowed to be exchanged.
Figure 9: (Trace) Channel utilization: as load increases, the delivery rate decreases to 65% but channel utilization is only about 35%.

6.2.3 Hybrid DTN with thin continuous connectivity

In this section, we compare the performance of rapid using an instant global control channel for exchanging metadata as opposed to the default (delayed) in-band control channel. Figure 10 shows the average delay of rapid when using an in-band control channel compared to a global channel. We observe that the average delay decreases by up to 20 minutes when using a global channel. Similarly, from Figure 11 we observe that the percentage of packets delivered within a deadline increases by an average of 20% using a global channel. This observation suggests that rapid's performance can benefit further from more control information. One interpretation of the global channel is the use of rapid as a hybrid DTN where all control traffic goes over a low-bandwidth, long-range radio such as XTEND [4].
A hybrid DTN would use the high-cost, low-bandwidth channel for control whenever available and the low-cost, high-bandwidth, delayed channel for data. In our experiments, we assumed that the global channel is instant. While this may not be feasible in practice, the results give an upper bound on rapid's performance when accurate channel information is available.

6.2.4 Comparison with Optimal

We compare rapid to Optimal, which is an upper bound on performance. To obtain the optimal delay, we formulate the DTN routing problem as an Integer Linear Program (ILP) optimization problem in which the meeting times between nodes are precisely known. The optimal solution does not use replication, assumes that the propagation delays of all links are equal, and assumes that node meetings are known in advance. We present a formulation of this problem in the technical report [3]. Our evaluations use the CPLEX solver [11]. Because the solver grows in complexity with the number of packets, these simulations are limited to only 6 packets per hour per destination. Jain et al. [19] solve a more general DTN routing problem by allowing packets to be fragmented across links and assigning non-zero propagation delays to links; however, this limited the size of the network they could evaluate even more.

Our ILP objective function minimizes the delay of all packets, where the delay of an undelivered packet is set to the time the packet spent in the system. Accordingly, we add the delay of undelivered packets when presenting the results for rapid and MaxProp. Figure 12 presents the average delay performance of Optimal, rapid, and MaxProp. We observe that for small loads, the performance of rapid using the in-band control channel is within 10% of the optimal performance, while with MaxProp the delays are about 22% from optimal. rapid using a global channel performs within 6% of optimal.

6.2.5 Evaluation of rapid components

rapid comprises several components that all contribute to performance.
We ran experiments to study the value added by each component. Our approach is to compare subsets of the full rapid protocol, cumulatively adding components starting from Random. The components are (i) Random with acks: Random plus propagation of delivery acknowledgments; and (ii) rapid-local: rapid, but nodes exchange metadata only about packets in their own buffers. Figure 13 shows the performance of the different components of rapid when the routing metric is set to minimize average delay. From the figure we observe that using acknowledgments alone improves performance by an average of 8%. In our previous work on MaxProp [6], we showed empirically that propagating acknowledgments clears buffers, avoids exchanges of already-delivered packets, and improves performance. In addition, rapid-local provides a further improvement of 10% on average, even though metadata exchange is restricted to packets in the node's local buffer. Allowing all metadata to flow further improves performance by about 11%.

6.3 Results from synthetic mobility models

Next, we use a power law mobility model to compare the performance of rapid to MaxProp, Random, and Spray and Wait. When mobility is modeled using a power law, two nodes meet with an exponential inter-meeting time, but the mean of the exponential distribution is determined by the popularity of the nodes. For the 20 nodes, we randomly set a popularity value of 1 to 20, with 1 being the most popular. The mean of the power law mobility model is set to 0.3 seconds and is skewed for each pair of nodes according to their popularity. Figure 14 shows the maximum delay of packets when the load is varied (i.e., rapid is set to use Eq. 3 as the metric).
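The popularity-skewed exponential meeting model described above can be sketched as follows; the exact scaling of the mean by popularity is an assumption for illustration, not the paper's parameterization.

```python
import random

# Sketch of the popularity-skewed mobility model described above. Each node
# gets a popularity rank (1 = most popular); a pair's expected inter-meeting
# time grows with the ranks. The product scaling below is an assumption for
# illustration, not the paper's exact parameterization.
random.seed(1)
NUM_NODES = 20
BASE_MEAN = 0.3  # base mean inter-meeting time (seconds)

popularity = {n: random.randint(1, NUM_NODES) for n in range(NUM_NODES)}

def next_meeting_delay(u, v):
    """Sample an exponential inter-meeting time for nodes u and v,
    with the mean skewed by how unpopular the pair is."""
    mean_uv = BASE_MEAN * popularity[u] * popularity[v]
    return random.expovariate(1.0 / mean_uv)  # expovariate takes a rate

# Popular pairs meet often; unpopular pairs rarely.
delays = [next_meeting_delay(0, 1) for _ in range(1000)]
print(sum(delays) / len(delays))  # close to BASE_MEAN * pop[0] * pop[1]
```

A trace generator built this way produces the skewed contact pattern the section describes: a few popular pairs account for most meetings, while unpopular pairs see long gaps.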

Figure 10: (Trace) Global channel: the average delay of rapid decreases by up to 20 minutes using an instant global control channel.
Figure 11: (Trace) Global channel: packets delivered within the deadline increase by about 15% using an instant global control channel.
Figure 12: (Trace) Comparison with Optimal: the average delay of rapid is within 10% of Optimal for small loads.
Figure 13: (Trace) rapid components: flooding acks decreases average delay by about 8%, and rapid further decreases average delay by about 30% over Random with acks.
Figure 14: (Powerlaw) Max delay: rapid's max delay is about 30% lower than MaxProp, 35% lower than Spray and Wait, and 45% lower than Random.
Figure 15: (Powerlaw) Constrained buffer size: rapid delivers about 20% more packets than MaxProp, and 45% more packets than Spray and Wait and Random.

rapid reduces maximum delay by over 30% compared to the other protocols. For both the traces and the synthetic mobility model, the performance of rapid is significantly better than MaxProp, Spray and Wait, and Random for the maximum delay metric. The reason is that MaxProp prioritizes new packets; older, undelivered packets see no service as load increases. Similarly, Spray and Wait does not give preference to older packets. rapid, in contrast, specifically prioritizes older packets to reduce maximum delay.

Figure 15 shows how constrained buffers, varied from 1 KB to 28 KB, affect the delivery deadline metric for a fixed load of 2 packets per node per destination every 5 seconds.
rapid is best able to manage limited buffers to deliver packets within a deadline. When storage is restricted, MaxProp deletes the packets that have been replicated the most times, while Spray and Wait and Random delete packets randomly. rapid, when set to maximize the number of packets delivered within a deadline, deletes the packets that are most likely to miss the deadline, and is thereby able to improve performance significantly.

Similar trends were observed for the other routing metrics with increasing load and decreasing buffer size, and when node meeting times were modeled as an exponential distribution. These results are presented in the technical report [3].

6.4 Limitations

The above experiments show that rapid performs well from many viewpoints. However, there are limitations to our approach. The heuristics we use are sub-optimal solutions, and although they seek to maximize specific utilities, we can offer no performance guarantees. Our estimations of delay are based on simple, tractable distributions. Finally, we note that our implementation of rapid shows that the protocol can be deployed efficiently and effectively; however, in other DTN scenarios or testbeds, mobility patterns may be more difficult to learn. In future work, we believe a more sophisticated estimation of delay will improve our results, perhaps bringing us closer to performance guarantees. The release of an implementation of rapid will enable us to enlist others to deploy rapid on their DTNs, diversifying results to other scenarios.

7. CONCLUSIONS

Previous work on DTN routing protocols has seen only incidental performance improvements from various routing mechanisms and protocol design choices. In contrast, we have proposed a routing protocol for DTNs that intentionally maximizes the performance of a specific routing metric. Our protocol, rapid, treats DTN routing as a resource allocation problem, making use of an in-band control channel to propagate metadata.
Although our approach is heuristic, we have proven that in practice a DTN routing protocol lacks sufficient information to solve the routing problem optimally. Moreover, we have shown that even when complete knowledge is available, solving the DTN routing problem optimally is NP-hard. Our deployment of rapid in a DTN testbed illustrates that our approach is realistic and effective. We have shown through trace-driven simulation using 58 days of testbed measurements that rapid yields significant performance gains over many existing protocols.

intermediate nodes u_1, ..., u_n that each subsequently meet a unique destination from among v_1, ..., v_n at time T_2. S must deliver n packets p_1, ..., p_n destined respectively to v_1, ..., v_n.

Figure 16: Theorem 1(a) construction: solid arrows represent node meetings ALG knows a priori, while dotted arrows represent meetings ADV generates and reveals subsequently. ALG is effectively forced to guess the permutation corresponding to the latter set of meetings between u_1, ..., u_n and v_1, ..., v_n. The best strategy for ALG is to replicate one packet to all of the nodes u_1, ..., u_n, and this strategy allows it to deliver exactly one packet. ADV, on the other hand, knows the latter n meetings a priori and can therefore route all n packets to their respective destinations.

Theorem 1(b). If ALG has complete knowledge of the meeting schedule, but not of the packet workload, then it can deliver at most a third of the packets compared to an optimal offline adversary.

Proof outline. We construct an offline adversary, ADV, that incrementally generates a packet workload by observing ALG's transfers at each step. Consider the scenario in Figure 17, which we refer to as the basic gadget. The source S initially has two packets p_1, p_2 that it must deliver respectively to destinations v_1 and v_2 via the intermediate nodes v'_1 and v'_2. Each solid arrow represents a unit-sized transfer opportunity between the corresponding nodes. ADV can use this gadget to force ALG to drop two packets by creating two additional packets p'_1 and p'_2, while ADV itself delivers all four packets, as follows. If ALG transfers p_1 to v'_1 and p_2 to v'_2 at time T_1, then ADV generates two more packets at time T_2: p'_2 at v'_1 destined to v_2, and p'_1 at v'_2 destined to v_1. ALG is forced to drop one of the two packets at both v'_1 and v'_2. Instead, if ALG chooses to transfer p_1 to v'_2 and p_2 to v'_1, ADV simply chooses the opposite strategy, putting ALG in the same predicament. The basic gadget can be extended by composing two additional basic gadgets to force ALG to deliver at most 2/5 of the packets. We show in the full proof that by constructing a gadget of depth i, ADV can force ALG to deliver at most an i/(3i − 1) fraction of the packets.

Figure 17: Theorem 1(b) basic gadget: solid arrows represent node meetings ALG knows a priori, while dotted squiggly arrows represent packets ADV generates and reveals subsequently.

It is an open question whether there exists a constant-competitive online algorithm that knows the complete node meeting schedule, but not the packet workload. The proofs above suggest, but do not prove, that not knowing the schedule is more damning than not knowing the workload.

Theorem 2. Given complete knowledge of node meetings and the packet workload a priori, computing a routing schedule that is optimal with respect to the number of packets delivered is NP-hard, with an Ω(n^{1/2−ε}) lower bound on approximability.

Proof outline. We show that the DTN routing problem given complete knowledge (of both the node meeting schedule and the packet workload) is NP-hard by reducing the edge-disjoint paths (EDP) problem to the DTN routing problem. The EDP problem for a DAG is known to be NP-hard [1]. The EDP problem is: given a DAG G = (V, E) and source-destination pairs {(r_1, d_1), ..., (r_s, d_s)}, compute the largest subset of the source-destination pairs with edge-disjoint paths between them. The decision version of the EDP problem asks: is there a subset of k source-destination pairs with edge-disjoint paths between them? We reduce this problem to the decision version of the DTN routing problem.
To reduce the EDP problem on G = (V, E) to the DTN routing problem (see Section 3.1), we topologically sort the DAG G in polynomial time and label the edges of the DAG such that the labels strictly increase along every directed path. The label function l maps every edge to a natural number. We map the vertices V to the nodes in the DTN network, and we map each edge e = (u, v) to the transfer opportunity (u, v, 1, l(e)); i.e., nodes u and v have a unit-sized transfer opportunity at time l(e). We map each source-destination pair (r_i, d_i) to a packet p_i from source r_i to destination d_i, with unit size and created at time 0. Since the transfer opportunities are unit-sized, at most one packet can be transferred using each opportunity. A solution to the DTN routing problem transfers p_i from r_i to d_i using a series of opportunities; the path formed by the transfers is a valid edge-disjoint path between the corresponding source-destination pair r_i and d_i in the DAG G of the EDP problem. Using this mapping, we reduce EDP to the DTN routing problem.

The reduction above is a true reduction in the following sense: each successfully delivered DTN packet corresponds to an edge-disjoint path, and vice versa. Thus, an optimal solution for one exactly corresponds to an optimal solution for the other. We can show formally that this reduction is an L-reduction [28]. Consequently, the Ω(n^{1/2−ε}) lower bound known for the hardness of approximating the EDP problem [16] holds for the DTN routing problem as well.
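The mapping in the reduction above can be made concrete with a short sketch (illustrative code, not part of the paper):

```python
# Sketch of the EDP-to-DTN mapping described above: topologically sort the
# DAG, label edges with strictly increasing times along every path, and emit
# one unit-sized transfer opportunity per edge plus one unit-sized packet
# per source-destination pair. (Illustrative code, not from the paper.)
from graphlib import TopologicalSorter

def edp_to_dtn(vertices, edges, pairs):
    """edges: list of (u, v); pairs: list of (r_i, d_i).
    Returns (transfer_opportunities, packets) for the DTN instance."""
    preds = {v: set() for v in vertices}
    for u, v in edges:
        preds[v].add(u)
    order = {v: i for i, v in enumerate(TopologicalSorter(preds).static_order())}
    # Sorting edges by the topological ranks of their endpoints guarantees
    # that labels strictly increase along every directed path.
    labeled = sorted(edges, key=lambda e: (order[e[0]], order[e[1]]))
    opportunities = [(u, v, 1, t + 1) for t, (u, v) in enumerate(labeled)]
    packets = [{"src": r, "dst": d, "size": 1, "created": 0} for r, d in pairs]
    return opportunities, packets

opps, pkts = edp_to_dtn(
    vertices=["a", "b", "c", "d"],
    edges=[("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")],
    pairs=[("a", "d")],
)
```

Because an edge into a later vertex always gets a larger label than any edge out of an earlier one, a packet can traverse a path in the DAG exactly when the corresponding opportunities occur in increasing time order, which is what makes the correspondence between delivered packets and edge-disjoint paths exact.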
