DUP: Dynamic-tree Based Update Propagation in Peer-to-Peer Networks


Dynamic-tree Based Update Propagation in Peer-to-Peer Networks

Liangzhong Yin and Guohong Cao
Department of Computer Science & Engineering
The Pennsylvania State University
University Park, PA {yin,

Abstract. Peer-to-peer networks have received considerable attention due to properties such as scalability, availability, and anonymity. In peer-to-peer networks, indices are used to map data ids to the nodes that host the data. Previous work showed that the performance of locating data in peer-to-peer networks can be improved by passively caching passing-by indices. The performance can be further improved by actively pushing indices to interested nodes. This paper proposes a Dynamic-tree based Update Propagation (DUP) scheme to propagate indices in peer-to-peer networks. DUP dynamically builds an update propagation tree to facilitate the propagation of indices. Because the update propagation tree involves only nodes that are essential for update propagation, the overhead of DUP is very small. Simulation results show that DUP not only reduces the overall cost of index propagation and index access, but also results in much lower index query latency compared to existing schemes.

I. INTRODUCTION

In peer-to-peer networks, one important problem is to locate data (content) among nodes throughout the Internet. This problem is complicated by the facts that the number of nodes in peer-to-peer networks is huge and that their contents change from time to time. Traditional peer-to-peer networks such as Napster [1] use centralized servers to map a data id to the node that hosts the data. Clearly, this approach creates a single point of failure and does not scale well. To address this problem, distributed approaches such as CAN [2] and Chord [3] have been proposed. These approaches distribute the mapping throughout the network so that locating data can be performed in a distributed fashion.
When a node needs to locate a data object, its request is routed through the network to search for the node that maintains the mapping information for that object. An index that indicates the address of the node hosting the data is then sent back to the requesting node, which retrieves the data from the hosting node directly.

This work was supported in part by the National Science Foundation (CAREER CCR and ITR ).

Peer-to-peer networks can be divided into two categories according to their index searching methods: structured networks [2], [3], [4], [5] and unstructured networks [6], [7], [8], [9]. In structured networks, queries for indices are routed along a well-defined path to reach the node that maintains the mapping information for the requested data. These search paths form a tree, which is called the index search tree. In unstructured networks, there are no well-defined query search paths; requests are sent out using schemes such as flooding. This paper is based on structured peer-to-peer networks.

To reduce the index query latency, an index can be cached by intermediate nodes along the query path so that those nodes can serve later queries for the index [2], [3], [6], [10], [11]. There are two widely used cache consistency models: weak consistency [12] and strong consistency [13], [14]. In the weak consistency model, stale indices might be returned to the requesting node. In the strong consistency model, after an update, no stale copy of the modified data will ever be returned to the requesting node. The commonly used weak consistency mechanism is TTL-based (Time-To-Live) [12], in which a requesting node considers a cached copy up-to-date if its TTL has not expired. For strong cache consistency, invalidation-based and polling-based approaches are used. In the invalidation-based approach, the server keeps track of all nodes that cache the data, and sends invalidation messages to them when the data is changed.
In the polling-based approach, every time a node requests a data item and there is a cached copy, it first contacts the server to validate the cached copy. Since the polling-based approach may generate significant network traffic [15], the TTL-based approach is widely used for the weak cache consistency model and the invalidation-based approach is used for the strong cache consistency model.

Most previous schemes for index caching in peer-to-peer networks are based on the weak cache consistency model. When an index passes by a node, it is cached by that node. A Time-To-Live (TTL) timer is associated with the index, and the index is removed from the cache after its TTL expires. Such an index caching scheme is referred to as the Path Caching with Expiration (PCX) scheme. This scheme has two drawbacks. First, a cached index is not usable after the TTL expires, even if the index has not been updated. Second, an index may be updated before the TTL expires, but nodes caching the index may not know and still use the stale index.

The cache performance can be improved by actively maintaining cache consistency, so that a new query can always use the cached indices. However, the invalidation-based approach cannot be directly applied to peer-to-peer networks due to scalability issues. Since there is a huge number of nodes in peer-to-peer networks, it is impossible for the node that maintains the mapping information to keep track of all nodes caching the index. Therefore, the task of tracking caching nodes and pushing updates should be distributed in the network. Further, because the index size is very small, the updated index itself should be sent during cache invalidation so that caching nodes need not request the updated index again.

Following these ideas, we propose a Dynamic-tree based Update Propagation (DUP) scheme. In DUP, a dynamic update propagation tree is constructed on top of the existing index search tree. This propagation tree contains only those nodes that are either interested in the index or essential for propagating the updates. By propagating the updates along the tree, the index query cost is reduced and the performance is improved.

The rest of the paper is organized as follows: Section II introduces the system model and analyzes existing schemes. In Section III, we present the technical details of DUP. Section IV evaluates the proposed scheme through extensive simulations. Section V discusses the related work, and Section VI concludes the paper.

II. PRELIMINARIES

A.
System Model

In peer-to-peer networks, a data object can be searched for by its name, usually called its Key. In a structured peer-to-peer network, the network relies on a hash function to map the key to a virtual space. Each node in the network is responsible for part of this virtual space and maintains the (key, value) pair for all keys that fall into its area. The value in the pair indicates the nodes that host the data corresponding to the key. A node is the Authority Node of the (key, value) pairs it maintains.

Data is inserted into or removed from nodes in the network from time to time, and nodes may join or leave the network at any time. When such a change happens, the node that hosts the data should inform the authority node. It also needs to send keep-alive messages periodically to the authority node to deal with node failures. The authority node updates the index, i.e., the (key, value) pair, whenever it receives an update message or considers the node hosting the data dead because no keep-alive message arrived from that node for a specific amount of time.

B. The CUP Update Propagation Scheme

When the index is updated by the authority node, the update should be propagated to the nodes that cache the index to reduce the query latency. Roussopoulos and Baker [11] proposed a Controlled Update Propagation (CUP) scheme which actively pushes updated indices to interested nodes along the index search tree. In this scheme, each node records the interests of its neighboring nodes in the index search tree and pushes updated indices to them when necessary. Based on the benefit and the overhead of pushing the updates, each node determines whether to push the index update further down the tree. As a result, an index is pushed hop-by-hop following the index search tree to reach interested nodes.

Figure 1. An index search tree for key K.

Figure 1 can be used as an example to analyze the performance of CUP.
As suggested in [3], [11], the number of hops that a packet needs to travel can be used to represent the cost. Node N1 is the authority node for key K. Suppose the updated index of K has been pushed to node N5. One of the following three cases is possible when the index is pushed from N5 to N6:

1) N6 does not access the index before the index's TTL expires. In this case, the index is not useful due to TTL expiration. Therefore, pushing the index increases the cost by one hop.

2) N6 accesses the index exactly once before it expires. In this case, pushing the index to N6 reduces the cost by 50%: it costs one hop to push the index to N6, while if the index is not pushed to N6, it costs two hops to send the query to N5 and get the reply.

3) N6 accesses the index more than once before it expires. Intuitively, pushing the index can further reduce the cost in this case. However, this may not be true if N6 is able to cache the index. When the index is not pushed from N5 to N6, the first query from N6 needs to spend two hops to send the query to N5 and get the reply. Then the subsequent queries are served directly by N6, as it now has the index in its cache. Therefore, pushing the index can at most reduce the cost by 50% when nodes cache their requested data.

CUP has performance limitations since it pushes the update along the query path. Intermediate nodes along the path receive the updated index even if they do not need it. For example, in Figure 1, if N6 is the only node that is interested in the index update, the index is still pushed through the path N2-N3-N5-N6. If intermediate nodes decide to stop forwarding the index, N6 is cut off from the update information. This incurs long delay and high cost when N6 needs to access the index.

III. DYNAMIC-TREE BASED UPDATE PROPAGATION

Although CUP performs better than PCX, its performance improvement is limited because updates are always propagated through the query path in CUP. To propagate index updates efficiently, we propose a Dynamic-tree based Update Propagation (DUP) scheme.

A. Overview of DUP

The idea of DUP can be explained by Figure 2. Suppose only N6 is interested in the index. When an update happens at N1, N1 pushes it directly to N6. As the peer-to-peer network is an overlay network on top of the Internet, the physical distance between N1 and N6 is not necessarily much longer than that between N1 and N2. Such a direct push can significantly improve the performance: it costs only one hop to push the update. If the update is not pushed to N6, it costs eight hops for N6 to send the request and get the index from N1 in PCX. Therefore, the cost is reduced by 87.5%. This direct push is illustrated by the solid arrow in Figure 2 (a).
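The hop arithmetic above can be sketched as follows; the path length is taken from the example, where N6 is four tree hops from the root N1, and the cost model (overlay hops traveled) follows the paper's analysis. This is an illustrative sketch under those assumptions, not the authors' simulator.

```python
def pcx_cost(hops_to_root, accesses):
    """PCX: the first query travels to the root and back; if the node caches
    the reply, later queries before the TTL expires are served locally."""
    return 2 * hops_to_root if accesses > 0 else 0

def cup_cost(path_len):
    """CUP: the update is pushed hop-by-hop along the index search tree."""
    return path_len

def dup_cost():
    """DUP: the root pushes the update directly to the interested node,
    costing a single overlay hop."""
    return 1

# Figure 2 (a): N6 is 4 tree hops from the root N1 and accesses the index once.
print(pcx_cost(4, 1))  # 8 hops: request up to N1, index back down
print(cup_cost(4))     # 4 hops along N1-N2-N3-N5-N6
print(dup_cost())      # 1 hop; an 87.5% reduction versus PCX
```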
The dynamic update propagation tree (DUP tree) contains only N1 and N6. Later, if N4 is also interested in the index (see Figure 2 (b)), N1 pushes the index to N3, the nearest common parent of N4 and N6. Then N3 is in charge of pushing the index to N4 and N6. The new DUP tree, which is linked by the solid arrows, contains N1, N3, N4, and N6. Compared to PCX and CUP, this scheme costs only three hops, while PCX costs ten hops and CUP costs five hops to serve N4's and N6's queries. Our scheme performs better because it takes short-cuts when pushing the updates. In the worst case, when no short-cut is available, our scheme falls back to CUP and still performs well.

When a node needs to access an index that is not in its local cache, it sends a request to the root through the requester's index search path. Along the path, the first node that has a valid copy of the index serves the query by sending the index along the reverse path. When an update occurs at the root, it pushes the update to its downstream nodes in the DUP tree. Each node that receives the updated index refreshes its cache and repeats the pushing process. Finally, all nodes in the DUP tree receive the updated index and the update propagation ends.

B. Maintaining the DUP Tree

In this section, we present techniques to maintain the DUP tree. Note that the overhead of maintaining the DUP tree is at most equal to that of maintaining the CUP tree. Both schemes utilize the underlying index search tree, which is necessary for peer-to-peer networks even if DUP or CUP is not adopted. Compared to CUP, our scheme is more efficient since it reduces the number of nodes participating in the update propagation process.

Each node needs to do some book-keeping to support DUP. The following information needs to be maintained for the update propagation.

Access Tracking Information: Each node maintains its access tracking information to determine whether it is interested in the index, based on the interest measurement policy.
In this paper, we adopt a simple policy: if the number of queries a node receives in the last TTL interval is greater than a threshold value c, the node is considered to be interested in the index.

Subscriber List: In this list, each node records the node ids of the downstream nodes (including itself) that are interested in the index. It records only the nearest interested node from each of its downstream branches. When a node receives an updated index, it pushes the received index to the nodes on the list.

The following messages are used for the DUP tree maintenance:

subscribe(Ni): the subscribe message for Ni.
unsubscribe(Ni): the unsubscribe message for Ni.
substitute(Ni, Nj): informs the upstream nodes to replace Ni with Nj in their subscriber lists.

Suppose no node is interested in the index initially. If a node, say N6, finds that it is interested in the index, it adds itself to its subscriber list. Then it either sends out subscribe(N6) explicitly or piggybacks subscribe(N6) by setting the interest bit in the request packet it sends out. This message is routed through the underlying index search tree until it reaches the root N1. Intermediate nodes along the path add N6 to their subscriber lists when they receive subscribe(N6). When the subscribe message reaches N1, N1 adds N6 to its subscriber list
and pushes the current and future updated index directly to N6.

Figure 2. An evolving dynamic update propagation tree: (a), (b), and (c). Nodes linked by arrows are in the DUP tree; nodes linked by dotted lines are in the virtual path.

Nodes that have at least one subscriber form a path, as shown by the dotted line in Figure 2 (a). This path is called the virtual path. Nodes N2, N3, and N5 are in the virtual path but not in the DUP tree. Only N1 and N6 are in the DUP tree and involved in the update propagation, which reduces the cost of update propagation.

Later, if N4 finds that it wants to be informed of the update, it sends out subscribe(N4) to set up a virtual path. When the virtual path reaches N3, N3 knows that two nodes from its two different branches are interested in the index. N3 then replaces the message with a substitute(N6, N3) message to ask upstream nodes to replace N6 with N3 in their subscriber lists. The first node in the DUP tree that receives this substitute message (e.g., N1) replaces N6 with N3 in its subscriber list. After this, N1 pushes future updates to N3, and N3 forwards the updates to N4 and N6.

If N6 is no longer interested in the index after it joins the DUP tree, it sends unsubscribe(N6) to the upstream nodes in the index search tree. Upon receiving the unsubscribe message, nodes along the path remove N6 from their subscriber lists. This clears the virtual path for N6. When the unsubscribe message reaches N3 in the DUP tree, N3 stops forwarding the index updates to N6. If N3 then has only one child, N4, in its subscriber list, it sends a substitute(N3, N4) to inform the upstream nodes that N3 no longer needs the update. After the first upstream node in the DUP tree (N1 in our example) catches this message, it pushes the updates directly to N4 instead of N3 (see Figure 2 (c)).

One nice property of DUP is its low overhead.
The number of subscribers that each node needs to maintain is at most equal to the number of its direct children in the index search tree. For example, in Figure 2 (b), N3 needs to maintain at most two subscribers. Suppose another descendant, N5, N7, or N8, wants to join the DUP tree. For N7 or N8, N6 takes care of them; for N5, after it joins the DUP tree, it replaces N6 as a subscriber of N3, and N5 lists N6 as its own subscriber. (Footnote: A request from the N5 branch is caught by N5 or N6 and never reaches N3. This also explains why the subscriber list needs at most one entry for each downstream branch.) The formal description of the algorithm is given in Figure 3.

C. Node Arrival, Departure, and Failure

Since nodes may join or leave the network at any time, the topology of the index search tree may change and the update propagation may be affected. This section extends DUP to deal with such changes. Note that the underlying peer-to-peer network protocol takes care of topology changes of the index search tree; detailed maintenance operations of the index search tree can be found in [2]. DUP does not interfere with these operations. It only makes necessary adjustments to the DUP tree when the topology changes. Most of these adjustments are kept local to the node that joins or leaves the network, and the overhead is small.

After a new node joins the network, it is responsible for a subset of the indices previously maintained by a neighboring node. In Figure 2 (a), suppose a new node N3' is inserted between N3 and N5 and takes care of indices that previously belonged to N3. After the insertion, N3 notifies N3' that N6 is in its subscriber list. N3' inserts N6 into its subscriber list and becomes an intermediate node in the virtual path. If the arriving node falls outside of any virtual path, such as between N6 and N8, nothing specific needs to be done.

A node may leave the network at will or due to node/link failure.
If a node Ni leaves on its own, it informs its neighbors about its leaving. The neighboring node Nj that is chosen to take care of Ni's indices acts as Ni in the update propagation process after Ni leaves. The only exception is when the leaving node is the end node of a virtual path, e.g., N6 in Figure 2 (a). In this case N6 sends an unsubscribe(N6) upstream to clear the virtual path related to it. No specific action needs to be taken if a leaving node does not belong to any virtual path.

Dealing with node failure is more complicated. Different methods are used to deal with failed nodes with different roles.

Notations:
  S_list_i: the subscriber list of node Ni, initially empty.
  |S_list_i|: the length of S_list_i.
  S_list_i[0]: the only member of S_list_i when |S_list_i| = 1.

(A) When a query for the index arrives at Ni:
  refresh the access tracking information;
  if (Ni is interested in the index according to its policies and Ni is not in S_list_i) then
    call process_subscribe(Ni, Ni);
  send out the request for the index if the cache misses;

(B) When the subscribe(Nj) message arrives at Ni:
  call process_subscribe(Nj, Ni);

(C) When the substitute(Nj, Nk) message arrives at Ni:
  S_list_i = (S_list_i - {Nj}) + {Nk};
  if (Ni is the root) then return;
  if (|S_list_i| == 1) then /* Ni is not in the DUP tree */
    forward substitute(Nj, Nk) upstream;

(D) When Ni loses interest in the index:
  call process_unsubscribe(Ni, Ni);

(E) When unsubscribe(Nj) arrives at Ni:
  call process_unsubscribe(Nj, Ni);

process_subscribe(Nj, Ni):
  if (Ni is the root) then { S_list_i = S_list_i + {Nj}; return; }
  if (|S_list_i| == 1) then
    Nk = S_list_i[0]; /* temporarily save the old subscriber id */
  S_list_i = S_list_i + {Nj}; /* |S_list_i| is increased by one */
  if (|S_list_i| == 1) then /* did not have a subscriber, now has one */
    send subscribe(Nj) upstream;
  else if (|S_list_i| == 2) then /* had one subscriber, now two */
    send substitute(Nk, Ni) upstream; /* replace the old subscriber Nk with itself */
  /* if |S_list_i| > 2, Ni was already in the DUP tree before the message arrived; no extra action is needed */

process_unsubscribe(Nj, Ni):
  S_list_i = S_list_i - {Nj};
  if (Ni is the root) then return;
  if (|S_list_i| == 0) then /* does not have a subscriber */
    send unsubscribe(Ni) upstream;
  else if (|S_list_i| == 1) then /* has one subscriber */
    send substitute(Ni, S_list_i[0]) upstream; /* replace itself with its subscriber */
  /* if |S_list_i| > 1, Ni has two or more subscribers and remains in the DUP tree */

Figure 3. The DUP algorithm.
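The subscribe/substitute/unsubscribe handling of Figure 3 can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation: message delivery is modeled as direct method calls up the tree, and the unsubscribe path forwards the departing subscriber's id upstream, matching the worked example in Section III-B. The scenario replays Figure 2.

```python
class Node:
    """One peer in the index search tree, keeping a DUP subscriber list."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # upstream neighbor in the index search tree; None at the root
        self.subs = set()      # subscriber list: nearest interested node per downstream branch

    def subscribe(self, nj):
        """subscribe(nj) arriving from downstream (Figure 3, case B)."""
        old = next(iter(self.subs)) if len(self.subs) == 1 else None
        self.subs.add(nj)
        if self.parent is None:                 # root: just record the subscriber
            return
        if len(self.subs) == 1:                 # first subscriber: extend the virtual path
            self.parent.subscribe(nj)
        elif len(self.subs) == 2:               # second branch: this node joins the DUP tree
            self.parent.substitute(old, self)

    def substitute(self, nj, nk):
        """substitute(nj, nk) arriving from downstream (Figure 3, case C)."""
        self.subs.discard(nj)
        self.subs.add(nk)
        if self.parent is not None and len(self.subs) == 1:
            self.parent.substitute(nj, nk)      # still only on the virtual path: forward

    def unsubscribe(self, nj):
        """unsubscribe(nj) arriving from downstream (Figure 3, case E)."""
        self.subs.discard(nj)
        if self.parent is None:
            return
        if len(self.subs) == 0:                 # branch emptied: clear the virtual path
            self.parent.unsubscribe(nj)
        elif len(self.subs) == 1:               # leave the DUP tree: hand over to the survivor
            self.parent.substitute(self, next(iter(self.subs)))

# Topology of Figure 2: N1 - N2 - N3, with N3's children N4 and N5, and N5's child N6.
n1 = Node("N1"); n2 = Node("N2", n1); n3 = Node("N3", n2)
n4 = Node("N4", n3); n5 = Node("N5", n3); n6 = Node("N6", n5)

n6.subscribe(n6)                        # Figure 2 (a): N1 pushes straight to N6
print(sorted(x.name for x in n1.subs))  # ['N6']
n4.subscribe(n4)                        # Figure 2 (b): N3 joins the DUP tree
print(sorted(x.name for x in n1.subs))  # ['N3']
print(sorted(x.name for x in n3.subs))  # ['N4', 'N6']
n6.unsubscribe(n6)                      # Figure 2 (c): N1 now pushes straight to N4
print(sorted(x.name for x in n1.subs))  # ['N4']
```

Only N1 and the interested nodes ever appear in the root's subscriber list, which is what lets DUP skip the intermediate virtual-path nodes during propagation.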
Figure 2 (b) is used as an example to illustrate how to handle node failures.

1) If the failed node does not belong to any virtual path, no specific action is needed.

2) The failed node is the last node of a virtual path (e.g., N6). In this case, the virtual path from N6 to N3 is no longer necessary. The upstream node in the virtual path, N5, can detect this failure and send an unsubscribe(N6) upstream. The upstream nodes process this message according to the algorithm in Figure 3 (E).

3) The failed node is inside a virtual path (e.g., N5) and has one subscriber. This node failure can be detected by its downstream neighbor node N6 in the virtual path. N6 deals with this node failure by sending a subscribe(N6) upstream. This message is caught by the node, say Ni, that replaces N5. (Footnote: Ni is not shown in Figure 2 (b) to keep its location general.) If N5's previous parent N3 is also the parent of Ni, Ni needs to do nothing. Otherwise, Ni informs N3 by sending an unsubscribe(N6). Then Ni takes care of the subscribe(N6) according to the algorithm in Figure 3 (B), and N3 takes care of the unsubscribe(N6) message according to the
algorithm in Figure 3 (E).

4) The failed node is a node in the DUP tree that has multiple subscribers (e.g., N3). This is similar to the above case, except that both N4 and N5 send subscribe messages to the node Ni which replaces N3. Ni processes these messages according to the algorithm in Figure 3 (B).

5) The failed node is the root of the index search tree (e.g., N1). In this case, all indices maintained by N1 are lost. N2 can still set up the virtual path and inform the new root Ni that it should push the index to N3. Ni starts the update propagation process after receiving the refreshed index information.

IV. PERFORMANCE EVALUATIONS

Extensive simulations are carried out to evaluate the performance of the proposed scheme and compare it with PCX and CUP. The performance metrics used in this paper are the average query latency and the average query cost, as they are widely adopted in previous studies [2], [3], [11], [16]. The average query latency is represented by the average number of hops that a request needs to travel before it reaches a valid index. The average query cost is defined as the total number of hops that query-related messages such as requests, replies, and updates travel in the network, divided by the total number of queries. In CUP and DUP, the query cost also includes the messages used to propagate interests. For example, in CUP, extra messages are used to inform neighbors about their interests. In DUP, extra messages, such as the subscribe and unsubscribe messages, are used to maintain the DUP tree.

In the simulation, a peer-to-peer network with n nodes is studied. The maximum degree of the index search tree is D. The number of children of each node is uniformly selected from [1, D]. Different tree topologies are studied in our simulation and the results are similar. Therefore, we present the results based on one randomly generated topology for each D value. The index is maintained at the root node.
The TTL of the index is set to 60 minutes, which is based on a measurement study of peer-to-peer networks [17]. If CUP or DUP is used, the root pushes the updated index to interested nodes exactly one minute before the previous index expires. The latency of message transfer between two nodes follows an exponential distribution with a mean value of 0.1 seconds.

Queries are generated with an arrival rate λ. The query inter-arrival time follows an exponential distribution (default) or the heavy-tailed Pareto distribution. The Pareto distribution is used in our simulation because recent studies show that some peer-to-peer networks exhibit Pareto query inter-arrival time patterns [10]. In the Pareto distribution, the CDF of the inter-arrival time is F(x) = 1 - (k/(x+k))^α, where usually 2 > α > 0. When α > 1, the mean query arrival rate is (α-1)/k. Similar to [11], two α values, 1.05 and 1.2, are used to test the effects of the Pareto distribution. The scale parameter k of the Pareto distribution is set so that (α-1)/k equals the query arrival rate λ used in each simulation.

The queries are distributed to nodes according to a Zipf-like distribution, which has been frequently used to model non-uniform distributions. In the Zipf-like distribution, the query probability of node Ni (1 <= i <= n) is P_i = i^(-θ) / Σ_{k=1}^{n} k^(-θ). This distribution represents the situation where a small number of nodes generate most of the queries while other nodes generate only a smaller number of queries. In the simulation, we vary θ in the range of [0.5, 4] to show the effect of access skewness on the system performance.

Most system parameters are listed in Table I. The second column lists the default values of these parameters. In the simulation, we may change the parameters to study their impacts; the ranges of the parameters are listed in the third column. Each simulation is kept running until at least the 95% confidence interval of the query latency is obtained.
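The two query workloads above can be sketched as follows. The Pareto sampler inverts the stated CDF F(x) = 1 - (k/(x+k))^α with k = (α-1)/λ, so the mean arrival rate is λ; the Zipf weights implement P_i = i^(-θ) / Σ k^(-θ). This is an illustrative sketch of the stated distributions, not the authors' simulator.

```python
import random

def pareto_interarrival(alpha, lam, rng=random):
    """Draw an inter-arrival time with CDF F(x) = 1 - (k/(x+k))^alpha,
    where k = (alpha-1)/lam so that the mean arrival rate equals lam."""
    k = (alpha - 1) / lam
    u = rng.random()
    # Inverting F gives x = k * ((1-u)^(-1/alpha) - 1).
    return k * ((1 - u) ** (-1.0 / alpha) - 1)

def zipf_weights(n, theta):
    """Query probability of node i (1-based): P_i = i^-theta / sum_k k^-theta."""
    norm = sum(k ** -theta for k in range(1, n + 1))
    return [i ** -theta / norm for i in range(1, n + 1)]

random.seed(7)
samples = [pareto_interarrival(1.2, 1.0) for _ in range(5)]
print(all(x >= 0 for x in samples))   # True: inter-arrival times are nonnegative

w = zipf_weights(4096, 0.8)
print(round(sum(w), 9))               # 1.0: the weights form a distribution
print(w[0] > w[1] > w[4095])          # True: hot nodes query most often
```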
The total simulation time is at least 180,000 seconds.

TABLE I. SIMULATION PARAMETERS

Parameter                                Default value   Range
Number of nodes n                        4096
Maximum node degree D                    4               2 to 10
Mean query arrival rate λ (queries/sec)                  up to 100
Zipf parameter θ                                         0.5 to 4
Pareto parameter α                       N/A             1.05, 1.20
Threshold value c                        6               2 to 10

A. Simulation Results

1) Effects of the Threshold Value c: In order to determine whether a node is interested in the index, the threshold value c was introduced in Section III-B. If c is large, few nodes are marked as interested nodes. As a result, few nodes get the index updates, and the query latency is long. On the other hand, if c is too small, some nodes may receive the index updates although they will never access the index, which increases the average query cost.
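The interest test behind c can be sketched as a sliding-window counter: a node records query arrival times and is considered interested if more than c queries fell inside the last TTL interval, per the policy in Section III-B. This is an illustrative sketch with hypothetical names, not the authors' implementation.

```python
from collections import deque

class InterestTracker:
    """Sliding-window interest test: a node is interested in an index if it
    received more than c queries for it during the last TTL interval."""
    def __init__(self, ttl, c):
        self.ttl = ttl
        self.c = c
        self.times = deque()              # arrival times of recent queries

    def record_query(self, now):
        self.times.append(now)

    def interested(self, now):
        # Drop queries that fell out of the window (now - ttl, now].
        while self.times and self.times[0] <= now - self.ttl:
            self.times.popleft()
        return len(self.times) > self.c

tracker = InterestTracker(ttl=3600, c=2)  # 60-minute TTL, threshold c = 2
for t in (0, 600, 1200):                  # three queries in the first 20 minutes
    tracker.record_query(t)
print(tracker.interested(1500))           # True: 3 > c queries in the window
print(tracker.interested(4000))           # False: the t = 0 query has aged out
```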

TABLE II. THE EFFECTS OF THE THRESHOLD VALUE c (average query cost and average query latency for λ = 0.1, 1, and 10 queries per second)

Figure 4. The performance as a function of the mean query arrival rate λ: (a) query latency with 95% confidence interval; (b) relative cost compared to PCX.

As shown in Table II, as c increases, the average query cost decreases because fewer nodes are considered interested nodes. One exception is when λ = 10 queries per second: the average query cost first decreases as c increases from 2 to 6, and then increases as c increases from 6 to 10. This is because when the query rate is high, if the threshold value is too large, the nodes that should get the updated index cannot get it; later queries have to send out requests to get the index, which increases the cost. On the other hand, if the threshold value is too small, more nodes receive the updates and the cost also increases. Overall, we found that a threshold value of 6 achieves a good balance between the query cost and the query latency. Therefore, we use this c value in the rest of our simulations.

2) Effects of the Query Arrival Rate λ: In this section, we evaluate the effects of the query arrival rate on the system performance, where the inter-arrival time follows an exponential distribution. Figure 4 (a) shows the average query latency with 95% confidence interval as the query arrival rate varies. As the query arrival rate increases, the probability that the index is cached by each node increases, and a cached copy can serve more queries before it expires. Thus, the average query latency decreases. Compared with PCX and CUP, DUP has the lowest query latency because the updates are proactively pushed to interested nodes, and the update propagation is faster than in CUP as the updates take short-cuts.
Figure 4 (b) shows the relative average cost of CUP and DUP compared to PCX. When the query arrival rate is low, few queries are generated. For example, when λ = 1 query per second, only one query is generated per second in the whole network with a total of 4096 nodes. We can expect that pushing updates does not perform very well, but still better than PCX. As shown in Figure 4 (b), CUP and DUP reduce the cost by about 20%, and DUP performs better than CUP. As the query arrival rate increases, the performance of both schemes improves. However, the cost of CUP can at most be reduced to about 50% of that of PCX. This agrees with our analysis in Section II-B. Because DUP does not have such a performance limitation, it performs much better than CUP. The cost of DUP can be reduced to 20% of that of PCX when the query arrival rate is high.

3) Effects of the Number of Nodes: In the simulation, we vary the number of nodes to study how the proposed scheme performs as the network size changes. Table III shows the query latency of the different schemes under different node sizes and query arrival rates. By checking each row of Table III, we can see that the query latency of all the schemes increases as the number of nodes increases. This is because the average distance
from a node in the network to the root increases as the number of nodes increases.

TABLE III. COMPARISON OF PCX, CUP, AND DUP WHEN THE NUMBER OF NODES CHANGES (query latency of each scheme for λ = 0.1, 1, and 10)

By checking each column of Table III, we can see that CUP performs better than PCX and DUP performs the best. In many cases, DUP performs an order of magnitude better than CUP, even though CUP already performs much better than PCX.

Figure 5 compares the average access cost of the different schemes. It shows that CUP performs better than PCX, but the difference becomes smaller as the number of nodes increases. When the number of nodes increases, more nodes fall between an interested node and the authority node, which incurs larger pushing overhead in CUP. DUP is able to reduce the pushing overhead by skipping unnecessary nodes; therefore its relative performance compared to PCX still increases when the number of nodes increases.

Figure 5. The relative cost compared to PCX as a function of the number of nodes.

4) Effects of the Maximum Node Degree D: D determines the maximum number of children each node can have. If D is larger, each node can have more children, and the average number of hops between a node in the network and the root decreases as D increases, since the total number of nodes is fixed in this simulation. Therefore, the query latency decreases when D increases, as shown in Figure 6 (a). With a larger D, PCX performs better as the average number of hops between a node and the root decreases, because a request needs to travel fewer hops when a cache miss occurs. However, DUP still has a much lower cost than PCX and CUP, even when D is as large as ten.

5) Effects of the Zipf Parameter θ: The Zipf parameter θ determines how the queries are distributed among the nodes. A small θ means that the query distribution is close to uniform.
Large θ means that more queries are generated by fewer hot nodes, i.e., there are hot query spots in the peer-to-peer network. Figure 7 (a) shows that DUP has a very low query latency. As shown in Figure 7 (b), the query cost of DUP is much smaller than that of PCX as θ increases, because DUP can deliver updates to the hot spots with very low overhead. However, to push the index to interested nodes, CUP relies on many intermediate nodes. Since these nodes are less likely to access the index when θ is large, CUP does not perform well.

6) Effects of Pareto Arrival: In the previous simulations, the query inter-arrival time follows an exponential distribution. In this section, we study the performance under the Pareto distribution, as some peer-to-peer networks exhibit such a query arrival pattern. The Pareto distribution has a parameter α that determines the burstiness of the queries. When α is small, the queries are more bursty, i.e., more queries arrive within a short time interval, with longer idle periods between such bursts. Two α values, 1.05 and 1.20, are used to study the effects of the burstiness of query arrivals. As shown in Figure 8, DUP performs much better than CUP in both cases. Generally speaking, all schemes perform better when α is 1.05, which means that query burstiness improves the system performance: more queries are generated in short intervals, so a cached index can be used more times before it expires. It is interesting to see that when λ > 30 queries per second, the relative query cost of DUP (α = 1.05) and CUP (α = 1.05) compared to PCX increases slightly. The reason is that when the query rate is high and the queries are bursty, a node may be considered an interested node during a burst and become an uninterested node during the subsequent idle period. This hurts the update propagation because some pushed updates are wasted. However, this effect is not significant, and even with it, DUP and CUP still perform much better than PCX.

V. RELATED WORK

The idea of caching indices to reduce query latency and network traffic has been studied extensively [2], [3], [4], [9], [10], [18], [19]. In CAN [2], nodes cache recently accessed indices to serve their own queries or passing-by requests. Sripanidkulchai [18] and Markatos [10] proposed two similar schemes for Gnutella that cache the results of data queries (the indices used in Gnutella) to reduce the query latency. These studies focus on caching indices passively and therefore face cache consistency problems. Although TTL can be used for cache consistency, it increases the query latency when the TTL expires. The CUP scheme proposed by Roussopoulos and Baker addresses this issue by using update propagation [11]. However, our performance analysis and simulation results show that even though CUP performs much better than pure index caching schemes, its performance is still limited.

DUP adopts the idea of application-level multicast [16], [20]. In Bayeux [20], each node joins a multicast group by sending a request all the way to the root, which then sends a confirmation message back to the requester through the underlying routing protocols. The root and all other nodes in Bayeux need to maintain the list of all their descendant nodes and process their descendants' join and leave requests. DUP is more scalable than Bayeux because each node only needs to maintain the information of its direct children in the tree.
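The contrast can be sketched as follows: a propagation tree whose nodes store only their direct children, and a join operation that skips uninterested intermediate nodes on the index search path. This is a minimal illustration of the idea, not the paper's protocol; all class and method names are invented for the example.

```python
# Minimal sketch of a dynamic update propagation tree in the spirit of DUP:
# each node records only its direct children (unlike Bayeux, where the root
# tracks every descendant), and uninterested nodes on the index search path
# never become part of the tree.

class TreeNode:
    def __init__(self, name):
        self.name = name
        self.children = []            # direct children only

    def attach(self, child):
        self.children.append(child)

    def push_update(self, index, log):
        """Propagate an index update down the tree; every hop is a node
        that actually subscribed to the update."""
        log.append(self.name)
        for child in self.children:
            child.push_update(index, log)

def join(root, search_path, new_node):
    """Graft a new interested node onto the deepest interested node on its
    index search path; uninterested hops (None) are skipped entirely."""
    interested = [n for n in search_path if n is not None]
    parent = interested[-1] if interested else root
    parent.attach(new_node)
    return parent

if __name__ == "__main__":
    root = TreeNode("authority")
    a = TreeNode("A"); root.attach(a)
    # B's search path to the root passes through two uninterested nodes
    # (represented as None) and the interested node A.
    b = TreeNode("B")
    parent = join(root, [a, None, None], b)
    print("B attached to", parent.name)   # -> B attached to A
    log = []
    root.push_update("index-42", log)
    print("update visited:", log)         # -> ['authority', 'A', 'B']
```

Because the authority node never tracks descendants beyond its direct children, join and leave handling stays local to a subtree, which is the scalability argument made against Bayeux above.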
SCRIBE [16] creates a multicast tree similar to the index search tree. The join and leave requests of a node are handled locally by its parent in the multicast tree. However, SCRIBE propagates data through the multicast tree in a way similar to CUP: intermediate nodes have to forward the data hop by hop to the subscriber. In DUP, intermediate nodes can be skipped to provide better performance.

VI. CONCLUSIONS

Index update propagation in peer-to-peer networks can significantly reduce the query latency and the query cost. In this paper, an update propagation scheme called DUP has been proposed. DUP builds a dynamic update propagation tree on top of the existing index search structure with very low cost. Unlike the update propagation structures in some existing schemes, the DUP tree only involves the nodes that are essential for update propagation. By pushing updates along the tree, both the query cost and the query latency are reduced. Extensive simulation results showed that the proposed scheme performs much better than existing update caching and propagation schemes. DUP provides a low-cost platform for propagating index updates in peer-to-peer networks. The idea of DUP may also be applied to more general data dissemination scenarios; we plan to extend DUP to a general data dissemination platform in overlay networks.

REFERENCES

[1] Napster.
[2] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, "A scalable content-addressable network," ACM SIGCOMM, 2001.
[3] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, "Chord: A scalable peer-to-peer lookup service for internet applications," ACM SIGCOMM, 2001.
[4] B. Y. Zhao, J. Kubiatowicz, and A. Joseph, "Tapestry: An infrastructure for fault-tolerant wide-area location and routing," Technical Report UCB/CSD, University of California at Berkeley, Computer Science Department, 2001.
[5] J. Xu, "On the fundamental tradeoffs between routing table size and network diameter in peer-to-peer networks," IEEE INFOCOM, 2003.
[6] Gnutella.
[7] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong, "Freenet: A distributed anonymous information storage and retrieval system," in Designing Privacy Enhancing Technologies, LNCS 2009, 2001.
[8] E. Cohen and S. Shenker, "Replication strategies in unstructured peer-to-peer networks," ACM SIGCOMM, 2002.
[9] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker, "Search and replication in unstructured peer-to-peer networks," 16th ACM International Conference on Supercomputing, 2002.
[10] E. P. Markatos, "Tracing a large-scale peer-to-peer system: an hour in the life of Gnutella," 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, 2002.
[11] M. Roussopoulos and M. Baker, "CUP: Controlled update propagation in peer-to-peer networks," Proceedings of the 2003 USENIX Annual Technical Conference, 2003.
[12] J. Gwertzman and M. Seltzer, "World-Wide Web Cache Consistency," USENIX 1996 Annual Technical Conference, January 1996.
[13] G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, September/October 2003 (a preliminary version appeared in ACM MobiCom '00).
[14] O. Wolfson, S. Jajodia, and Y. Huang, "An Adaptive Data Replication Algorithm," ACM Transactions on Database Systems, vol. 22, no. 2, 1997.
[15] P. Cao and C. Liu, "Maintaining Strong Cache Consistency in the World-Wide Web," IEEE Transactions on Computers, April 1998.
[16] A. Rowstron, A.-M. Kermarrec, M. Castro, and P. Druschel, "SCRIBE: The design of a large-scale event notification infrastructure," Networked Group Communication, 2001.
[17] S. Saroiu, P. K. Gummadi, and S. D. Gribble, "A measurement study of peer-to-peer file sharing systems," Proceedings of Multimedia Computing and Networking (MMCN), January 2002.
[18] K. Sripanidkulchai, "The popularity of Gnutella queries and its implications on scalability," 2.cs.cmu.edu/ kunwadee/research/p2p/gnutella.html.
[19] L. Xiao, X. Zhang, and Z. Xu, "On reliable and scalable peer-to-peer Web document sharing," Proceedings of the 2002 International Parallel and Distributed Processing Symposium, 2002.
[20] S. Q. Zhuang, B. Y. Zhao, A. D. Joseph, R. H. Katz, and J. Kubiatowicz, "Bayeux: An architecture for scalable and fault-tolerant wide-area data dissemination," Proc. of the 11th International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV 2001), June 2001.