Bottom Line:
Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. Leveraging these cooperatively cached sensory data, queries are answered through composing these two-tier cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability.

ABSTRACT
Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications in which multiple-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries, and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data that are unlikely to be of interest to forthcoming queries are cached at the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing the two-tier cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability.
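The two-tier placement described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes popularity is simply the access count of each sensory attribute over a sliding window of recent time slots, with the top-k attributes cached at the sink and the remainder left at the grid-cell head nodes. All names (`TwoTierCache`, `record_slot`, `placement`) are hypothetical.

```python
# Hypothetical sketch of popularity-based two-tier cache placement.
# Popularity = query count per attribute over the last few time slots;
# the most popular attributes go to the sink cache, the rest stay at
# the head nodes of the grid cells.

from collections import Counter, deque

class TwoTierCache:
    def __init__(self, sink_capacity, window_slots=3):
        self.sink_capacity = sink_capacity
        self.window = deque(maxlen=window_slots)  # recent per-slot query logs

    def record_slot(self, queried_attrs):
        # queried_attrs: attributes requested by the queries of one time slot.
        self.window.append(Counter(queried_attrs))

    def placement(self):
        # Aggregate popularity over the recent window, then split: the sink
        # caches the top-k attributes, head nodes keep the remainder.
        popularity = Counter()
        for slot in self.window:
            popularity.update(slot)
        ranked = [attr for attr, _ in popularity.most_common()]
        return ranked[:self.sink_capacity], ranked[self.sink_capacity:]

cache = TwoTierCache(sink_capacity=2)
cache.record_slot(["temp", "temp", "humidity", "light"])
cache.record_slot(["temp", "humidity", "humidity"])
sink, heads = cache.placement()
# temp and humidity are most queried -> sink; light -> head nodes
```

A forthcoming query is then answered from the sink cache where possible, falling back to the head-node caches only for the less popular attributes, which is where the communication savings come from.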

f5-sensors-15-15033: Comparison of the accumulated energy consumption for MAQWR, where the cache size of the SN is set to 500, 600, 700, 800 and 900, respectively. The gradient of the curves represents the rate of energy consumption for answering the query. This figure shows that when the cache size of the SN is large enough to cache all sensory data requested by the query, the energy consumption is relatively small and steady. Otherwise, the energy consumption is much greater and increases significantly. For instance, in comparison with the case when the cache size is set to 800, 66% (or 40%) more energy is consumed when the cache size is set to 500 (or 600).

Mentions:
As previously mentioned, the index tree constructed in this article should be a relatively balanced tree for a skewed distribution, rather than an unbalanced tree as developed in our previous work [33]. The difference lies mainly in the tree node merging strategy. As presented in Algorithm 1, after dividing the network region into square grid cells of a given side-length, we first obtain the set of leaf nodes (denoted TNset) with inverted files, as generated by Algorithm 1 (Lines 1–13) in our previous work [33] (Line 1). Note that these leaf nodes correspond to grid cells, rather than to sensor nodes in the network. The weight (denoted wgt(nd1, nd2)) between neighboring nodes nd1 and nd2, which reflects the energy consumption of routing sensory data from one node to the other, is calculated using the function calNgbNdWgt(nd1, nd2). As presented by Algorithm 2 in our previous work [33] (Line 4), this function returns the sum of weights among all pairs of neighboring grid cells in the corresponding neighboring sub-regions. If there are neighboring nodes nd1 and nd2 in TNset that can be merged, that is, wgt(nd1, nd2) is the largest (Line 5), then nd1 and nd2 are merged into a newly generated node tn (Lines 6–7), and the inverted files (denoted tn.IvtF, for instance) are processed accordingly (Line 8). Note that tn corresponds to a sub-region in the network, namely the union of the sub-regions of nd1 and nd2. After this merging step, nd1 and nd2 are removed from TNset (Line 9), while tn is inserted into nTNset as a candidate for the next merging round (Line 10). This procedure iterates until all neighboring nodes in TNset have been examined and processed, after which TNset and nTNset are updated accordingly (Lines 12–13). The symbol ∅ in Line 13 denotes an empty set. The tree construction terminates once there is only one node left in TNset (Line 3), which corresponds to the root node of the index tree (Line 15).
An example of an index tree constructed by this algorithm is shown in Figure 3; it is a binary tree and more balanced than the tree shown in Figure 5 of our previous work [33]. Specifically, the 25 grid cells shown in Figure 2 correspond to the leaf nodes of the index tree, while sub-regions composed of multiple grid cells correspond to its non-leaf nodes. This index tree is constructed to facilitate the popularity-based cooperative caching mechanism presented in the following sections. The time complexity of Algorithm 1 is O(n × log2n), where n is the number of leaf nodes and log2n is the height of the index tree.
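The round-based merging described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: `cal_ngb_nd_wgt` here is a stand-in that counts adjacent cell pairs across the boundary of two sub-regions (the actual weight in Algorithm 2 of [33] reflects routing energy), and the names `TreeNode`, `neighbors`, and `build_index_tree` are hypothetical.

```python
# Sketch of the bottom-up merging in Algorithm 1: in each round, the
# unmerged neighboring pair with the largest weight is merged, until a
# single root node remains. Merging pairwise per round keeps the tree
# relatively balanced.

class TreeNode:
    def __init__(self, cells, children=()):
        self.cells = frozenset(cells)   # grid cells covered by this sub-region
        self.children = list(children)

def neighbors(a, b):
    # Two sub-regions are neighbors if any of their grid cells are
    # 4-adjacent on the grid; cells are (row, col) pairs.
    return any(abs(r1 - r2) + abs(c1 - c2) == 1
               for (r1, c1) in a.cells for (r2, c2) in b.cells)

def cal_ngb_nd_wgt(a, b):
    # Stand-in weight: number of adjacent cell pairs across the boundary,
    # a proxy for the routing cost between the two sub-regions.
    return sum(1 for (r1, c1) in a.cells for (r2, c2) in b.cells
               if abs(r1 - r2) + abs(c1 - c2) == 1)

def build_index_tree(leaves):
    tn_set = list(leaves)               # TNset
    while len(tn_set) > 1:              # until only the root remains
        n_tn_set, used = [], set()      # nTNset starts empty
        while True:
            # Pick the unmerged neighboring pair with the largest weight.
            best, pair = 0, None
            for i in range(len(tn_set)):
                for j in range(i + 1, len(tn_set)):
                    if i in used or j in used:
                        continue
                    a, b = tn_set[i], tn_set[j]
                    if neighbors(a, b):
                        w = cal_ngb_nd_wgt(a, b)
                        if w > best:
                            best, pair = w, (i, j)
            if pair is None:
                break
            i, j = pair
            used.update(pair)
            # tn covers the union of the two merged sub-regions.
            n_tn_set.append(TreeNode(tn_set[i].cells | tn_set[j].cells,
                                     (tn_set[i], tn_set[j])))
        # Carry over nodes left without a merge partner this round.
        n_tn_set.extend(nd for k, nd in enumerate(tn_set) if k not in used)
        tn_set = n_tn_set
    return tn_set[0]

# A 2x2 grid of leaf cells merges into a small balanced binary tree.
leaves = [TreeNode([(r, c)]) for r in range(2) for c in range(2)]
root = build_index_tree(leaves)
```

Because every round roughly halves the number of nodes, about log2 n rounds are needed for n leaves, which matches the O(n × log2 n) complexity and tree height stated above.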
