
Wireless Cloud Computing

Project Objective:

To provide cloud services on locally available internet-enabled devices.

Motivation:

This idea targets regions where internet bandwidth is extremely low. With low bandwidth, fetching data from a distant server is inefficient. One solution would be to install a data center, but installing a data center is neither easy nor cheap. Hence, our solution can help: the cloud can be hosted on locally available laptops and smartphones. A further motivation is that the computational and communication capabilities of laptops and smartphones are increasing day by day, and this capacity can be put to better use. This is what our project aims at.

Setup Diagram:

Experimental Methodology:

This project proceeds in multiple stages.

Start with the basics: We started by installing the OpenStack cloud software on Orbit machines. We then ran map-reduce applications on them using Hadoop, an infrastructure for map-reduce applications.
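The map-reduce model that Hadoop implements can be illustrated with a small, self-contained sketch (plain Python here, not Hadoop itself): a map phase emits key-value pairs, a shuffle phase groups them by key, and a reduce phase aggregates each group. In a real Hadoop cluster, the shuffle step is where data moves between servers over the network.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) pairs, as a word-count mapper would."""
    return [(word, 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    """Shuffle: group values by key; in Hadoop this moves data between servers."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values collected for each key."""
    return {key: sum(values) for key, values in groups.items()}

documents = ["to be or not to be", "to map and to reduce"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(mapped))
print(counts["to"])  # "to" appears 4 times across both documents
```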

Test the performance of map-reduce: To evaluate the performance of map-reduce on low-bandwidth networks, we decided to initially remove the complexities of OpenStack. We deployed Hadoop directly on Orbit nodes.

The stages of performance tests are as follows:

Number of servers varied from 1 to 15

Ethernet bandwidth varied from 1 Gbps down to 1 Mbps
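One common way to emulate a reduced link rate on a Linux node is the `tc` traffic-control tool. The sketch below is illustrative only (the interface name `eth0` and the burst/latency values are assumptions, not taken from our setup); it caps a node's outgoing traffic at 1 Mbps using a token-bucket filter.

```shell
# Illustrative only: cap outgoing traffic on eth0 at 1 Mbps (requires root).
# tbf = token bucket filter; burst and latency values are typical examples.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# Remove the cap and restore the default queueing discipline afterwards.
tc qdisc del dev eth0 root
```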

The application that we used was TeraSort. We sorted 10^8 bytes of data on Hadoop. The results are as follows:

Results:

The growth of the execution time is as follows:

Conclusions:

From the above graph, we can conclude the following:

As the bandwidth decreases, performance degrades. This is due to an inherent property of the TeraSort application: its shuffle phase requires a large amount of data transfer among the servers, and this transfer is severely slowed by the reduced bandwidth. Hence we see an increase in the total execution time.

As the number of servers increases, performance improves. With more computation servers, the amount of work each server receives is reduced, so each server finishes its share faster, and all the servers work in parallel. Hence the observed gain is substantial.

We can see that, at 1 Mbps, having 10 servers working in parallel still yields an overall performance gain over a single server. Hence, what we lose in bandwidth can be regained in computation power by using multiple servers. This conclusion serves as a foundation for our future work: it is possible to deploy a cloud in low-bandwidth remote areas and still gain appreciable speedup.
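The trade-off behind this conclusion can be sketched with a simple analytical model. All constants below are illustrative assumptions, not our measured values: computation time divides across the servers, while the shuffle phase adds a network cost that grows as bandwidth shrinks (and disappears on a single server, where no data crosses the network).

```python
def terasort_time(n_servers, bandwidth_mbps,
                  compute_seconds=9000.0, shuffle_megabits=800.0):
    """Toy model of TeraSort runtime: computation parallelizes across
    servers, but the shuffle must move data over the shared network
    whenever more than one server is involved. Constants are
    illustrative assumptions, not measurements."""
    compute = compute_seconds / n_servers
    shuffle = 0.0 if n_servers == 1 else shuffle_megabits / bandwidth_mbps
    return compute + shuffle

single = terasort_time(1, 1)    # one server: no network shuffle cost
cluster = terasort_time(10, 1)  # ten servers sharing a 1 Mbps link
print(single, cluster)
```

Even with the shuffle penalty charged in full at 1 Mbps, the ten-server cluster finishes well ahead of the single server in this model, mirroring the trend in the graph above.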