Cloud computing has become a standard in almost all applications on the internet, whether it is a website or an application on the new powerful mobile devices. Whether we see the cloud as a means of load balancing for servers or use it for data centralization, it has really changed the way we use computers and mobile devices. But there is still more to come: personal clouds. No longer will we need service providers offering services for a hefty fee or with limited resources.

Soon there will be very simple cloud infrastructure that will make regular applications work on the internet as well as on local devices. Data will be available on the local machine, and applications will work seamlessly on any device on the internet while still providing the same functionality. In this scenario cheaper and smaller devices will play a much more important role. Instead of being unnecessarily powerful, they will be much smarter, and they will offer phenomenally longer battery life. Once connected to the main server they will communicate with minimum overhead and provide the same functionality as a regular computer. Computation requirements will thus be reduced to almost zero; all they will need to do is draw an application interface in a browser-like environment. The extra power will be used to communicate with other devices and work with them in an open environment, forming a perfect mesh.

Such an environment will make our communities smarter in a real sense. No longer will we need big data centers constantly burning resources. We will save a lot of power and make our cities greener. Computers will pool resources, work in close coordination, and tackle problems like drug research and prediction of natural disasters, along with other tasks that require a lot of computing power. Imagine a world where we utilize one hundred percent of a system's computational power and no longer need unnecessarily powerful computers, or mobile devices for that matter, with too many cores.

Security is a major concern with such a system. We will have to think beyond encryption and data abstraction and build smarter systems, systems that treat information as data and not just binary signals. These systems will be able to sense data and decide accordingly what action to take. They will also have actual learning and training procedures. Computers will be able to communicate with us and keep us updated, and such systems will be able to monitor the environment and work proactively.

In the near future we are going to witness a totally new breed of computers powered by a smart cloud. Intelligent systems connected through a smart network will power our lives and eliminate many problems in society. No longer will there be uneven distribution or wastage of resources.

We, the NeweraHPC team, are working diligently to convert these dreams into reality. This is a community effort and we would like you to join. For more details log on to

Red-Black Trees For Dynamic Data Structures

Red-black trees, commonly known as rbtrees, can be very useful for building dynamic data structures that serve a variety of purposes, from string-based hash tables to array-like linked lists. The Linux kernel uses rbtrees extensively for managing various sorts of data elements where the inputs are dynamic and large.

Tree-like data structures have long been used because they provide better search efficiency than other data structures such as linked lists and arrays. But there is a fundamental problem with trees: when the inputs are monotonically increasing, for example numeric keys 1, 3, 4, 9, 100, and so on, the tree tends to grow in one direction, degenerating into a list and losing the benefit of the tree structure itself. This problem is easily solved by an rbtree, which balances itself dynamically: when data is added and a node is no longer central, the tree rotates to a new central node so that the numbers of children on either side stay balanced.

Illustration 1: Tree - Generated from http://people.ksp.sk/~kuko/bak/

As shown in Illustrations 1 and 2, the tree generated by an rbtree is better balanced and more efficient.

Coming back to the implementation, there are a lot of interesting things we can do with an rbtree. One of the most basic and useful complex data structures is an array in which every newly added element is numbered serially, giving us the flexibility to add or remove elements easily without having to work within a fixed range. For example, when we add the first element its index automatically becomes 1, the next element is numbered 2, and so on. Whenever an element is removed, all the elements with higher indexes can be renumbered, reordering the array.
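A minimal sketch of that serially numbered array, using std::map as a stand-in for the underlying rbtree. The class name and methods here are illustrative, not the NeweraHPC API:

```cpp
#include <map>
#include <string>

// Illustrative sketch only; SerialArray and its methods are not the
// NeweraHPC interface.
class SerialArray {
    std::map<int, std::string> items_;   // index -> element
public:
    int add(const std::string &value) {
        // The first element automatically gets index 1.
        int index = items_.empty() ? 1 : items_.rbegin()->first + 1;
        items_[index] = value;
        return index;
    }
    void remove(int index) {
        items_.erase(index);
        // Renumber the remaining elements so the higher indexes close
        // the gap, reordering the array as described above.
        std::map<int, std::string> renumbered;
        int next = 1;
        for (const auto &entry : items_)
            renumbered[next++] = entry.second;
        items_.swap(renumbered);
    }
    const std::string &at(int index) const { return items_.at(index); }
    std::size_t size() const { return items_.size(); }
};
```

After adding "a", "b", "c" and removing index 2, element "c" moves up to index 2.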

Secondly, keys can be based on strings themselves, so we can implement string-based hash tables in which a group of elements is inserted under a string identifier.

An element can be searched for with object.search(parameters)*.

*Parameters depend upon the mode of the rbtree.

Zero Configuration High Performance and High Throughput Cluster

HPC (High Performance Computing) and HTC (High Throughput Computing) clusters have been used for a long time for scientific research and commercial activities. Since the beginning these clusters have relied on heavy configuration on both the server and the nodes.

We can design new hybrid clusters that build on the existing models, wherein nodes connected through fast interconnects on the local network or over the internet contribute their computing resources. The cluster will also have a dedicated mode in which the computers are connected to each other and devoted solely to the cluster. The cluster will have one remote boot server based on LTSP (the Linux Terminal Server Project). The remote server will be responsible for pushing the right operating system to each node according to its configuration, so tailored operating systems, optimized to deliver full performance, will run on the node computers. The remote server will also hold all the configuration required by the nodes.

This cluster system will also have a dynamic distributed file system. Any machine booting on the cluster with permanent storage will automatically become part of the distributed file system. There will be a central file server responsible for managing data distribution and backup. This server will decide where data is to be stored and will also push a copy of the data to the backup server. When a node boots into the cluster its storage will be reinitialized, and it will receive data shared from the other nodes or from the backup server, which will send it the data of a node that recently went down. The new node will read and write data on the distributed file system, and it will notify the central server before going down. The central server will then redistribute the data that was held on this node to the other nodes, and the cluster will resume normal operation.
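The redistribution step when a node goes down can be sketched as follows. The placement map and the round-robin policy are assumptions made for illustration, not the actual NeweraHPC design:

```cpp
#include <iterator>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: the central server tracks which data blocks
// live on which node, and reassigns a departed node's blocks to the
// survivors (round-robin here; the real policy is not specified).
using Placement = std::map<std::string, std::vector<std::string>>;

Placement redistribute(Placement placement, const std::string &leaving) {
    auto it = placement.find(leaving);
    if (it == placement.end())
        return placement;                    // unknown node, nothing to do
    std::vector<std::string> orphaned = std::move(it->second);
    placement.erase(it);
    if (placement.empty())
        return placement;                    // no surviving node left
    std::size_t i = 0;
    for (const auto &block : orphaned) {
        auto target = placement.begin();
        std::advance(target, i++ % placement.size());
        target->second.push_back(block);     // a survivor takes the block
    }
    return placement;
}
```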

Maintenance on this type of cluster will be very easy, as nodes can be added or removed on the fly and there is no special configuration required for adding new nodes or expanding the storage of the cluster as a whole.

I am still working on the
project and have completed the LTSP integration. I would like to have
your support. This is an open source GPL v2 program.

NeweraHPC (http://newerahpc.googlecode.com) is a simple scheduler for scheduling tasks on a cluster of heterogeneous computers. The program is completely open source, released under the GPLv2 license.

With the advancement of processors there is not much need for single-system-image and similar clusters. Many jobs, however, can be done very efficiently by a simple scheduler that fragments a job into smaller pieces, distributes them over a cluster of computers, and then collects the results. NeweraHPC does exactly that.
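The fragment-distribute-collect flow can be sketched in a few lines. The chunked summation below runs serially in one process purely to illustrate the idea; it is not NeweraHPC code:

```cpp
#include <numeric>
#include <vector>

// Each chunk is independent, so on a real cluster every run_chunk()
// call could execute on a different node.
long long run_chunk(long long lo, long long hi) {   // sums [lo, hi)
    long long s = 0;
    for (long long v = lo; v < hi; ++v) s += v;
    return s;
}

// Fragment the job "sum 1..n" into `chunks` pieces, "dispatch" each
// piece, then gather and combine the partial results.
long long scatter_gather(long long n, int chunks) {
    std::vector<long long> partial;
    long long step = n / chunks;
    for (int c = 0; c < chunks; ++c) {
        long long lo = 1 + c * step;
        long long hi = (c == chunks - 1) ? n + 1 : lo + step;
        partial.push_back(run_chunk(lo, hi));       // remote in reality
    }
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```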

It is a very simple library written in C++ that uses POSIX libraries, making it compatible with all available UNIX platforms. It provides the basic services required, such as a simple garbage collector, and handles memory fragmentation very well.

Jobs can be distributed in two ways. The first is the traditional way, in which an external application processes data fed to it as independent files; the results are written to a standard file, which is then sent back to the node that requested the job.

The second method is called NXI (Newera Extensible Interface), which takes the form of dynamic link libraries containing docking functions. These functions are given data, which they process and return to the node. This method is very efficient, and embarrassingly parallel application frameworks can be written on top of it.
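The docking-function idea can be sketched as below. The function name, signature, and calling convention are assumptions, since the actual NXI interface is not shown here; in a real deployment the function would live in a shared library and be resolved with dlopen()/dlsym(), but it is defined in-process so the sketch stays self-contained:

```cpp
#include <string>

// Hypothetical docking function in the NXI style: it receives a
// fragment of data, processes it, and hands the result back to the
// requesting node. extern "C" keeps the symbol name unmangled so a
// dynamic loader could find it by name.
extern "C" const char *nxi_dock(const char *input) {
    static std::string result;                   // owns the returned buffer
    result = std::string("processed:") + input;  // the actual "work"
    return result.c_str();
}

// The scheduler would obtain this pointer via dlsym(handle, "nxi_dock").
typedef const char *(*dock_fn)(const char *);
```

Because each call depends only on its own input fragment, such functions parallelize embarrassingly well across nodes.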

There are two versions of the program available in the repository, version 2 and 3.

The version 2 program is the older one and has NXI implemented in it. It has been deprecated and will soon be ported to version 3. A test suite available with the program can calculate the value of pi to a million decimal places.

Version 3 contains the external binary implementation, is more sophisticated, and will soon gain the ability to work with NXI.