Project Content

Graphs have always played an important role in computer science, e.g., for modeling relationships, processes, and networks.
In the era of Web 2.0, the Semantic Web, and social networks, new challenges arise from the rapidly growing size of such graph structures, which necessitates distributed storage and processing strategies.
In recent years, MapReduce has become the de facto standard for distributed, parallel processing of large-scale data and has paved the way for a rich ecosystem of open-source data processing frameworks.
Cloud services like Amazon's Elastic Compute Cloud (EC2) also enable small and medium-sized companies without their own infrastructure to analyze their big data with such frameworks, since resources are provided dynamically as needed.
In this project, we will use Apache Hadoop (more precisely the Cloudera distribution of Hadoop), one of the most popular open-source Big Data frameworks.
The participants will develop and implement an application on top of Hadoop for large-scale graph processing and analysis. Prior knowledge of Hadoop/MapReduce is desirable but not required.
Participants without prior knowledge of Hadoop/MapReduce will start with an initial introduction phase, in which they familiarize themselves with the basics of MapReduce by solving a mandatory exercise sheet.
However, you should have prior knowledge of Java programming (and experience using an IDE) and be willing to familiarize yourself with the principles of distributed processing of large-scale data.
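To give a feel for the MapReduce programming model used in the project, the following is a minimal sketch in plain Java (deliberately without the actual Hadoop API, so it runs standalone): computing vertex degrees of an undirected graph from an edge list, analogous to the classic word-count example. The class and method names are illustrative, not part of Hadoop.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch of the MapReduce pattern in plain Java (no Hadoop API):
// compute vertex degrees from an edge list, analogous to word count.
public class DegreeCount {

    // "Map" phase: each edge (u, v) emits the key-value pairs (u, 1) and (v, 1).
    static Stream<Map.Entry<String, Integer>> map(String[] edge) {
        return Stream.of(Map.entry(edge[0], 1), Map.entry(edge[1], 1));
    }

    // "Reduce" phase: group the emitted pairs by vertex and sum the counts.
    static Map<String, Integer> degrees(List<String[]> edges) {
        return edges.stream()
                .flatMap(DegreeCount::map)
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        // Triangle graph: every vertex has degree 2.
        List<String[]> edges = List.of(
                new String[]{"A", "B"},
                new String[]{"A", "C"},
                new String[]{"B", "C"});
        System.out.println(degrees(edges));
    }
}
```

In a real Hadoop job, the map and reduce steps become separate Mapper and Reducer classes, and the framework handles shuffling, grouping, and distribution across the cluster; the conceptual structure, however, is exactly the one above.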

More information about the topics will be presented in the introductory meeting. In addition, the participants are also invited to suggest their own ideas.