Why Shadowfax? Our cluster is so fast that we named it after the Lord Of The Rings character!

Website once known as http://CalcPage.tripod.com (1988 – 2008)


Saturday, November 26, 2011

CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VI

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. Our new Shadowfax Cluster is coming along quite well. We have a nice base OS in 64bit Ubuntu 11.04 Natty Narwhal Desktop running on our AMD dual-core Athlons over a gigE LAN. The Unity desktop isn't that different from the GNOME desktop we've been using these past few years on Fedora and Ubuntu, and Natty is proving very user friendly and easy to maintain! This week we installed openMPI and used flops.f to scale our cluster to 14 cores! Remember, we needed openSSH public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node as needed. We created a user common to all cluster nodes, called "jobs" in memory of Steve Jobs, so the cluster user can simply log into one node and be logged into all nodes at once (you can actually ssh into each node as "jobs" without specifying a userid or password)!

InstantCluster Step 1: Infrastructure

Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial, but they become crucial when running the entire cluster for extended periods of time! Also, you need enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was already taken care of for us!

InstantCluster Step 2: Hardware

You need up-to-date Ethernet switches, Ethernet cards and cores, as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already set up for us. Also, we have 64bit dual-core AMD Athlons, and our HP boxes have 2GB of RAM each. We are still waiting for our quad-core AMD Phenom upgrade!

InstantCluster Step 3: Firmware

We wasted way too much time two years ago (2009-2010 CIS(theta)) trying out all kinds of Linux distros looking for a good 64bit base for our cluster. Last year (2010-2011 CIS(theta)) we spent way too much time testing out different liveCD distros. Last year, we also downgraded from 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but it was proving to be a pain to maintain. To name just one problem, the JRE and Flash were hard to install and update in Firefox. Two years ago, we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu! This year, 64bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster!

InstantCluster Step 4: Software Stack I

On top of Ubuntu we need to add openSSH with public-key authentication (step 4) and openMPI (step 5). Then we have to scale the cluster (step 6). In steps 7-10, we can discuss several applications to scatter/gather over the cluster, whether graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or Python app for Mersenne primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com, then:

If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files.

From your home directory, make .ssh secure by entering:

chmod 700 .ssh

Next, make .ssh your working directory by entering:

cd .ssh

To list/view the contents of the directory, enter:

ls -a [we used ls -l]

To generate your public and private keys, enter:

ssh-keygen -t rsa

The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
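If you'd rather skip the prompts entirely, ssh-keygen can do the same thing in one command. This is just a sketch of the non-interactive equivalent; we write to a scratch directory for illustration, whereas on a real node you would target the default ~/.ssh/id_rsa:

```shell
# Non-interactive equivalent of pressing Enter at every prompt:
# -q quiet, -t rsa key type, -N "" empty passphrase, -f output file.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
ls "$KEYDIR"    # id_rsa (private key) and id_rsa.pub (public key)
```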

To compare the previous output of ls and see what new files have been created, enter:

ls -a [we used ls -l]

You should see id_rsa containing your private key, and id_rsa.pub containing your public key.

To make your public key the only thing needed for you to ssh to a different machine, enter:

cat id_rsa.pub >> authorized_keys

[The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes that we could just download to each node's .ssh dir.]

[optional] To make it so that only you can read or write the file containing your private key, enter:

chmod 600 id_rsa

[optional] To make it so that only you can read or write the file containing your authorized keys, enter:

chmod 600 authorized_keys
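The by-hand merge of temp01-temp24 into one master authorized_keys file could also be scripted. Here's a hedged sketch; the two sample "keys" below are fakes standing in for real id_rsa.pub contents, and on the real LAN you would first scp each node's public key to the teacher station:

```shell
# Sketch: merge per-node public key files into one master
# authorized_keys file (what we did by hand with temp01-temp24).
WORK=$(mktemp -d)
echo "ssh-rsa AAAAfakekey21 jobs@node21" > "$WORK/temp21"
echo "ssh-rsa AAAAfakekey22 jobs@node22" > "$WORK/temp22"
cat "$WORK"/temp* >> "$WORK/authorized_keys"
chmod 600 "$WORK/authorized_keys"
wc -l < "$WORK/authorized_keys"    # one line per node key
```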

InstantCluster Step 5: Software Stack II

We then installed openMPI (we had far fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting, when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html

These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev. However, the way our firewall is set up at school, I can never update my apt-get sources properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!

Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!

We compiled flops.f on the Master Node (any node can be a master):

mpif77 -o flops flops.f

and tested openmpi and got just under 800 MFLOPS using 2 cores (one PC):

mpirun -np 2 flops

Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs or nodes with 2 cores each for example):

mpirun -np 4 --hostfile machines flops

Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable (or whatever your executable will be) in /home/jobs. And every node has the same "authorized_keys" text file, with all 25 keys, in /home/jobs/.ssh.
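For reference, here's a sketch of what such a "machines" hostfile looks like. The IPs are from our 10.5.129.x lab range but the exact entries are illustrative; openMPI also accepts an optional slots=N on each line to declare how many cores that node offers. We write a scratch copy here just to show the format, whereas on the cluster the file lives at /home/jobs/machines:

```shell
# Sketch of an openMPI hostfile: one node per line, optional slots=N.
MACHINES=$(mktemp)
cat > "$MACHINES" <<'EOF'
10.5.129.21 slots=2
10.5.129.22 slots=2
10.5.129.23 slots=2
EOF
cat "$MACHINES"
```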

Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dual-core processors; however, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Lucid 10.04 32bit ... and ... these new PCs were supposed to be quad-cores! We are still awaiting shipment.

InstantCluster Step 6: Scaling the cluster

Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! We streamlined the process.
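As a sanity check on that figure: assuming roughly 400 MFLOPS per core (half of the ~800 MFLOPS we measured on one dual-core PC), 7 nodes at 2 cores each works out to about 5.6 GFLOPS. The per-core number is an assumption for the estimate, not a fresh measurement:

```shell
# Back-of-the-envelope cluster total. MFLOPS_PER_CORE is an assumed
# figure (half the ~800 MFLOPS measured on one dual-core box).
NODES=7
CORES_PER_NODE=2
MFLOPS_PER_CORE=400
echo "$(( NODES * CORES_PER_NODE * MFLOPS_PER_CORE )) MFLOPS total"
```

That comes to 5600 MFLOPS, i.e. just over 5 GFLOPS, which matches what we saw.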
