If you can't beat them, join them! In other words, I think the consensus from our last meeting is that we are done with liveCD Linux distro solutions for HPC. What's to stop us from installing openMPI on our Linux partitions ourselves? So, that's the plan: install openSSH and openMPI directly on our Linux partitions to set up a more permanent cluster.
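A sketch of what that install might look like on each box, assuming Ubuntu's standard apt repositories (package names are my best guess and worth double-checking with apt-cache search before we run this):

```shell
# Hypothetical install commands for one node of the cluster.
sudo apt-get update
sudo apt-get install openssh-server openssh-client   # remote login between nodes
sudo apt-get install openmpi-bin libopenmpi-dev      # MPI runtime plus headers for mpicc
```

Repeated on every node, that alone gets us a box that can accept SSH logins and run MPI programs locally.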

Fret not, all is not lost, as we can learn from our experience this year. Cluster By Night should serve as a proof of concept. Namely, we CAN run openMPI over openSSH! pelicanHPC worked fine with a crossover Ethernet cable, but I am done with PXE boot! However, we can replicate all the number-crunching apps that pelicanHPC has by running Octave with MPITB over openMPI. Let's make sure not to repeat last year's mistake of trying out a million distros on our Linux partitions. Let's stick with our 32-bit Ubuntu 10.10 Desktop. We should take stock of where we've been.

CIS(theta) (2009-2010) got bogged down finding a stable 64-bit Linux distro to use on the Linux partitions of our dual-boot PCs. We used 32-bit and 64-bit Fedora 11 and 12. We tried CentOS, Scientific Linux, OSCAR and Rocks! We got a Torque server working for openMPI and helloMPI.c, but didn't get much farther than that, I'm afraid.

CIS(theta) (2010-2011) has to switch gears and take the best of all the above. Many in the HPC community talk about a software stack, so let's come up with our own! I think we can run our application (a C++ fractal program we design, povray, blender or openGL) on top of openMPI, on top of openSSH (with public key authentication), on top of Ubuntu. What about the hardware stack (64-bit dual-core AMD Athlons on top of a gigE switch)? I think we can make a stack like this work, what do you think?
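The public key authentication layer of that stack might look something like this on each node (a sketch only; "user" and "node02" are placeholder names for one of our accounts and one of the athlon boxes):

```shell
# Set up passwordless SSH from this node to node02 (hostname is a placeholder).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair with no passphrase
ssh-copy-id user@node02                    # append our public key to node02's authorized_keys
ssh user@node02 hostname                   # should now log in without a password prompt
```

openMPI launches its remote processes over SSH, so every node needs to reach every other node without a password prompt before mpirun will work across the LAN.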

BTW, openGL sounds interesting. Take a look at the links above from Thomas Jefferson High School. They've been running a clustering course for quite some time. In fact, they got into it when they won some sort of computing competition in the late 1980s and Cray donated a supercomputer to their school! More recently, they've been playing with openMosix and openMPI as well as fractals and povray, just like us. They have a lot of notes on openGL too! Also, if you want a good overview of all things MPI, take a look at the ualberta link; it's a very good overview in ppt style even though it's a PDF!

Then we should update our Poor Man's Cluster notes to Poor Man's Cluster 2.0 notes!
Step 1 can be about the physical plant, including electrical power, ventilation and AC.
Step 2 can be about the hardware stack, including our gigE LAN and Athlon boxes.
Step 3 can be about the firmware stack, including Ubuntu, openSSH and setting up public keys.
Step 4 can be about the software stack, whether it be openMPI+MPITB+Octave, MPI4py, povray, blender or openGL.
Step 5 can be about various applications and results.
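For the software stack step, a natural smoke test is last year's helloMPI.c spread across the LAN. A sketch, assuming openMPI is installed on every node, passwordless SSH is working, and "node01"/"node02" are placeholder hostnames for two of our dual-core boxes:

```shell
# Build a hostfile listing the nodes; slots=2 means two MPI ranks per dual-core box.
cat > hostfile <<EOF
node01 slots=2
node02 slots=2
EOF

mpicc helloMPI.c -o helloMPI                  # compile with the openMPI wrapper compiler
mpirun -np 4 --hostfile hostfile ./helloMPI   # launch 4 ranks across the two nodes
```

If each rank prints its greeting, we know the whole stack (Ubuntu, openSSH, openMPI) is sound before we layer Octave+MPITB or MPI4py on top.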



OLD CALCPAGE MISSION STATEMENT

The Calculus & Computer Science Archive Project
Online Since 1988

This site is here to help new and experienced teachers alike. Over the years I have developed many materials for several courses that I teach regularly. I have taught AP Computer Science since 1987, College Math since Fall 1993 and AP Calculus since 1994.

These materials have been edited and re-edited for ease of use in the classroom every time I have a chance to teach these courses. I have had many requests to share my materials at the conferences I frequent, whence this website.