Oracle Blog

Blog for melvinkoh

Friday Jun 27, 2008

Looks like more and more applications are moving towards NVIDIA GPUs and the CUDA SDK. SciFinance from SciComp is a code-synthesis technology for building derivatives pricing and risk models. Just by changing certain keywords, it can generate CUDA-enabled code that is, according to the website, 30X-80X faster than the serial code. Also, check out the CUDA Zone website, which showcases many successes in accelerating application performance using CUDA and NVIDIA's GPUs.

Tuesday Jun 03, 2008

I came across a nice and simple tutorial for programmers who want a crash course in MPI, and I think it is a good place to start. It does not go into detail about the message-passing programming paradigm, but it gives step-by-step procedures for building GCC, setting up the environment variables, installing Open MPI, and so on, on a Linux machine. A warning, though: some knowledge of Linux/Unix is required. Once you get everything up and running, the tutorial walks you through several examples of increasing complexity. The last example is matrix multiplication, where it teaches how to parallelize the code using MPI.
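The usual way such tutorials parallelize matrix multiplication is row decomposition: the root process scatters blocks of rows of A, broadcasts all of B, each rank multiplies its block, and the root gathers the results. Here is a minimal pure-Python sketch of that decomposition, with a plain loop standing in for the MPI ranks (the function names are mine, not the tutorial's, and the real MPI calls like `MPI_Scatter`/`MPI_Gather` are only indicated in comments):

```python
def local_multiply(a_rows, b):
    """Compute the rows of A*B owned by a single rank."""
    n = len(b[0])
    k = len(b)
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a_rows]

def parallel_matmul(a, b, nprocs):
    """Emulate scatter / local compute / gather with nprocs ranks."""
    rows_per_rank = (len(a) + nprocs - 1) // nprocs
    result = []
    for rank in range(nprocs):  # each iteration stands in for one MPI rank
        # "MPI_Scatter": this rank receives its block of rows of A
        my_rows = a[rank * rows_per_rank:(rank + 1) * rows_per_rank]
        # "MPI_Gather": the root collects each rank's partial result
        result.extend(local_multiply(my_rows, b))
    return result

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]
print(parallel_matmul(a, b, 2))  # b is the identity, so this prints a
```

In a real MPI program each rank would run this loop body concurrently in its own process; the point of the sketch is just the data decomposition, which is why it scales: each rank touches only its own row block plus B.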

Wednesday May 28, 2008

During the recent JavaOne, Sun announced a new project called Project Hydrazine that allows for the rapid creation and deployment of hosted services across multiple device types. The diagram suggests that this new service will be deployed on the existing compute infrastructure of Network.com. I guess that Sun will use it to offer hosting services, something akin to Network.com's Compute Utility, except that it will not just be for compute.

Some people (here and here) have referred to it as Sun's new Cloud Computing platform. While the definition of Cloud Computing is subject to much debate, I'm still very excited about it. However, I'm very curious about how Project Caroline fits into all this, assuming there is any relation at all.

Monday May 05, 2008

Apache Hadoop is gaining a lot of attention in the web community, especially with support from Yahoo. It has a distributed filesystem and supports data-intensive distributed applications using the MapReduce computational model. It is viewed as an important piece of the puzzle in Cloud computing, but it can also be very useful for data-mining applications. I think it won't be long before it catches attention in HPC, if it hasn't yet. With its high scalability and fault-tolerant nature, I think it has a lot of uses in HPC. Given its data-intensive nature, I wonder whether there is any value in using Hadoop with Lustre. If anyone has insight into the I/O characteristics, I'll be glad to hear about it.
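For readers unfamiliar with the MapReduce model Hadoop implements, here is a minimal pure-Python sketch of the idea: a map phase emits (key, value) pairs, a shuffle groups the pairs by key, and a reduce phase folds each group. The function names are illustrative only; Hadoop's actual API is Java-based and distributes these phases across a cluster.

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user's mapper to every input record, emitting (key, value) pairs."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle(pairs):
    """Group all emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user's reducer to each key's group of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# The classic word-count example:
lines = ["hadoop on lustre", "hadoop and mpi"]
mapper = lambda line: [(word, 1) for word in line.split()]
reducer = lambda key, values: sum(values)

counts = reduce_phase(shuffle(map_phase(lines, mapper)), reducer)
print(counts["hadoop"])  # 2
```

The fault tolerance comes from the framework, not the user code: because map and reduce tasks are side-effect-free functions over their inputs, a failed task can simply be re-run on another node.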