DDN has those enclosures too, I think at SC11 they said they'd be available second half of 2012.

edit: looking at DDN's site, theirs is 4U 84 drives. And maybe only for the SFA12K series.

I talked to a client recently that was using DDN boxes. They're using a similar form factor to Dell/EqualLogic/Xyratex for their higher-density 3.5" disk boxes. The bummer with DDN is their somewhat limited hardware/OS support.

Yeah, I'm going. Got my registration and travel plans put together fairly late, since funding was uncertain and I've been on the road a lot lately. A few workshops related to my intended thesis, though, so I took the plunge once money came through.

Meet up for drinks or some such while we're there?

Hopefully. Given that I'm the engineer from the company, I'm either going to have zero free time or a bunch of it. Given how SC11 went, I'm betting that lunch would be best.

Hopefully it's as much fun as it's been in previous years. Looks like the Beowulf Bash is still on, but with the DOE massively scaling back their participation (no booths, small fraction of the usual attendance) and the DOD having no official representation, this will be a much thinner event than usual.

Excuse me while I laugh my ass off that we get mentioned in the same breath as Titan. There's only about a 500x performance difference, in Titan's favor.

I'm assuming it was probably some marketing person who put the article up and missed the TF vs. PF difference, but come on, Appro is a supercomputing solutions provider; someone there should know.

Just in case someone catches on and updates the article "However, Titan may have some competition with another supercomputer being installed by RMIT University, La Trobe University and the Victorian Partnership for Advanced Computing (VPAC) in Australia, Computerworld reported."

Competition, sure; we might just be able to keep up with processing Titan's logs.

edit: Oh, and for reference, the Computerworld article doesn't mention Titan at all. It mentions a new computer being built in Australia that will be hitting 1.2 PF, so it's almost allowed to sit at the same table as Titan.

OK, so who at SC12 doesn't like Ars? A dig ANY query for arstechnica.com comes back with no results on the fixed network at SC. Everything else works fine. Come on, if you're going to block a site, block Slashdot at least ;-)

The DNS over wireless has had severe issues for all sorts of things. In bad areas I'm seeing Google sites drop out of connectivity as well. The workshop area (rooms 255 et al.) is particularly bad, but the cafe on the second floor, north end, is fine.

MilleniX: I think you might have to either give a few more hints about who you are, or come by the Australian HPC booth some time while I'm there. I've met BitPoet and chalex now; I need to catch up with you, and then maybe we can organise a bit of time to have a drink or two before my schedule is completely overrun. Thursday and Friday actually look pretty clear right now, but I fly out Friday around 7pm.

Stalking your posts a little bit, I saw you mention Charm++, and I know I should know who is working with that :-)

Sorry I missed you guys. My group was involved in a bunch of BoFs and such that needed my help, and a paper deadline during the conference.

Matt - I did ask after you at the Australian HPC booth, but the folks there didn't recognize your name. Different name IRL?

No, this is definitely my real name. I wonder if you got one of the iVEC folk; I only met them at the booth, and some of the Swinburne guys may not have known me that well either.

There were 6 groups represented at the booth, so there was a chance of getting someone who didn't know me, but the VLSCI girls knew me, and they were manning the booth most of the time. Did you get a Tim Tam or a kangaroo at least? :-)

1. We really have a grid as opposed to a cluster, and our computing needs might not really be classified as HPC. We use it for academic finance and marketing research. The most important aspect of our hardware is the speed at which a single-threaded calculation can run on a CPU, so inter-process communication isn't a concern at all, while memory bandwidth is fairly important. Even more important is the maximum amount of memory we can put in a single node without compromising the memory clock, as several of the programs run on our grid require the entire data set to be in memory. Network is fairly important as well, with bandwidth to the NAS-shared disks being the most important network aspect - we are upgrading from 1000BASE-T to 10G SFP+ twisted pair.

2. We run exclusively Red Hat Enterprise Linux. We have nodes that are RHEL5 and nodes that are RHEL6.

3. Currently we manage each node by hand over ssh (or ILOM on a bad day). There are only 7 nodes right now, but soon we will be doubling the size and will likely move to ROCKS.

4. Diskful and stateful. We boot the operating system and run programs (MATLAB/Stata/SAS/etc.) off of local disks. Everything else (user files/datasets) is stored on a NAS.

5. We recently switched from Sun Grid Engine to Univa Grid Engine. It has been a very pleasant experience, and I recommend it to anyone running SGE (Univa employs most of the people who originally wrote SGE, and it uses the same commands).
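Since the commands are shared, job scripts carry over unchanged too. A minimal sketch of one; the job name, binary, and data file here are made up for illustration, and resource names like h_vmem are site-configurable, so check yours:

```shell
#!/bin/bash
# Minimal Grid Engine job script; #$ lines are embedded qsub options,
# honored identically by SGE and Univa Grid Engine.
#$ -N demo_job           # job name (hypothetical)
#$ -cwd                  # run in the directory the job was submitted from
#$ -j y                  # merge stderr into stdout
#$ -o demo_job.out       # combined output file
#$ -l h_vmem=8G          # memory request; the resource name varies by site
./run_model --data full_dataset.bin   # hypothetical single-threaded program
```

Submit with qsub demo_job.sh and watch it with qstat, same as always.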

One of my coworkers is using it as a stand-in for a project she's working on. I don't know how closely it mirrors LSF, but it does the job (or at least what she needs it to do) and produces output equivalent to LSF's. Then again, her jobs are just various processes that sleep; no real "work" is being done.

I think she's moved on to working with Torque/Maui and Slurm and is trying to interface with them cleanly.
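For anyone wanting to poke at openlava the same way, the LSF-style commands it mirrors are enough for that kind of sleeper-job smoke test. A rough sketch (the job name and output filename are just examples):

```shell
# Submit a trivial sleeper job using LSF/openlava syntax.
bsub -J sleeper -o sleeper.%J.out "sleep 60"   # %J expands to the job ID
bjobs            # list pending/running jobs
bhist -l         # detailed job history, in LSF-compatible format
```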

I'll be demoing online fault tolerance using Charm++, our runtime system, at the Illinois PPL booth (#3506) today 1:30-3:30, and hosting a BoF on Charm++ Thursday 12:15-1:15. I'll also be at the Mellanox party tonight :-)

Speaking of openlava (the open-source LSF), a company (Teraproc) has backed the project and is helping with dev work. So openlava v3 (now with 100% more fairshare and preemption) is in beta and will be released soon. All the code is on the GitHub openlava project page, so you can compile your own build if need be. It's good to see this project take off, as the world can always use a free version of LSF.
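If you do roll your own, I believe the build is autotools-based, so something along these lines should be close; the install prefix and -j count are arbitrary, and you should check the repo's README for the exact steps for your version:

```shell
# Sketch of building openlava from a git checkout, assuming the usual
# autotools flow; consult the project's README for version-specific steps.
git clone https://github.com/openlava/openlava.git
cd openlava
autoreconf -i                       # regenerate configure from configure.ac
./configure --prefix=/opt/openlava  # install location is arbitrary
make -j4
sudo make install
```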